AI is on every board agenda. Yet across the conversations I’ve had with directors and executives this year, one question comes up more than any other: where do we start?
The confusion isn’t about technology. It’s about accountability. Most leaders are still asking whether they need an AI strategy, when they should be asking what questions to put to their teams. Governance begins with the right questions.
1. Start with strategy, not technology
The first conversation about AI should be strategic, not technical.
Ask:
- What are we trying to achieve or improve with AI?
- Where does AI genuinely align with our mission and long-term goals?
- What happens if we move too slowly, or too fast?
If AI is treated as a series of pilots or experiments, it will stay disconnected from business value. Strategy sets the frame for responsible innovation.
2. Make ownership explicit
Every board should be able to answer a simple question: who owns AI in our organisation?
If responsibility sits everywhere, it effectively sits nowhere. Someone needs to be accountable for oversight, risk, and outcomes. Whether that’s the CEO, CIO, or a dedicated AI lead, ownership must be clear.
You wouldn’t delegate financial control without an audit trail. AI should be no different.
3. Examine the risks beyond data privacy
Data privacy is important, but it’s only the surface layer.
Boards should be asking:
- Could our use of AI expose us to legal, reputational, or ethical risk?
- Do we actually know where AI is being used across the organisation, including unofficial tools?
- What checks exist to detect and correct errors, bias, or drift over time?
An AI risk register should now sit alongside your financial and operational risk registers. Visibility is the precondition for governance.
4. Test alignment with your values
AI adoption is not values-neutral. Every model and dataset embeds assumptions about fairness, transparency, and trade-offs.
Leaders should be asking:
- What do we mean by “ethical use of AI” in our organisation?
- Who decides where the boundaries are?
- How do we protect the people affected by AI decisions—clients, students, staff, or communities?
Values cannot be delegated to algorithms. They have to be actively governed.
5. Define how you’ll measure success
AI should be evaluated with the same discipline as any major strategic investment.
Ask:
- What outcomes matter most: productivity, accuracy, quality, equity, or all of these?
- How and when will we review performance?
- What is our process for learning from mistakes and refining our approach?
Without measurement, it’s impossible to know whether AI is delivering value or simply creating new forms of risk.
The mindset shift: disciplined curiosity
Boards that lead well in this space do not claim to have the answers. They have the discipline to ask sharper questions and the humility to keep learning.
AI literacy in governance isn’t about coding or prompting. It’s about curiosity, oversight, and accountability. The best directors I’ve seen are building that muscle now.
If your board or executive team wants to strengthen its AI governance capability, start with the questions. Everything else follows.
From questions to action
Strong governance begins with good questions, but it cannot stop there.
Boards and executives need a way to translate those questions into concrete policies, accountability structures, and measurable outcomes.
To help leaders move from awareness to action, I’ve developed practical white papers and templates that guide organisations through the early stages of AI adoption and governance.
You can access the AI for NFPs White Paper—and soon the SME version—via the Resources page.