AI Governance for Leaders | Capability @ Lunch Recap

AI Governance

It can feel like artificial intelligence (AI) has arrived overnight, and organisations are already scrambling to keep up. There is pressure to move quickly, adopt new tools, and unlock efficiencies.

But in the rush to get started, one critical piece is often missing: clear governance. Without it, organisations risk using AI in ways that are inconsistent, unstructured, and potentially harmful.

So where do you start?

That’s exactly what Kate Kolich unpacked in this month’s Capability @ Lunch session, opening the 2026 series with a practical look at AI governance for leaders.


As AI becomes more embedded in our everyday lives, Kate highlighted the importance of asking critical questions: Who controls the data? Who designs the algorithms? What narratives are being created or erased? The future is not predetermined, and with the right governance in place, leaders can guide how AI is used, responsibly and intentionally.

A Brief History of AI

Kate kicked off the session by highlighting that, despite how it may feel, AI is not new; it has been evolving since the 1950s.

The release of ChatGPT in November 2022 marked a turning point, reaching 100 million users in just two months. She explained that what changed was not just the speed of adoption, but the nature of the technology itself: the shift to generative AI.

ChatGPT Adoption Graph, World of Statistics.

Unlike traditional machine learning, which is trained on narrower, more structured datasets to recognise patterns and produce predictable outputs, generative AI learns from vast and diverse amounts of data, giving it the ability to create entirely new and original outputs rather than simply classifying or predicting within a defined range. That leap in capability brings a corresponding increase in complexity, especially when it comes to governance.

Kate unpacked three key components of the generative AI stack, noting that while the full landscape is broader, understanding these three enables leaders to make informed decisions about both the opportunities and the governance requirements they will encounter:

  • Large Language Models (LLMs), the foundation layer.
    • AI models trained on large amounts of text data, giving them the ability to generate and understand language. LLMs power the tools many of us are already using, such as ChatGPT, Microsoft Copilot, and Claude, enabling everyday tasks like drafting emails, summarising documents, and creating reports.
  • Retrieval-Augmented Generation (RAG).
    • A technique that supplements an LLM with an organisation’s own knowledge, so responses are grounded in actual organisational content rather than generalisations. When implemented with appropriate access controls, data residency settings, and no-training clauses, it significantly reduces the risk of inaccurate responses, or hallucinations (see the sketch after this list).
  • Agentic AI, systems that can take autonomous action toward a defined goal. Unlike an LLM that responds to a single prompt, agentic AI can perform multi-step tasks, such as triaging enquiries, updating records, or managing parts of a workflow. It is the fastest-evolving layer of the stack and, as Kate noted, the one that requires the most considered governance approach.
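
To make the RAG pattern concrete, here is a minimal sketch in Python. It is an illustration only, not a production implementation: the retrieval step is a toy keyword-overlap ranking, and `generate` is a placeholder for whichever approved LLM client your organisation uses.

```python
# Minimal RAG sketch: retrieve organisational content, then ground the
# model's answer in it. Toy retrieval only; swap in real vector search
# and your approved LLM client in practice.
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by crude keyword overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: -len(q_words & set(d.lower().split())))
    return ranked[:k]

def answer(query: str, documents: list[str], generate) -> str:
    """Build a grounded prompt from retrieved content, then generate."""
    context = "\n".join(retrieve(query, documents))
    prompt = (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say so.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return generate(prompt)  # placeholder for your approved LLM call
```

The instruction to answer only from the supplied context is what grounds responses in organisational content and reduces hallucinations; access controls and data residency still have to be enforced in the surrounding systems.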

As capability increases, so does the need for responsible governance.

AI Governance in Action

Most organisations say they are committed to responsible AI, but few have translated that into action. Kate highlighted this as the ambition–practice gap, and outlined how closing this gap requires more than good intentions or vague principles. It requires embedding AI into organisational strategy with clear ownership, accountability, and measurable outcomes.

In practice, that means defining an AI vision, understanding your organisation’s risk appetite, building governance before you scale, setting clear AI policies and expectations, and measuring ROI in a meaningful way.

AI Governance at Different Leadership Levels, Kate Kolich.

Understanding Risk Appetite

Before you can govern AI effectively, you need to be clear on how much risk you are willing to take. As Kate put it, the role of responsible AI leadership is not to understand every technical detail, but to define the boundaries.

Kate outlined that one of the most practical tools available to leadership is a Risk Appetite Statement, a structured way to capture and communicate the boundaries within which AI will be used across the organisation. That means answering key questions:

  • What AI use cases align with our values, and what is off limits?
  • What level of risk are we prepared to accept?
  • Where must humans remain involved?
  • What data should never be used in AI systems?
  • How will we know if our risk appetite needs to change?

Done well, a risk appetite statement moves beyond abstract principles to guide how AI is actually used in practice. It empowers the organisation to move forward with confidence, providing teams with clear direction on how AI should and should not be used, and giving leadership a shared framework for making consistent decisions as the technology continues to evolve.
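
One way to make such a statement operational, sketched here purely as an illustration, is to encode its boundaries as machine-readable policy that systems and reviewers can check against. Every field name and value below is a hypothetical example, not a template from the session.

```python
# Illustrative sketch: a risk appetite statement as checkable policy.
# All fields and values are hypothetical examples.
from dataclasses import dataclass

@dataclass
class RiskAppetiteStatement:
    approved_use_cases: set[str]       # use cases aligned with our values
    prohibited_use_cases: set[str]     # off limits regardless of benefit
    human_approval_required: set[str]  # where humans must remain involved
    restricted_data: set[str]          # data never to enter AI systems
    review_cycle_months: int = 6       # trigger to revisit the appetite

    def permits(self, use_case: str, data_classes: set[str]) -> bool:
        """Proceed only if the use case is approved and touches no restricted data."""
        return (
            use_case in self.approved_use_cases
            and not data_classes & self.restricted_data
        )

appetite = RiskAppetiteStatement(
    approved_use_cases={"drafting", "summarisation"},
    prohibited_use_cases={"automated hiring decisions"},
    human_approval_required={"external publication"},
    restricted_data={"health records", "payroll"},
)

print(appetite.permits("summarisation", {"meeting notes"}))   # True
print(appetite.permits("summarisation", {"health records"}))  # False
```

The point of the sketch is that the answers to the five questions above become explicit, checkable data rather than sentiment in a slide deck.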

Turning Governance Into Action

Once your risk appetite is defined, the next step is turning governance into clear, actionable steps that people can consistently follow.

Responsible AI adoption starts with enabling AI literacy across the organisation. Kate emphasised that this is not a one-off training exercise, but an ongoing commitment to helping people understand how to use AI tools effectively, safely, and in a way that reflects the organisation’s values. 

One practical approach Kate shared was an accreditation-based system, where access to AI tools is tied to capability, not job title.

Think of it as moving through levels:

  • Foundation level: understanding acceptable use, data handling, and basic safety. Access to low risk, approved tools such as internal AI productivity tools.
  • Practitioner level: applying AI in workflows, including prompt design and more complex use cases such as analysing survey data or drafting reports.
  • Champion level: contributing to governance, piloting new tools, actively identifying opportunities to apply AI to optimise processes and ways of working, and helping shape how AI is adopted responsibly across the organisation.

What makes this approach effective is that capability determines access. Tools are only available to those who have demonstrated a solid understanding of how to use them responsibly. This shifts governance from theory into something that actively guides behaviour in practice.
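
As a rough illustration of how capability-gated access could be wired up, consider the sketch below. The level names, tools, and thresholds are assumptions for this example, not a prescribed scheme from the session.

```python
# Hypothetical sketch of accreditation-gated tool access:
# demonstrated capability, not job title, determines access.
from enum import IntEnum

class Accreditation(IntEnum):
    NONE = 0
    FOUNDATION = 1
    PRACTITIONER = 2
    CHAMPION = 3

# Minimum accreditation required per tool (illustrative names only).
TOOL_REQUIREMENTS = {
    "internal_copilot": Accreditation.FOUNDATION,
    "survey_analysis": Accreditation.PRACTITIONER,
    "pilot_agent": Accreditation.CHAMPION,
}

def can_access(user_level: Accreditation, tool: str) -> bool:
    """Grant access only at or above the tool's required level."""
    required = TOOL_REQUIREMENTS.get(tool)
    return required is not None and user_level >= required

assert can_access(Accreditation.PRACTITIONER, "survey_analysis")
assert not can_access(Accreditation.FOUNDATION, "pilot_agent")
```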

Keeping humans in the loop

A consistent theme throughout the session was the importance of keeping a knowledgeable human in the loop. Kate noted that using AI ethically and responsibly is not about slowing adoption or holding back innovation, but ensuring the right human expertise is applied at key points.

She identified four areas where this matters most:

  • Design and training: Expert human involvement is critical at this stage to identify and reduce bias, and to ensure systems are built on appropriate, high-quality contextualised data.
  • Day-to-day use of AI tools: People need to stay actively engaged, questioning outputs and avoiding automation bias or over-reliance on AI-generated content.
  • Decision making: AI can inform and support decision making, but accountability must always rest with a person. This is particularly important where outcomes could adversely impact people or carry reputational or regulatory risk.
  • Autonomous AI systems: Clear guardrails and oversight are essential to ensure systems act within defined boundaries and align with organisational expectations.

As a rule of thumb, Kate suggested that any decision that is irreversible should require human approval, and if it involves confidential information, it must follow information governance processes and remain within approved systems.
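
That rule of thumb translates naturally into a guardrail. The sketch below is a hypothetical illustration of the idea; the function and field names are invented for this example.

```python
# Illustrative guardrail for Kate's rule of thumb: irreversible actions
# pause for human sign-off, and anything confidential stays within
# approved information governance channels.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    irreversible: bool
    uses_confidential_data: bool

def gate(action: ProposedAction, human_approved: bool = False) -> str:
    if action.uses_confidential_data:
        return "route via information governance process, approved systems only"
    if action.irreversible and not human_approved:
        return "blocked: awaiting human approval"
    return "proceed"

print(gate(ProposedAction("delete customer records", True, False)))
# blocked: awaiting human approval
```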

Kate also highlighted three principles that underpin responsible AI use in practice.

  1. Explainability, being able to articulate why an AI system produced a particular output.
  2. Traceability, maintaining a clear audit trail of how AI was used and what decisions it informed.
  3. Observability, the ongoing monitoring of AI system behaviour to detect when outputs are drifting or degrading over time.

Embedding these principles into how your organisation works with AI is what good AI stewardship looks like in practice, with designated people taking ongoing responsibility for monitoring and improving how AI is used across the organisation. Together, these move responsible AI from intention to practice.
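
As a loose illustration of what traceability and observability can look like in code, here is a thin wrapper that writes an audit trail for every AI call and watches a rolling quality signal for drift. The `call_model` argument, the quality score, and the threshold are all assumptions for the sketch, not a real library API.

```python
# Illustrative traceability + observability wrapper.
import json, time
from collections import deque

recent_scores = deque(maxlen=50)  # rolling window of output-quality scores

def governed_call(call_model, prompt: str, user: str, quality_score: float):
    response = call_model(prompt)  # placeholder for your model client
    # Traceability: record who asked what, when, and what came back.
    with open("ai_audit_log.jsonl", "a") as log:
        log.write(json.dumps({
            "time": time.time(), "user": user,
            "prompt": prompt, "response": response,
        }) + "\n")
    # Observability: flag when outputs appear to drift or degrade.
    recent_scores.append(quality_score)
    if len(recent_scores) == recent_scores.maxlen:
        average = sum(recent_scores) / len(recent_scores)
        if average < 0.7:  # illustrative threshold
            print("alert: rolling output quality has dropped, review the system")
    return response
```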

Measuring what matters

Strong governance does not slow organisations down; it enables better outcomes, but only when you are measuring what actually matters.

Common pitfalls include having no baseline for comparison, measuring activity instead of impact, overlooking hidden costs, and assuming pilot results will scale in real-world conditions.

Instead, focus on metrics that reflect real value:

  • Efficiency gains, such as reducing report writing from six hours to two.
  • Effectiveness, through improved accuracy or fewer revisions.
  • Risk reduction, through fewer compliance breaches or data handling errors.
  • Strategic value, through freeing up staff to focus on higher-value work or unlocking new insights from existing data.
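
To show why a baseline matters, here is the session's six-hours-to-two example worked through as a simple calculation. The volumes and cost figures are hypothetical, added only to make the arithmetic concrete.

```python
# Hypothetical worked example: impact against a baseline, net of tool costs.
baseline_hours = 6.0          # report writing before AI (from the session)
current_hours = 2.0           # report writing with AI (from the session)
reports_per_month = 20        # hypothetical volume
hourly_cost = 85.0            # hypothetical loaded labour cost (NZD)
tool_cost_per_month = 600.0   # hypothetical licence and hidden costs

hours_freed = (baseline_hours - current_hours) * reports_per_month
net_monthly_value = hours_freed * hourly_cost - tool_cost_per_month
print(f"Hours freed per month: {hours_freed:.0f}")          # 80
print(f"Net monthly value:     ${net_monthly_value:,.2f}")  # $6,200.00
```

Without the baseline there is nothing to compare against, and without the tool cost line the hidden costs the session warned about vanish from the picture.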

Where to from here?

Kate shared a simple 90-day starting point:

  • Days 1 to 30, define your risk appetite and audit current AI use
  • Days 30 to 60, formalise governance structures and policies
  • Days 60 to 90, launch governed use cases and begin measuring impact

Take what you learnt in this session further by attending Kate Kolich’s upcoming Responsible Data and AI Use course; visit the course page to find out more.


Further resources

If you’re looking to go deeper, there are a number of valuable resources supporting responsible data and AI use in Aotearoa New Zealand.

Kate Kolich’s article, IMHO: Setting an AI risk appetite, written for the Institute of Directors, explores how boards can lead with clear governance and accountability.

The New Zealand Government’s Responsible AI Guidance provides practical advice on governance, risk, and compliance, while the Institute of Directors offers principles to support effective board oversight.

The Māori Data Sovereignty principles, developed by Te Mana Raraunga, the Māori Data Sovereignty Network, provide an important foundation for AI governance that respects Māori data rights, upholds Te Tiriti obligations, and ensures AI is developed and used in ways that are fit for purpose in Aotearoa.

Organisations like the AI Forum New Zealand provide opportunities for training, collaboration, and staying up to date with emerging practice, and the Stats NZ Centre for Data Ethics and Innovation offers guidance on ethical and responsible AI use.


Contact us

We customise specific programmes for many New Zealand organisations – from short ‘in-house’ courses for employee groups, to executive education, or creating workshops within your existing programmes or events.