AI and the future of governance – part two, the human touch
19 February 2026
In the second part of his exploration of the role of AI in governance, Daniel Taylor makes the case for the kind of informed judgement that only humans can offer.
Points raised in this article:
- For all its analytical strengths, AI is fundamentally limited as a governance tool; the decision about what to do with all that analysis must be made by an experienced human practitioner.
- The future of governance reviews is not automated assurance, but augmented judgement.
- The real challenge is not whether organisations adopt AI, but whether they use the space it creates to think more deeply about outcomes, values, and power.
“And this our life, exempt from public haunt,
Finds tongues in trees, books in the running brooks.”
William Shakespeare, As You Like It
In the first part of this article, I made the case for the judicious use of artificial intelligence to enhance the practice of governance reviews, particularly around the assessment of regulatory compliance.
AI’s ability to analyse board assurance frameworks, committee terms of reference, risk registers, and policy suites against regulatory requirements and recognised good practice, and the speed with which it can test alignment between, for example, quality governance, financial governance, and workforce oversight, make it an invaluable tool that will undoubtedly reshape governance practice in complex organisations such as universities and NHS trusts.
But, as I argued in my previous article, the NHS also demonstrates the limitations of automation.
The most serious governance failures across the service in recent times have stemmed from dysfunctional dynamics, not poor policies. They have been less about what boards lack on paper and more about how they behave in practice: confusing oversight with control, allowing accountability to drift, discouraging challenge, and mistaking polished assurance for real insight.
Governance is not a dataset
Governance does not fail primarily because policies are missing clauses; it fails because boards misread their context, misunderstand risk, defer judgement, or lack the confidence to exercise authority well. These are dynamic, human failures rooted in culture, power, incentives, and identity.
This is why the King Code’s approach to governance remains so important. King V is not interested in governance as mechanical compliance, but as an ethical and strategic practice oriented towards value creation in the broadest sense: social, intellectual, and institutional. Outcomes, not artefacts, are the test.
AI can help us see more clearly, provided it is prompted and used with the understanding and expertise needed to diagnose what it surfaces.
It can even suggest courses of action based on pattern recognition and comparative insight. But it cannot implement change, build trust, or repair fractured relationships. Governance does not improve in isolation from leadership, culture, and capability. Improving governance without attending to the wider system risks creating technically sound arrangements that fail in practice. Expertise remains essential to translate insight into action.
The consultant’s role, redefined rather than removed
The future of governance reviews, then, is not automated assurance, but augmented judgement.
Expert consultants will increasingly work with AI systems that handle the heavy lifting of document analysis, regulatory mapping, and baseline assessment. This frees time and cognitive space for the work that only humans can do: interpreting context, facilitating difficult conversations, challenging assumptions, and helping boards understand not just whether they are compliant, but whether they are governing well.
In both higher education and the NHS, this means moving from fragmented, policy-by-policy responses to regulation towards coherent frameworks that link policy design, decision-making authority, operational enactment, cultural reinforcement, and board-level oversight.
Without this end-to-end view, compliance remains brittle and confidence illusory.
From provocation to practice
AI will raise the compliance baseline across public-purpose organisations – that is inevitable and welcome. But it will not resolve the deeper questions of purpose, authority, and judgement that sit at the heart of governance. If anything, by making compliance easier, it will expose even more starkly where governance capability is thin.
The real challenge is not whether organisations adopt AI in governance reviews, but whether they use the space it creates to think more deeply about outcomes, values, and power. That is where experienced governance consultants remain indispensable, not as auditors of paperwork, but as interpreters of systems, facilitators of judgement, and stewards of institutional integrity.
Robust governance in this new world demands a framework that runs from policy writing to policy enactment, from regulatory alignment to lived behaviour. It is at this intersection, between machine intelligence and human judgement, that the future of governance will be decided.
A future worth building
There is some gloom around AI – will its promise hold, will it radically alter our world for the worse rather than the better, will it make me, my job, my expertise irrelevant?
Used well, AI offers something genuinely hopeful for the world of governance. It creates the possibility of earlier diagnosis, faster learning, and more consistent access to high-quality insight. It can help reduce the cost of entry to expert support, allowing smaller, poorer, or more marginal organisations to benefit from governance intelligence that would previously have been out of reach.
It can support a shift away from episodic crisis-driven reviews towards continuous improvement, where governance arrangements are monitored, tested, and refined over time. In doing so, it offers the prospect of fewer governance failures, not because systems are perfect, but because they are better understood.
But this future is not inevitable. It will depend on AI being embedded within ethical frameworks, guided by expert judgement, and oriented towards learning rather than blame.
The machine can surface patterns. The mirror can reveal uncomfortable truths. But it is the human mind – curious, courageous, and accountable – that must decide what to do next.
In common with all GGi articles, this piece has been peer-reviewed by a second GGi expert.