AI Governance may not be sexy, but it needs addressing
by Aaron Flack on Mar 26, 2026

AI is already out of control in most organisations. Not because the technology is dangerous, but because leadership has allowed it to spread without ownership, rules, or visibility. Staff are pasting sensitive information into ChatGPT, Claude, Perplexity, Grok, and whatever comes next because it is fast, convenient, and unofficially encouraged through constant pressure to “move quicker”. Even where Microsoft Copilot exists, it is often licensed but not governed, which means the business is still guessing what data is being shared, what decisions are being shaped, and what risk is quietly becoming normal.
AI governance is not a nice-to-have; it is the only way to bring these issues under control.
What AI governance actually is
AI governance is the framework of controls that makes AI use accountable. It defines what is acceptable, assigns ownership, sets out how risks are assessed, governs how data is handled, shapes how decisions are made, and demands evidence that the controls work in practice rather than just on paper. Crucially, it sits above the tools and models themselves, so accountability survives whichever product the business adopts next.
A mature AI governance framework usually includes:
- An AI inventory: what tools, models, agents, and use cases exist, and why they exist
- Roles and responsibilities: who approves, who reviews, who owns the risk, who is accountable
- Risk assessment methods: consistent criteria tied to business impact
- Policies and guardrails: rules people can follow without becoming paralysed
- Review and change control: how use cases evolve without drifting into chaos
- Impact considerations: where AI affects people, decisions, customers, or sensitive processes
- Escalation routes: when something is too risky, who stops it, and what happens next
- Evidence and audit readiness: records that prove governance is being applied in practice
Done well, this strengthens the very outcomes boards prioritise: control, resilience, reputation, customer trust, and pace. ISO/IEC 42001 is already the reference standard here. It sets out how to establish and operate an artificial intelligence management system, including risk management and governance practices, giving businesses a defensible basis for whatever comes next.
The uncomfortable truth: AI governance is mostly a data problem
People keep treating AI governance like a model problem.
It is not.
The model is usually the least interesting part. The real risk sits in the data that flows into prompts, plug-ins, connectors, and chat histories.
Most organisations miss four data realities:
1) Staff are pasting sensitive information into consumer tools because the business made it easier than the “right” way
Shadow AI is a convenience response. People are being pushed to “be more productive” without being given the tools, training, or clear rules to do so. So they solve the problem themselves.
This is a leadership failure, not a user failure.
If leadership wants productivity gains, it has to own the risks that come with unmanaged AI use. Pretending otherwise is just outsourcing accountability to the least protected part of the organisation.
2) Data classification does not survive contact with prompts
Many organisations have data classification policies. Very few have classification behaviours that hold under time pressure, and an AI tool is one browser tab away.
If a user can paste contract terms, customer information, financial forecasts, or internal incident notes into a public model, the classification system has failed. The real question is not whether a policy exists; it is whether the operating model makes compliance the path of least resistance.
3) “Copilot is managed” is not the same as “Copilot is governed”
Licensing a tool does not ensure effective governance. Microsoft Copilot can be deployed to respect access controls and organisational boundaries, but this requires intentional configuration and connection. If not done properly, it risks exposing sensitive content to the wrong individuals at the wrong time and in the wrong context.
Boards should assume that if Copilot was rolled out quickly, governance gaps exist. The only responsible response is to verify the configuration rigorously; anything less is guesswork.
4) Most AI risk registers ignore what happens after adoption
Organisations often scrutinise AI carefully at the point of adoption, then let oversight fall away. After go-live:
- New use cases appear informally
- Prompt libraries get shared without review
- Data sources expand
- People build small automations that become business-critical
- Output gets used in customer-facing processes without validation
Without an AI inventory and routine review, none of this is visible until something breaks.
Why AI governance matters more than most cyber controls right now
Cybersecurity has developed robust practices over the decades, including patch management, phishing training, the principle of least privilege, and incident response. In contrast, the adoption of AI is still in its early stages and lacks the same level of discipline, even as it engages with sensitive information.
Cyber Essentials is a government-backed program that helps organisations defend against common online threats. In parallel, ISO/IEC 27001 is a globally recognised standard for information security management systems, centred on risk-based controls and continuous improvement.
AI governance does not replace these established frameworks; instead, it fills a critical gap they were never intended to address: the risks associated with human-initiated data disclosure to third-party models and the uncontrolled influence of AI-generated outputs on decision-making.
For this reason, AI governance ought to be prioritised on the board agenda, if it is not already.
The board questions that show whether governance exists or is just smoke
If the business has real AI governance, leadership should be able to answer these without hesitation:
- Which AI tools are being used across the organisation today, including unsanctioned tools?
- What categories of data are being placed into AI tools, and by whom?
- What is the approved list, and what happens when someone uses something else?
- Who signs off on AI use cases that touch customers, pricing, hiring, legal, or financial reporting?
- How is AI-related risk assessed, and what triggers escalation?
- What evidence exists that the controls are followed, not just written?
- If a customer asks how AI is governed, can the organisation answer confidently in one meeting?
If those answers are unclear, governance is not in place. It is wishful thinking with a policy header, and it will cost the business customers while exposing it to serious risk.
A practical view of AI governance aligned to ISO/IEC 42001, ISO/IEC 27001, and Cyber Essentials Plus
A governance system has to work in real life. A sensible approach looks like this:
Establish visibility first
Start with a mapped view of AI use, including shadow use. Without this, everything else is guessing.
Define ownership and accountability
AI governance fails when everyone is “involved” but nobody owns the decision. Clear roles fix that.
Create a repeatable risk method
Risk decisions should be consistent, not personality-driven. The method must tie to impact: data sensitivity, decision criticality, customer exposure, regulatory exposure, and operational dependency.
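To make that concrete, a repeatable method can be as simple as scoring every use case against the same impact dimensions and mapping the result to an action. The dimensions below mirror the ones listed above; the scoring scale, weights, and thresholds are illustrative assumptions, not a prescribed methodology:

```python
# Illustrative risk scoring: the same criteria applied to every AI use case.
# Dimension names follow the article; the thresholds are assumptions.
DIMENSIONS = [
    "data_sensitivity",
    "decision_criticality",
    "customer_exposure",
    "regulatory_exposure",
    "operational_dependency",
]

def assess(scores: dict[str, int]) -> str:
    """Each dimension is scored 0 (none) to 3 (high); returns an action."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        # Consistency means no dimension can be silently skipped.
        raise ValueError(f"unscored dimensions: {missing}")
    total = sum(scores[d] for d in DIMENSIONS)
    if total >= 10 or scores["data_sensitivity"] == 3:
        return "escalate"   # too risky: stop and refer upwards
    if total >= 5:
        return "review"     # needs sign-off before adoption
    return "approve"        # low impact: proceed with guardrails

# A chatbot summarising public marketing copy:
low_risk = assess({
    "data_sensitivity": 0, "decision_criticality": 1,
    "customer_exposure": 1, "regulatory_exposure": 0,
    "operational_dependency": 1,
})

# A model drafting customer contract terms:
high_risk = assess({
    "data_sensitivity": 3, "decision_criticality": 2,
    "customer_exposure": 3, "regulatory_exposure": 2,
    "operational_dependency": 1,
})
```

The point is not the specific numbers; it is that two different reviewers scoring the same use case reach the same decision, which is what "not personality-driven" means in practice.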
Put guardrails where behaviour happens
Policies matter, but controls matter more. Guardrails should be embedded into identity, access, device controls, approved tools, and training that is specific enough to be usable.
Build evidence as part of the process
Audit readiness is the natural byproduct of properly executed governance: inventories, approvals, reviews, exceptions, and rationale.
This aligns well with the intent of ISO/IEC 42001, which is to manage AI responsibly through a structured management system that includes risk assessment and risk treatment.
The regulatory landscape is moving quickly. Organisations need a clear view of how AI processes data, influences decisions, and upholds fairness, transparency, and accountability. The Information Commissioner’s Office has published explicit guidance on AI and data protection, clarifying how data protection law applies when AI systems handle personal data.
No board wants to explain to regulators, customers, or investors that employees were pasting sensitive information into public models because it was convenient. That is not a justification; it is negligence.
Where Conosco’s Managed AI Governance Support fits
Conosco’s Managed AI Governance Support is built for the reality most organisations face: AI is already in the business, and leadership needs control without killing momentum.
A Conosco AI Governance Workshop gives leadership a clear view of current AI use, the real data risks, and the controls required to make AI adoption safe and defensible.
Speak to our team about ISO/IEC 42001 and AI Governance in your business