
The AI Problem

by Aaron Flack on Mar 17, 2026



Artificial intelligence (AI) is no longer just a theoretical concept; it's now an integral part of our daily work lives. In organisations across the UK, people are engaging with a variety of AI tools, including copilots, chatbots, coding assistants, automated workflow solutions, and decision support systems. While some of these tools are officially approved, many employees are drawn to them simply for their convenience. That matters, because convenience-driven adoption spreads quickly, often leaving AI embedded in daily work before any governance framework is in place.

If you're in a leadership role, it's a common misconception to think of AI as merely a technology that can be adopted gradually or as a compliance requirement to address later. Unfortunately, this perspective doesn't reflect the reality of how AI functions in the workplace. It fundamentally changes the way decisions are made, how information circulates, and how work is processed. For you, understanding this shift is essential, as it directly impacts your bottom line, operational efficiency, customer satisfaction, and overall reputation.

Recent Deloitte research for 2026 shows that businesses are rapidly increasing their access to AI technologies, with many leaders eager to understand the return on their investments and the ethical implications of AI in practice. Similarly, a McKinsey survey from 2025 points to a critical insight: businesses that truly harness the power of AI have clearly defined processes and practices for its implementation and validation. It's not merely about having a larger budget or more sophisticated models; it's about establishing a strong framework that directs your use of AI.

So, what does this mean for you? Bridging the gap in AI adoption is about establishing effective governance that aligns with how AI is actually used in your business. By focusing on creating clear guidelines and processes, you can harness AI's full potential while ensuring it delivers real value to your business, enhances productivity, and fosters a culture of innovation. Embracing this approach can lead to substantial improvements in your operational success and competitive edge in the market.

AI is increasingly becoming a vital component of our operational model, and its integration across various functions presents both opportunities and challenges. For example, a support copilot can enhance customer outcomes and help manage complaint volumes, while a sales assistant can refine messaging and improve pricing strategies. Workflow automation streamlines approval processes and enhances documentation accuracy. CV screening tools not only impact hiring decisions but also have implications for internal trust. Additionally, financial forecasting tools can evolve into critical decision-making inputs that warrant thorough consideration. Code assistants can expedite software delivery, but they also necessitate vigilance to avoid introducing insecure practices into production.

The effectiveness of AI does not stem from the establishment of a formal "AI program," but rather from people's natural inclination to seek the most efficient path to achieving outcomes.

This reality highlights the need for a nuanced understanding that extends beyond the narrative of "just another SaaS tool." Unlike most SaaS tools, which behave predictably, AI can vary in its behaviour from day to day, even behind a consistent interface. Its outputs can be persuasive, sometimes incorrect, and challenging to detect quickly. As these tools become embedded in business processes, they influence organisational decision-making, making them a governance concern rather than a mere IT issue.

To foster responsible AI usage, it's essential to prioritise governance from the outset. Many organisations in sectors such as legal services are already widely utilising AI, yet they often lack formal AI policies. This gap can lead to sensitive data being entered into public tools while controls remain reactive. Recognising that AI risk can stem from subtle changes in workflow allows you to be proactive in managing these challenges.

It is also important to acknowledge that while IT departments can effectively manage platforms and access, they cannot be solely responsible for every business decision influenced by AI. Likewise, while legal teams can provide guidance on obligations, leadership must define what is considered "acceptable" across contexts, including customer interactions, credit decisions, and other critical areas of the business. These responsibilities should be embraced as part of the leadership's role within the operational model.

It is vital to have a comprehensive view of how AI is implemented within an organisation. The belief that "we are not using it in a high-risk way" may overlook the subtler forms of AI in mainstream tools, AI provided by suppliers, and individual use of public services.

Understanding that risk encompasses not just intent but also data pathways, influenced decisions, and the ability to provide evidence when needed will strengthen your governance and foster a more resilient operational framework.

Governance is becoming the price of speed

Procurement teams are proactively seeking to understand how suppliers implement AI, where data ends up, who is responsible for it, and how potential model misbehaviour will be handled. Internal audit functions now treat AI as a core accountability issue rather than a novelty. Boards are stepping up by asking essential questions: where is AI implemented, by which teams, what data is involved, and what would the consequences be if an output led to harm or regulatory scrutiny?

Simultaneously, regulatory landscapes are tightening. Businesses in the UK, too, will feel the ripple effects through their international operations, clients, and suppliers. With the European Union's AI Act now in effect and phased implementation underway, the urgency for organisations to comprehend and govern AI use throughout their supply chains and internal operations has never been greater. The Information Commissioner's Office has provided practical guidance and a risk toolkit on AI and data protection, reinforcing the principle that accountability must be demonstrated.

This is the moment where governance transforms from a hindrance into an organisational catalyst for progress. When leadership defines decision rights, sets standards for acceptable use, clarifies ownership, and establishes ways to evidence controls, teams can move quickly and confidently. In the absence of these foundations, momentum may turn into borrowed time; it may seem like progress until procurement stalls, a client demands reassurance, an audit raises flags, or an incident triggers a frantic response.

There's a profound commercial imperative at play. AI is increasingly integral to delivering consistent outcomes; when usage is inconsistent, outcomes vary dramatically. One team adeptly leverages a copilot to achieve exceptional results quickly, whereas another might mishandle it, producing results that, while plausible, require extensive rework. Instead of scaling productivity, the organisation is amplifying inconsistency.

This is why the most effective governance approaches emphasise active management over conventional policy documents. International frameworks like the NIST Artificial Intelligence Risk Management Framework organise AI risk around four functions, govern, map, measure, and manage, treating risk as a continuous journey rather than a one-off checklist. AI is being woven into the fabric of security, quality, and continuity.

The present state of AI is clear: adoption is on the rise, use is decentralising, and accountability for leadership is tightening. Businesses that view governance as optional may still engage with AI, but they will do so with unclear ownership, inconsistent controls, and fragile assurances. In contrast, those that prioritise robust governance will not only maintain their pace but will also eliminate uncertainty, reduce rework, and confidently address board inquiries with solid evidence.

AI is already embedded in the operating model. The question that remains is whether leadership can prove it is effectively under control.

Speak to our team about ISO 42001 and AI governance in your business

You might be interested in our portfolio of solutions