81% of companies have AI in production. Only 15% consider its governance effective.
That gap is not a minor detail. It is the core issue for any organisation that has scaled technology without scaling control alongside it.
When a data or AI initiative grows, complexity does not disappear. It shifts. It moves away from implementation and reappears where no one expected it: in who makes decisions, on what basis, and with what level of real understanding of what the system is doing.
We observed this in our recent survey. We asked where complexity re-emerges when scaling technology. The most common answer was not system integration or internal coordination. It was governance and risk control.
The problem is not technical. It is governance
What happens when AI is scaled without control
Governance failures rarely occur during the pilot phase. They arise once the system is already in production, under operational pressure, when teams assume the model works simply because no one has said otherwise.
The Apple Card, issued by Goldman Sachs, is an illustrative case. Its scoring model systematically assigned lower credit limits to women whose financial profiles were equivalent to those of men. This was not a technical performance issue: no one could explain how the model worked or demonstrate the absence of bias. In practice, the organisation lacked the controls needed to know.
This is not an isolated case. 97% of organisations that have experienced AI-related incidents lacked specific access controls for these systems. 63% had no formal governance policy at all.
The pattern is always the same: technical capability grows faster than the ability to govern it.
Regulation is already responding to this gap
The European AI Act, DORA and the Basel frameworks are not theoretical responses to future risks. They are regulatory reactions to failures that have already occurred. They require traceability, explainability, continuous monitoring and clear accountability for every decision a model automates.
For organisations in banking, insurance or retail, this translates into a clear reality: governing data and models is no longer an optional architectural decision. It is an operational and regulatory requirement, with direct implications for audits, sanctions and reputation.
Governance is not about slowing down. It is about knowing
Governance is not about slowing down. It is about knowing exactly what is happening when systems move fast.
Organisations that do this well do not have less technology. They have greater clarity on how that technology makes decisions, who oversees it, and what happens when something goes wrong. That clarity is not built afterwards. It is designed before scaling.
The question is not whether your organisation has governance. It is whether that governance grows at the same pace as the system it governs.
AI Governance FAQs
Why is AI governance important?
AI governance enables organisations to control how models make decisions, reduce operational risk, and comply with regulations such as the EU AI Act and DORA.
What happens when AI is scaled without governance?
Complexity shifts to decision-making, increasing the risk of errors, bias, and lack of control over models in production.
What does AI regulation in Europe require?
It requires traceability, explainability, continuous monitoring, and clear accountability for automated decisions.