The AI Governance Vacuum



When OpenAI launched Frontier this month—offering to treat AI agents like employees with identities and permissions—it landed differently in medium-sized companies. You do not have a dedicated AI governance team. You have an operations manager who also handles data protection, an MD who reads about AI between meetings, and perhaps an external IT provider who mentions "agentic workflows" without explaining what that means for your ISO 27001 audit.


Within days, industry reports confirmed what you probably suspected: eighty-eight percent of organisations deploying agents had experienced security incidents this year. Not because the technology failed. Because the clarity to specify what these systems should do—and where the boundaries lie—was missing.


This is the governance vacuum. It is not a problem reserved for large enterprises with board-level risk committees. It is a structural gap that opens whenever capability arrives faster than the clarity to use it safely.


I have observed how medium-sized companies respond to this moment. Not all responses work.


The Posture of Delay


Some organisations decide to wait. The reasoning is practical: without dedicated compliance resource or in-house AI expertise, the safest path is to observe how larger players navigate the risks. GDPR already demands careful handling of automated decision-making. ISO 27001 requires documented controls. Adding autonomous agents without understanding the implications feels like borrowing trouble.


This posture preserves your existing compliance position. It prevents premature commitment to architectures that may not fit your risk profile. The limitation is that agentic capability is becoming available to competitors who move with clarity. Constrain yourself too heavily and you risk ceding ground not because you lack capability, but because you lack the clarity to deploy it proportionately.


Organisations that choose this path are not wrong. They are waiting for governance they can implement. The question is whether waiting produces clarity or merely deferred competition.


The Posture of Trust


Other organisations accept what the platform provides. The vendor manages permissions; the agents operate within those constraints; governance becomes a feature you tick rather than a design you specify. This posture delegates architectural questions to product documentation and hope.


The risk is insidious. Platforms like Frontier are well-designed for enterprise scale. But their default governance assumptions may not match your operational reality. What constitutes an auditable action under ISO 27001? Where does processing occur relative to your GDPR territorial obligations? What happens when the model updates and its behaviour shifts in ways your information security register does not anticipate? These questions yield to operational clarity, not to vendor assurances.


Trust without clarity produces governance that feels sufficient until your next audit—or until an incident requires you to explain who specified the agent's authority. The eighty-eight percent incident rate suggests this gap is common across organisations of every size.


The Posture of Specification


A third response is emerging in medium-sized companies with limited resource but clear operational understanding. These organisations pause before platform selection to specify what autonomy means for their particular context. They design policy envelopes—bounded authority, explicit constraints, kill switches and audit trails—before the agents wake. They understand that accountability must attach to specification, not to outcome.


This is not enterprise-grade architecture. It is proportionate preparation that enables deployment. An agent operating within a clearly specified boundary can be granted genuine autonomy without requiring continuous oversight. Your ISO 27001 controls already require documented decision-making; this extends that discipline to autonomous systems. The eighty-eight percent incident rate is what deployment looks like when this specification step is skipped.
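To make the idea concrete: a policy envelope can be as simple as a small, reviewable object that an agent must consult before every action. The sketch below is illustrative only — the names (`PolicyEnvelope`, `max_spend_gbp`, and so on) are invented for this example and are not any platform's API. It shows the three ingredients named above: bounded authority, an explicit kill switch, and an audit trail.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class PolicyEnvelope:
    """Illustrative policy envelope (hypothetical names, not a real API).

    The point: authority, constraints, the kill switch, and the audit
    trail are specified before the agent acts, not reconstructed after.
    """
    allowed_actions: set[str]          # bounded authority: an explicit allow-list
    max_spend_gbp: float               # an explicit numeric constraint
    accountable_owner: str             # who specified this envelope
    kill_switch_engaged: bool = False  # one flag halts everything
    audit_trail: list[dict] = field(default_factory=list)

    def authorise(self, action: str, spend_gbp: float = 0.0) -> bool:
        """Record every request, then permit only what the envelope allows."""
        permitted = (
            not self.kill_switch_engaged
            and action in self.allowed_actions
            and spend_gbp <= self.max_spend_gbp
        )
        self.audit_trail.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "spend_gbp": spend_gbp,
            "permitted": permitted,
            "owner": self.accountable_owner,
        })
        return permitted


# Usage: the agent asks the envelope; the envelope never asks the agent.
envelope = PolicyEnvelope(
    allowed_actions={"draft_email", "update_crm_record"},
    max_spend_gbp=0.0,
    accountable_owner="operations-manager",
)
assert envelope.authorise("draft_email")
assert not envelope.authorise("issue_refund", spend_gbp=50.0)  # outside the boundary
envelope.kill_switch_engaged = True
assert not envelope.authorise("draft_email")  # kill switch overrides everything
```

Note that refusals are logged as faithfully as approvals: when an auditor asks who specified the agent's authority, the answer is a field in the record, not a reconstruction.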


What Clarity Reveals


The governance vacuum is not a warning about dangerous technology. It is a description of a missing step in organisational design. Autonomous agents are becoming reliable enough that technical constraint is no longer the limiting factor for medium-sized companies. The limiting factor is the clarity to specify what they should do, where the boundaries lie, and who carries accountability for the design.


This shifts the question from *can we afford to deploy?* to *what are we deploying into?*


Your regulatory obligations are already clear. GDPR requires transparency about automated decision-making and the right to human intervention. ISO 27001 demands documented information security controls and risk assessment. The EU AI Act, with obligations taking effect from August 2025 and comprehensive standards by August 2026, will classify many autonomous agent deployments as high-risk—requiring conformity assessment and post-market monitoring. The UK approach through its five principles—safety, transparency, fairness, accountability, and contestability—gives you a framework even as specific regulation evolves.


What this clarifies is that autonomy does not reduce your accountability. It concentrates it onto whoever specified the system's authority. This is not a deterrent. It is a prompt to be deliberate about that specification. The organisations that treat this as an opportunity are designing for proportionate advantage rather than waiting for permission.


OpenAI Frontier and similar platforms make agentic capability available to any organisation with the clarity to use it. The value flows not to those with the largest budgets, but to those who have specified their operational boundaries, their error tolerance, their data constraints, and their accountability structures. This specification is disciplined work. It happens before deployment. It enables capability within your existing governance framework rather than requiring you to build one from scratch.


The Useful Question


The organisations that will benefit most from this third posture are not those with dedicated AI teams. They are those with the clarity to specify what autonomy means in their specific operational context.


This clarity is not theoretical. It is practical and documented. What customer or proprietary information will agents access? Where will processing occur—within your existing infrastructure and GDPR territorial boundaries, or in jurisdictions that create additional compliance exposure? What happens when models update? What constitutes an auditable action under your ISO 27001 controls? Who in your organisation is accountable for the design decision that produced the outcome?


These questions yield to examination. They produce specifications that platforms can implement and that your existing governance can accommodate. They enable the posture of specification rather than the postures of delay or trust.


The governance vacuum is closing. Platforms are arriving. Regulation is clarifying. The remaining question is whether your organisation has specified what governance should look like before the tools arrive to implement something else.


Autonomous agents are reaching reliability thresholds where the technical question is solved for organisations of your size. The operational question remains: what do you want them to do, and where do the boundaries lie?


Clarity on this question is what enables the capability within your existing compliance framework. Without it, you have purchased autonomy you cannot safely deploy or explain to an auditor. With it, you have specification that enables proportionate speed.


What does your operational clarity reveal about how autonomous agents could function in your environment?


 
 
 

© 2026 NakedAI. All rights reserved.
