Speculative Procurement: When Motion Masquerades as Readiness
- Malcolm Maxwell
- Mar 16
- 5 min read

I have watched organisations buy technology for years, and I have learned to recognise the moment when activity becomes a substitute for clarity. The pressure to move often comes from the right instincts: competitive alertness, fiduciary responsibility, genuine curiosity about what is possible. But good intentions do not prevent predictable mistakes, and there is a sequence worth recognising before you find yourself inside it.
The Moment Activity Replaces Clarity
It is rarely announced. There is no deliberate deception. But there is a familiar sequence, and it is playing out now in AI procurement with remarkable consistency.
A board becomes interested. A leadership team becomes alert. A vendor becomes available. A budget line becomes possible. And suddenly an organisation feels that it is moving.
When Procurement Formalises Ambiguity
This is often described as progress. It is not always progress. Quite often it is the point at which an organisation, having failed to resolve its own ambiguity, begins to waste time and money.
I should be precise here. This is not experimentation. Not learning. Not a bounded pilot. Those are different things entirely.
This is speculative procurement.
The Real Question Behind AI Adoption
The question is not whether to adopt AI or defer it. The question is whether your organisation has achieved sufficient clarity about its own operations to specify what capability should actually do. Most AI waste does not begin with a weak model. It begins earlier, when a firm that has not yet decided how it works starts buying technologies that assume it already has.
Here is the truth concealed by the procurement process: organisations think they are buying a tool. Usually they are not. They are buying an intervention into an operating environment they do not properly understand.
Capability Language vs Operational Reality
That is why so many AI conversations become theatrical. The language is full of capability, acceleration, automation, transformation. It all sounds energetic. None of it tells you very much. A capability claim on its own is strategically thin. The relevant question is not whether a system can do something interesting. It is whether the organisation has defined the environment into which that capability is being introduced.
What is the task? Where does judgment sit? What constitutes error? Which forms of inconsistency are tolerable? Who remains answerable when outputs are wrong, unstable, or opaque?
Without answers to these questions, the organisation is not procuring capability. It is importing possibility into ambiguity and hoping the ambiguity resolves itself on contact.
It will not.
I have observed this pattern repeatedly. AI does not enter a vacuum. It enters authority structures, review processes, data boundaries, informal workarounds, and unresolved organisational politics. If those are unclear, the system does not remove that ambiguity. It inherits it, amplifies it, and begins producing outputs inside it.
The Need for Clarity
This is why clarity work matters. Clarity is not a preliminary discussion held before the real decisions begin. It is the real decision-making process by which an organisation becomes legible enough to intervene in. A clarity session is not a pause before action. It is the point at which action stops being speculative.
The opposite mistake is also common. Some organisations hear criticism of premature buying and conclude that seriousness means postponement. It does not.
Experimentation is necessary. In many cases it is the only serious way to learn. But experimentation and speculative procurement are not the same thing. A disciplined experiment is designed to reduce ambiguity. Speculative procurement is what happens when ambiguity is left unresolved and then given a contract, a licence, or a platform.
The difference is not whether money is spent. The difference is whether the spend is attached to a defined learning objective, a bounded task, a clear owner, and a threshold for continuation, redesign, or termination.
The Real Constraint
Many organisations still behave as though their main problem is access to AI. It is not. Access is increasingly trivial. The market has made tools easier to obtain than judgment. The constraint has shifted. What most organisations now lack is not access, but prior clarity.
What exactly should be delegated? What must remain reviewed? What level of variance is acceptable? What data should never cross a given boundary? What would constitute a real operating gain rather than a local novelty?
These are not secondary questions. They are the real questions.
AI Is an Organisational Mirror
AI does not merely automate work. It exposes the extent to which a firm has specified its own work in the first place. Where workflows are real, bounded, and understood, AI can create leverage. Where responsibilities are blurred and judgment is unevenly distributed, AI does not create coherence. It accelerates production inside incoherence.
That is why some deployments feel superficially successful and strategically empty. A team can generate more notes, more summaries, more recommendations. But if the organisation has not defined what those outputs are for, increased production may simply mean accelerated ambiguity.
The Hope Behind Early Buying
The hidden fantasy underneath most speculative procurement is that clarity will arrive downstream. Let us buy now. Let us get moving. Clarity will emerge through use.
Sometimes it does. More often what emerges through use is politics. Teams defend their chosen tools. Local workarounds harden into norms. Governance arrives late and is treated as obstruction. Commercial commitments become harder to unwind. What looked like a technology initiative reveals itself as an unstructured redesign of authority.
The Organisational Problem
At that point the original failure becomes easier to see. The organisation did not lack ambition. It lacked prior agreement about what sort of institution it was trying to become.
AI did not create that problem. It made it legible, and then expensive.
The firms that benefit from AI are usually clearer before they are louder. They are not always the ones with the strongest signalling or the largest budgets. They are the ones that have made parts of the organisation legible enough for intervention. They know where judgment sits and where it should not be replaced. They know what a useful output is. They know the difference between experimentation that teaches and adoption that drifts.
Why Clarity Matters
That is why clarity matters in the stronger sense of the term. It is not managerial tidiness. It is not caution dressed up as thoughtfulness. It is an operating asset.
Once an organisation is clear, it can test better, buy better, govern better, reject faster, and scale with less self-deception. Clarity does not slow movement. It is what makes movement cumulative.
Without it, AI becomes one more mechanism by which a firm confuses visible activity with actual readiness.
The question before procurement is not whether to use AI. That question is already too broad to be useful. Nor is the right question which platform to buy. That merely assumes the organisation has already earned the right to ask the market for solutions.
That Old Leadership Question
The better question is less flattering and more serious: *What have we made clear enough inside this organisation that a technological intervention can now be specified against it?*
That may lead to a pilot. It may lead to a limited purchase. It may lead to the recognition that the next requirement is not procurement at all, but the harder internal work of deciding how the organisation thinks, delegates, reviews, and governs.
That should not be treated as delay. It is often the first serious commercial move available.
AI advantage does not begin when a contract is signed. It begins earlier, when an organisation becomes clear enough to know what should be tested, what should be bought, and what should be left alone.
Until then, procurement is often just a way of giving institutional uncertainty a budget.