The Third AI Paradigm
- Malcolm Maxwell
- Feb 12
- 7 min read

I've been watching artificial intelligence evolve since the early 1990s, and I've learned to recognise the noise that builds when a genuine shift is approaching. The experts who were wrong last time grow loud with warnings. The experts who were right last time grow quiet—busy mapping the new territory onto the old map, reassuring themselves that the fundamentals hold. And meanwhile a new crowd arrives, breathless, certain that nothing like this has happened before. The only ones who seem to hear through the noise are the practitioners who simply used what worked and kept their eyes open. They are not asking whether this is real. They are asking whether it is ready.
The first decade and a half of my observation has been thoroughly overtaken by events. ELIZA—that sympathetic psychotherapist who turned out to be a few hundred lines of pattern-matching code. Expert Systems, which turned out neither to be Expert nor Systems. Neural networks that remained impractical curiosities until, ironically, adolescent gamers told by parents to "get a job" funded the GPU revolution that made AI viable.
But since that inflection point, I have seen three distinct paradigm shifts in AI capability, each as fundamental as the last. What interests me is the pattern of institutional response—the way each has been dismissed by precisely those who failed to recognise the previous one. Let me describe these three shifts, and why I believe most organisations are structurally unprepared for what the third one requires.
This is not a technical discussion. This is about as high-level as I can get while still remaining tethered to practical reality.
The first paradigm was autocomplete and suggestion engines. These early systems were sophisticated pattern matchers, completing sentences without comprehending them. Impressive technical achievements, certainly. Even now they can be irritating—the way they finish your thoughts before you've fully had them, often incorrectly. And yet I would wager that most of those reading this use autocomplete at least daily. We have grown accustomed to the irritation, as we grow accustomed to many irritations, because the alternative—typing entire words ourselves—suddenly seems a burden.
But the systems were limited, and their limitations were obvious. For enthusiasts, they were useful within narrow bounds. For skeptics, they seemed like parlour tricks—clever demonstrations that impressed at conferences but didn't transform business operations.
The second paradigm arrived with the chatbot. This transformed token prediction into something resembling a two-way conversation conducted through a messaging interface. We have travelled some distance since GPT-2, the first model to attract attention beyond research laboratories. We are now at GPT-5.3, Opus, Sonnet, Kimi, DeepSeek—the names multiply. These systems are useful to many people. But useful, I must insist, is not identical to helpful in a business context. The distinction matters, and it gets lost in the general noise.
What chatbots accomplished was making AI accessible to non-technical users. I initially regarded chatbots as incremental improvements. They proved to be transformative in ways that became clear only through use.
They hallucinated, certainly—asserting facts with confidence, citing sources that did not exist. But users who extracted value from them considered hallucinations a manageable cost for what was, effectively, messaging with an intelligence that could search the web, identify problems in documents, draft correspondence. The hallucinations were caught at the human step. You remained, as engineers say, in the loop.
This defines the second paradigm: the handoff. Human to AI, AI to human, back and forth. If something goes wrong, the consequences feel contained. Perhaps you have wasted a few minutes and gained ten.
Add to this the increasing reliability in reasoning, tool use, and what the industry terms "agentic modelling"—which is simply a fashionable name for sequential process execution, where each step follows from the previous. Suddenly we have a genuinely powerful tool.
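At the risk of breaking my own promise to stay non-technical, a toy sketch shows how unexotic this is. None of the names below belong to a real library; `call_model` is a hypothetical stand-in for whichever model API you use.

```python
# A toy sketch of "agentic" sequential process execution.
# Everything here is illustrative; call_model is a hypothetical
# placeholder for any chat-completion API.

def call_model(prompt: str) -> str:
    """Stand-in for a real model call; wire this to your API of choice."""
    raise NotImplementedError

def run_pipeline(task: str) -> str:
    steps = [
        "Break this task into a numbered plan:\n{}",
        "Carry out the plan step by step, showing your work:\n{}",
        "Review the result for errors and return a corrected version:\n{}",
    ]
    result = task
    for template in steps:
        # Each step consumes the previous step's output. This chaining,
        # and nothing more exotic, is what the "agentic" label describes.
        result = call_model(template.format(result))
    return result
```

The point of the sketch is how unremarkable it is: a loop and a list of prompts. The power comes from the model, not the scaffolding.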
Startups have profited from this agentic approach because, for the first time, the technology approached genuine utility. Recently, considerable engineering effort has gone into containing the inevitable errors of agentic systems: checks and balances, oversight agents that examine other agents' outputs, auditing agents, even "courts" of agents that vote on correct answers.
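A court of agents sounds grander than it is. Here is a minimal sketch, under the assumption that each juror is simply an independent model call; every name is hypothetical.

```python
from collections import Counter
from typing import Callable

def agent_court(question: str, jurors: list[Callable[[str], str]]) -> str:
    """Toy 'court' of agents: each juror answers independently and a
    simple majority carries the verdict. In practice each juror would
    be a separate model call, ideally to different models."""
    votes = Counter(juror(question) for juror in jurors)
    answer, count = votes.most_common(1)[0]
    if count <= len(jurors) // 2:
        # No majority: exactly the case that gets escalated to a human
        # in the oversight architectures described above.
        raise RuntimeError("hung jury; escalate to a human")
    return answer
```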
For business owners, however, agentic systems represent a partial solution applicable only where errors are recoverable. If you deploy an agentic model to route customer support calls and it misdirects someone, nobody loses their position. There are no legal proceedings, no regulatory consequences. You correct and iterate. An eighty percent success rate constitutes a win.
But attend closely here, for this is where we arrive at the heart of the matter: anyone observing the pace of AI architecture improvement will recognise where this leads. The third paradigm: autonomous agents.
These systems wake themselves, execute tasks without human triggers, and interact with other agents in ways no individual designer anticipated. I should be precise here. Autonomy does not require unbounded access. Well-designed autonomous systems operate within policy envelopes, with bounded authority, kill-switches, and auditability. The third paradigm is not recklessness. It is capability without continuous human initiation.
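What might a policy envelope look like? A minimal sketch, with everything hypothetical: an allow-list, a spend cap, a kill-switch, and a log line per decision for the auditors.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)

@dataclass
class PolicyEnvelope:
    """Toy policy envelope: bounded authority plus an audit trail."""
    allowed_actions: frozenset
    max_spend: float
    spent: float = 0.0
    killed: bool = False

    def authorise(self, action: str, cost: float) -> bool:
        if self.killed:
            logging.warning("kill-switch engaged; refused %r", action)
            return False
        if action not in self.allowed_actions:
            logging.warning("%r outside envelope; escalate to a human", action)
            return False
        if self.spent + cost > self.max_spend:
            logging.warning("spend cap would be exceeded; refused %r", action)
            return False
        self.spent += cost
        logging.info("authorised %r at cost %.2f", action, cost)  # audit trail
        return True

# Illustrative use: this agent may email and open tickets, nothing else.
envelope = PolicyEnvelope(frozenset({"send_email", "create_ticket"}), max_spend=50.0)
envelope.authorise("send_email", 0.10)   # True, and logged
envelope.authorise("wire_funds", 900.0)  # False, escalated
```

The details will differ everywhere; the shape will not: explicit permissions, explicit limits, and a record of every decision.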
I have watched colleagues who understood the agentic architecture dismiss autonomous agents as merely "agentic workflows with better reliability." They are not wrong, exactly. They are precisely wrong enough to miss the threshold. If you're one of the practitioners who built workflows on the second paradigm and now dismiss the third, you are making the error the opening described.
The first two paradigms augmented individual productivity. The third augments organisational capability—systems that coordinate, decide, and act without waiting for permission. This is the difference between giving an employee a faster typewriter and giving them an autonomous department.
Recent experiments demonstrate what happens when technical constraints are relaxed. One project granted language models extensive system access to test the extremes of capability. Independent developers replicated complex workflows within days at negligible cost, suggesting that the limiting factor in autonomous deployment is no longer technical capability but organisational readiness. Select appropriate models, and reliability approaches thresholds where human verification becomes the exception, not the rule.
Other experiments have shown agents communicating amongst themselves, establishing unexpected coordination patterns, generating strategies their designers did not specify. This is the Wild West—researchers testing boundaries. But the experiments illuminate the paradigm: when agents can negotiate, coordinate, and execute without human mediation, the constraint on business process becomes imagination rather than headcount.
If you have built workflows on the second paradigm, you are now facing the question the opening promised. Not whether this is real. Whether your organisation can absorb it before your competitors do.
I can be specific about what absorption looks like.
Goldman Sachs has confirmed the use of AI agents to automate internal banking operations such as trade accounting, client onboarding, and due-diligence workflows, with executives citing substantial reductions in processing time for operational tasks ([Reuters](https://www.reuters.com/business/finance/goldman-sachs-teams-up-with-anthropic-automate-banking-tasks-with-ai-agents-cnbc-2026-02-06/)). In supply-chain operations, Fujitsu reports that autonomous AI systems cut warehousing costs by approximately $15 million, while Lenovo uses similar systems to detect and respond to supply disruptions earlier than human-driven processes ([CIO / World Economic Forum coverage](https://www.cio.com/article/4122937/davos-from-hype-to-ai-transformation-in-the-economy.html)). Across finance and back-office workflows, enterprise pilot studies report cycle-time reductions ranging from roughly 40 percent to over 80 percent, with humans retained primarily for exception handling and oversight rather than routine execution ([arXiv study 1](https://arxiv.org/abs/2506.01423), [arXiv study 2](https://arxiv.org/abs/2505.20733)).
The organisations that move decisively do not merely reduce costs. They compress decision cycles. They operate with information advantages measured in hours rather than weeks. They are, in essence, learning faster than their competitors.
The technology is not the problem. The organisation is.
Cybersecurity teams regard autonomous agents with justified concern. Attack surfaces expand: prompt injections from compromised skills, execution of unverified commands. These risks are real and manageable, but only with governance architectures most organisations have not built.
Legal and financial barriers compound technical risks. Insurers struggle to underwrite deployments. Without actuarial data, they cannot price risk, creating exposure publicly traded companies cannot tolerate.
Chief legal officers recommend caution. Chief financial officers demand ROI calculations excluding qualitative benefits. Chief information security officers document shadow IT usage—legal teams employing unauthorised chatbots, HR experimenting with unapproved tools—while lacking authority to prevent it.
No single executive owns the outcome. The CTO understands the technology but not the risk. The General Counsel understands the risk but not the technology. The CEO, often, understands neither well enough to adjudicate.
I have observed that successful implementations share a pattern. Leadership mandates weekly AI usage reports, praising effective utilisation even of simple chatbots. Founders ask each employee: what did you use AI for this week? This builds culture through repetition. It treats AI as infrastructure, not experiment. The message is unambiguous: this is not optional, not temporary, not peripheral. It is as fundamental as computers and internet connections.
The contrast with failed adoption is stark. In information-dense industries, competitive viability is shifting. Examine recent stock prices for information-service companies: Salesforce, Thomson Reuters, Atlassian, Gartner. Revenue continues. Customers have not yet migrated. But advantage has already moved to organisations that acted deliberately, with clarity.
Those with longer memories will recognise this pattern from web adoption. Yellow Pages. Borders Books. Circuit City. Sears. Blockbuster. The fatal decisions preceded their deaths by years.
This pattern repeats. Firms that compete on cycle time, lack regulatory insulation, dismissed chatbots as irrelevant, saw no business value in language models, and treat autonomous agents as science fiction will watch their strategic position erode. Revenue continues temporarily. But advantage has shifted.
Likewise, those who treat adoption as an organisational change thrive.
The three camps of adoption—the enthusiasts, newly arrived; the rejectionists; the patient observers—all claim certainty. All are mistaken.
The enthusiasts purchase shelfware: subscriptions with sub-fifteen-percent utilisation, acquired because nobody defined what specific problem, in what specific workflow, the tool would solve.
The rejectionists refuse categorically, dismissing each paradigm as hype, and are left behind.
The observers wait while others map the territory. Observation fails here not because caution is unwise, but because this is not a technology choice. It is an operating model change. You cannot observe your way into a new way of working.
The posture that succeeds is none of these. It belongs to those who paused, understood their operations, identified genuine friction reduction, and deployed selectively with measurable returns.
The question is not whether to adopt or reject AI. The question is whether you approach it seriously, and with clarity. This question yields to analysis, not ideology. And in that analysis lies the difference between the companies we will discuss in a decade, and those we will have forgotten.