CONTEXT IS KING. BUT WHY?
- Oct 20, 2025
- 5 min read
Updated: Apr 8

The frontier AI providers - OpenAI, Anthropic, Google - have spent more than two years in a capability arms race. Models that could barely pass secondary school exams in 2024 are now Gold Medal mathletes, outperforming most PhDs. Reasoning capabilities have leapt forward. There is chatter that all of maths may ‘be solved’ in the next few years.
But hold on to your hats. Because that was just the opening act. The platform war is shifting to different terrain entirely.
And with intelligence now table stakes, the decisive factor won't be which Big Tech bro builds the smartest model. It will be which becomes - or can be the most influential part of - a platform that can access, orchestrate and govern context. For which read all the data, documents, conversations and activities that AI systems currently can't see. Your emails. Your Slack threads and Teams messages. Your calendar. Your company's institutional memory, scattered across a dozen cloud-based systems.
More capable models are brilliant news. But a genius with amnesia is still useless. And right now, even the most sophisticated AI is essentially amnesiac about everything specific to your work, your company and your actual situation.
The platforms that solve this - that can safely and legally connect AI to the scattered context of real work - will dominate the battleground, potentially for many years to come. Not because they have the best model - but because they will be able to control what the model can see and use.
What is context anyway?
Strip away the jargon and 'context' is a straightforward idea: it's the information and permissions an AI needs to do useful work rather than produce plausible-sounding nonsense.
Right now, when you ask an AI to draft a client update, it doesn't know:
- That you killed the project last month in a Slack thread
- That the client relationship is tense, evident from email tone
- That your colleague already committed to a different approach via email
- That company policy changed last week in a document comment
This lack of context is part of what leads to hallucinations. The AI makes up something that sounds professional but is completely wrong because it's working in a void. Which means you have to spend twenty minutes fixing it. And the promised productivity gains evaporate.
This isn't a minor inconvenience. According to Forrester, knowledge workers already lose up to 30% of their time navigating fragmented systems. If we can shift both cognitive effort and context-gathering to digital intelligence, it will free the human in the loop to achieve far more. System and wetware constraints will recede.
Where does context live?
Enterprise context doesn't only exist in tidy data warehouses. It's also scattered across surfaces where the actual work happens. And each surface contains information the others don't.
Chat platforms such as Slack and Teams hold the real-time decisions and conversational context that often don’t make it into formal documentation. The ‘let's kill that project’ decision can happen here, in passing, without ceremony.
Email and calendars contain the tacit commitments, relationship histories and coordination that constitute the actual operating system of most organisations. Who's actually accountable? Check the email thread. Who needs to be in the room? Check the calendar patterns.
Productivity suites - Google Workspace and Microsoft 365 - concentrate the formal documentation - reports, presentations, plans. This is where decisions eventually get recorded, but usually well after they've been made elsewhere.
Meeting recorders capture the most intricate and revealing discussions; transcripts and their storage are one of the biggest pieces of the context puzzle.
The power of unified context is potentially extraordinary. A system that can see across these surfaces - that knows the Slack decision was raised on a call, queried in the email trail and then confirmed in a final reply - is becoming close to omniscient.
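To make this concrete, here is a minimal sketch - all names and data are hypothetical, not any vendor's API - of what 'unifying' context amounts to at its simplest: merging snippets from chat, meetings and email into one chronological view of a decision.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Snippet:
    """One fragment of context from a single work surface."""
    surface: str       # e.g. "slack", "email", "meeting"
    timestamp: datetime
    text: str

def unified_timeline(snippets):
    """Merge snippets from every surface into one chronological view."""
    return sorted(snippets, key=lambda s: s.timestamp)

# Illustrative data: the same decision surfacing across four places
snippets = [
    Snippet("email", datetime(2025, 9, 12), "Queried the decision in the reply chain"),
    Snippet("slack", datetime(2025, 9, 10), "Let's kill that project"),
    Snippet("meeting", datetime(2025, 9, 11), "Decision discussed and confirmed on the call"),
    Snippet("email", datetime(2025, 9, 13), "Final reply: confirmed"),
]

for s in unified_timeline(snippets):
    print(f"{s.timestamp:%Y-%m-%d} [{s.surface}] {s.text}")
```

The real engineering challenge, of course, is not the sort - it is governed, permissioned access to each surface in the first place, which is exactly the ground the platforms are fighting over.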
And this has implications for Big Tech. A fact they’re already alive to. The battle for omniscience and the extreme stickiness it may create has commenced. Once a platform 'knows' an enterprise - its patterns, its divisions, its culture, its rules, its country segments, its markets, its projects, its people - starting over with an amnesiac system will impact the bottom line for as long as it takes to retake the same ground.
How will providers drive context forward?
The realisation that context is strategic has created a stark choice for platform providers. Do you hoard your data to maintain competitive advantage? Or do you host it, providing governed, auditable access, and become infrastructure for everyone else's AI?
Salesforce initially restricted third-party access to Slack data, attempting to hoard it as a competitive moat. Enterprise search vendors like Glean pushed back publicly, arguing that the context belongs to customers, not platforms, and that Salesforce was 'hampering your ability to use your data with your chosen enterprise AI platform.'
Then Salesforce partially reversed course, reopening Slack for external AI integration and repositioning Slack as an 'Agentic OS' - a context platform where competing AI entities - OpenAI, Anthropic, Google - can operate. The shift was explicit: better to be the infrastructure where AI happens than one contestant trying to build everything yourself.
This pattern is repeating. The platforms with daily engagement - chat, email, productivity suites - are realising their strategic position isn't building the best AI features. It's becoming the trusted layer that safely exposes context to whatever AI the customer chooses.
The gravity of where people actually work is pulling the industry decisively toward the host model. Control the work surface, expose the context safely and let model providers compete on capability. What’s the likely outcome?
The platform battle is unlikely, in the short-term at least, to produce a single winner. It will likely create a layered ecosystem, where power distributes based on control over identity, capability and work surface. Here are the current runners and riders.
Work-surface apps and orchestrators - productivity suites and chat hubs are well positioned. They own the digital habitats where context naturally flows. They win on governance quality, speed and total cost by hosting auditable context for multiple agent capabilities.
OS providers - Microsoft and Apple control identity and private, on-device compute. They win in scenarios demanding device-first privacy for personal context but don't replace governed access to deep organisational data.
Frontier model producers - OpenAI, Anthropic and Google remain essential suppliers, winning in high-stakes scenarios requiring the best reasoning capability. But they depend entirely on orchestrators to provide the necessary enterprise context to act.
This is the basis of our hypothesis that whilst model capability matters, the ability to control the context fabric makes work-surface orchestrators and OS providers the most likely owners of enterprise AI flow in the coming years.
Next - Mapping the battlefield.
