Are you paying attention? Every announcement from OpenAI and Anthropic points in the same direction. They aren’t just building better models. They’re building moats around context.
I start with one core assumption:
The quality of an AI experience is largely driven by the quality of the context.
OpenAI and Anthropic know this. Look at what they’re doing.
In April 2025, OpenAI rolled out a massive upgrade to ChatGPT’s memory. It now references all of your past conversations. Not just snippets you explicitly saved, but everything you’ve ever discussed with it. ChatGPT maintains detailed profiles of your work patterns, communication style, and ongoing projects across months of interaction.
Anthropic followed suit, expanding Claude’s memory to all paid subscribers in October 2025. Like ChatGPT, Claude now builds persistent context across all your conversations, learning your preferences and work patterns over time.
But memory is just the beginning.
OpenAI’s connector ecosystem now includes Google Drive, Gmail, Google Calendar, Google Contacts, Microsoft OneDrive, SharePoint, Outlook, Teams, Slack, Dropbox, Box, Linear, and Model Context Protocol (MCP) for custom integrations.
Anthropic’s integration platform offers Jira, Confluence, Asana, Linear, Zapier (connecting thousands of apps), Slack, Notion, Stripe, PayPal, and MCP for custom connectors.
They’re building browsers, integrating with every enterprise tool, and syncing your entire digital workspace. As Nick Turley, head of ChatGPT, put it at DevDay 2025, you might “start your day in ChatGPT, just because it kind of has become the de facto entry point into the commercial web.”
This is the war for context. Whoever has the most context wins the user experience.
At OpenAI’s DevDay 2025, the company explicitly described ChatGPT as “an operating system where developers can build and distribute their own applications.”
Salesforce CEO Marc Benioff wants his own OS as well. He told analysts: “All these next-generation AI companies ranging from OpenAI to Anthropic to everyone are on Slack. It is incredible how they’ve used that as their operating system and as their platform to run their companies.”
He’s right. We’re moving up an abstraction layer.
People are going to live in ChatGPT, Claude, and Slack. These platforms are becoming the modern operating system.
Here’s what I think this means for startups building AI products:
We are not going to win this war.
We can’t provide the best general AI experience on our platform. We don’t have the resources to build connectors to every enterprise system. We can’t match the memory infrastructure or the ecosystem effects that come from having hundreds of millions of users.
Even if we could, why would we want to? We’d be fighting an unwinnable battle while neglecting what actually makes us valuable: the piece of context we can provide that they don’t have.
So our strategy should be simple: Don’t fight the war for context. Instead, provide your own unique context and live within their world of aggregated context.
Every company has (or should have) something unique. A proprietary dataset. Specialized algorithms. Domain expertise.
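To make the “provide your own unique context” idea concrete, here is a minimal sketch of what a custom MCP connector boils down to: a server that answers the protocol’s `tools/list` and `tools/call` JSON-RPC methods, exposing your proprietary data as a tool the aggregator can invoke. This is illustrative only, not the official MCP SDK; the tool name `skill_lookup` and the sample dataset are hypothetical, and a real server would also implement the `initialize` handshake and serve over stdio or HTTP.

```python
import json

# Hypothetical proprietary dataset: per-learner skill scores your platform owns.
# This is the unique context the aggregators don't have.
SKILL_SCORES = {
    "alice": {"python": 82, "sql": 67},
    "bob": {"python": 45, "sql": 91},
}

# Tool description advertised to the client via tools/list.
TOOLS = [{
    "name": "skill_lookup",
    "description": "Return skill scores for a learner (proprietary data).",
    "inputSchema": {
        "type": "object",
        "properties": {"user": {"type": "string"}},
        "required": ["user"],
    },
}]

def handle_request(req: dict) -> dict:
    """Dispatch one JSON-RPC request the way an MCP-style server would."""
    method = req.get("method")
    if method == "tools/list":
        result = {"tools": TOOLS}
    elif method == "tools/call":
        user = req["params"]["arguments"]["user"]
        scores = SKILL_SCORES.get(user, {})
        # MCP tool results carry content blocks; text is the simplest kind.
        result = {"content": [{"type": "text", "text": json.dumps(scores)}]}
    else:
        return {"jsonrpc": "2.0", "id": req.get("id"),
                "error": {"code": -32601, "message": f"unknown method: {method}"}}
    return {"jsonrpc": "2.0", "id": req.get("id"), "result": result}

# Demo: the round trip an aggregator would make when a user asks about Alice.
response = handle_request({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "skill_lookup", "arguments": {"user": "alice"}},
})
```

The point of the sketch: the connector is small. The moat isn’t the plumbing, it’s the data behind `SKILL_SCORES` — which is exactly why living inside the aggregator’s context costs you little and gains you distribution.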
And eventually? Start pushing our UI into theirs. That’s the latest development: OpenAI’s Apps SDK lets products render their own interfaces inside ChatGPT.
I believe the companies that win will be the ones that can focus on their own unique value proposition within these platforms.
But this requires strategic thinking about your product surface area. Not everything belongs in ChatGPT or Claude.
Complex configuration and administrative workflows? Keep them native. At Workera, managing your instance and creating custom assessments with Compose™ need the depth and control of a dedicated interface. These are specialized, infrequent workflows that benefit from rich, focused environments.
But high-frequency, conversational experiences? Those should live where your users already are. Learning is a perfect candidate. It’s ongoing, conversational, and benefits from living alongside everything else someone’s doing in their day. Why force a context switch when someone’s already in ChatGPT planning their week, reviewing their goals, and checking in on projects?
The line will keep shifting. What seems too complex for a general platform today might be trivial tomorrow. Assessment-taking could move there eventually too.
The strategic question isn’t “should we integrate or not?” It’s “which parts of our product benefit from the context aggregator, and which parts need to remain in our specialized environment?”