
There is a quiet war being waged in boardrooms and Slack channels across the tech industry. Not between startups and incumbents. Not between open source and proprietary. It is between AI companies and every service provider that sits between a company and its goals.

OpenAI and Anthropic are no longer just building models. They are building consultants.

The Enterprise Play

Look at what has happened in the last year. In April 2026, Anthropic launched Claude Security, a cybersecurity product that scans enterprise codebases, detects vulnerabilities, and generates patches in a single session. Powered by Opus 4.7, the tool is already integrated into platforms from CrowdStrike, Microsoft Security, Palo Alto Networks, and others. Consulting firms like Deloitte, Accenture, and PwC are deploying Claude-integrated solutions for vulnerability management and incident response. The timing was not accidental. It came after a 72% increase in AI-assisted cyberattacks across industries, with 87% of organizations reporting having experienced an AI-driven attack in the past year.

That same month, Anthropic launched Claude Design, a standalone design tool that generates polished prototypes, design systems, and interactive websites from natural language. Mike Krieger, Anthropic's CPO, resigned from Figma's board three days before the launch. Figma's stock fell 7%. The tool is accessible to founders, product managers, and marketers who have never opened Figma. Anthropic's Cowork product targets non-technical users who need to automate daily tasks without writing a single line of code.

OpenAI, meanwhile, has been pushing deeper into enterprise workflows with custom GPTs, assistants, and API integrations that promise to replace entire consulting engagements with a single prompt.

The message is clear: every slice of the pie is on the table.

The Zero-Sum Game Nobody Named

In game theory, a zero-sum game is one where every gain by one player is a loss for another. The total value doesn't grow. It just changes hands. And while the AI industry loves to frame this moment as a rising tide that lifts all boats, the reality on the ground looks a lot more like a fixed pie being carved up by fewer and larger hands.
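As a toy illustration (all numbers are hypothetical, chosen only to make the arithmetic visible), the fixed-pie dynamic can be sketched in a few lines of Python:

```python
# Toy model of a zero-sum budget split (all numbers hypothetical).
# A fixed budget for "getting problems solved" is divided between
# an AI vendor and a human service firm; one side's gain is exactly
# the other side's loss, and the total never grows.

BUDGET = 100  # total enterprise spend, in arbitrary units

def split(ai_dollars):
    """Return (ai_vendor, service_firm) revenue for a given AI spend."""
    return ai_dollars, BUDGET - ai_dollars

before = split(10)  # early adoption: most spend still goes to services
after = split(60)   # after AI products capture the same budget

# The pie is fixed: gains and losses cancel exactly.
assert sum(before) == sum(after) == BUDGET
print(before, after)  # (10, 90) (60, 40)
```

The point of the sketch is the invariant in the final assertion: under the zero-sum framing, no amount of reshuffling changes the total, only who holds it.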

Here is the paradox: Anthropic and OpenAI are not playing a zero-sum game against each other. Both are growing. Anthropic surged from $1B to $30B in annualized revenue in barely over a year. OpenAI is racing toward its IPO. Investors hold stakes in both and openly say they want multiple winners. In May 2026, both companies launched joint ventures for enterprise AI services in the same week. They are not eating each other. They are eating together, at a table that used to seat consultants, agencies, designers, and security firms.

The zero-sum game is not between the AI labs. It is between the AI labs and everyone else.

Every enterprise dollar that flows into Claude Security is a dollar that does not flow into a penetration testing firm. Every prototype generated by Claude Design is a project that a design agency will never invoice. Anthropic now captures over 73% of spending among companies buying AI tools for the first time. That spending is not new budget. It is redirected budget. Money that used to pay for human expertise now pays for API calls.

The industry frames this as "expanding the market." But when you look at where the money actually moves, it is not expansion. It is transfer. The total amount companies spend on getting problems solved has not doubled. The share going to human service providers has simply shrunk.

That is zero-sum. And the people on the losing side of that equation are not startups competing with OpenAI. They are the consultants, the junior teams, the mid-size agencies that built their business on being the bridge between a company and its goals. That bridge is being replaced by an API.

The Squeeze on Services

For companies that sell services to other companies (agencies, consultancies, software houses) the math has already changed. Teams are smaller. Budgets are tighter. And the expectation is that AI fills the gap.

Clients want experienced developers who can use AI to move at twice the speed. They don't want to pay for a junior learning on the job. The result? There is no space for juniors anymore. Not because they lack talent, but because the economics no longer justify the investment in their growth when a senior with Copilot can deliver the same output.

This is not a theoretical concern. It is happening right now. In just the first six months of 2025, 77,999 tech jobs were directly tied to AI-driven layoffs. After the launch of ChatGPT, job postings for roles involving structured and repetitive tasks decreased by 13%. Silicon Valley's venture capital community is flagging 2026 as the year AI stops being a productivity tool and starts replacing workers outright.

The Research Nobody Wants to Read

Here is the uncomfortable part. Several studies have surfaced showing that heavy AI usage for every task, from code generation to strategic thinking, leads to measurable declines in performance. Not improvements. Declines.

Anthropic itself published a randomized controlled trial with 52 mostly junior software engineers. The result: developers using AI assistance scored 17% lower on mastery tests than those who coded by hand. The productivity gains? Not statistically significant. Developers who delegated code generation to AI scored below 40% on comprehension, while those who used AI for conceptual questions scored 65% or higher. The researchers' own conclusion: "AI-enhanced productivity is not a shortcut to competence."

An MIT study from 2025 found that participants who exclusively used AI to help write essays showed weaker brain connectivity, lower memory retention, and a fading sense of ownership over their work. A separate study on AI tools in society found a significant negative correlation between frequent AI tool usage and critical thinking abilities, with younger participants showing the highest dependence and the lowest critical thinking scores.

When people offload cognitive work to AI too consistently, their ability to think critically, spot edge cases, and make nuanced decisions begins to atrophy. The convenience becomes a dependency. The shortcut becomes the only route.

It is the same pattern we have seen with every tool that promised to make thinking unnecessary: calculators didn't make us better at math. GPS didn't improve our sense of direction. And AI-assisted work is not making us better thinkers. It is making us faster at producing outputs that look like thinking.

The Perspective Problem

We cannot stop this race. That much is clear. The technology is too useful, too profitable, and too deeply embedded in every competitive strategy to slow down.

And yes, there are genuine benefits. Faster prototyping. Better accessibility to knowledge. Reduced barriers for small teams tackling problems that used to require entire departments.

But here is the thing: these benefits only make sense when you are on the other side of the table.

If you are the company buying AI-powered security audits, the value proposition is obvious. Cheaper, faster, always available. If you are the security consultant being replaced, the same innovation looks like a countdown clock on your career.

If you are the startup founder who can now ship a product with three people instead of twelve, AI is a miracle. If you are one of the nine who didn't get hired, it is a structural shift that no one is helping you navigate.

The Uncomfortable Symmetry

The companies building these tools (Anthropic, OpenAI, Google) are staffed with some of the most highly paid, deeply experienced professionals in the industry. They are not replacing themselves. They are building products that replace everyone else.

Anthropic's own research shows that developers using its AI coding assistants scored 17% lower on mastery tests, and yet its business model depends on selling those same tools to every enterprise on the planet. Its security product was born from a model, Claude Mythos, that can "surpass all but the most skilled humans at finding and exploiting software vulnerabilities." The same capability that creates the threat is sold back as the solution.

There is an uncomfortable symmetry here: the people who build the automation are the last to be automated. The people who design the systems that eliminate roles are the most secure in their own. And the narrative they sell, "AI empowers everyone," is technically true, in the same way that a storm empowers the ocean. The water rises. Not everyone has a boat.

What Comes Next

None of this is an argument against AI. It is an argument for honesty about what is happening.

The junior developer problem is real. Anthropic's own data shows it. And it will create a gap in the industry that we will feel in five years when there are no mid-level engineers because nobody invested in growing them. The WEF Future of Jobs Report projects 92 million jobs displaced by 2030. And while it also projects 170 million new ones, those new roles require skills that the displaced workforce does not yet have.

The cognitive atrophy problem is real. The MIT study, the MDPI-published research, and other peer-reviewed work confirm it. It will manifest in the quality of decisions made by people who have forgotten how to think without a prompt.

The consulting displacement is real, and it will reshape entire industries in ways that the current optimism does not account for. When 49% of companies using ChatGPT report having already replaced workers, the trend line is not ambiguous.

We can acknowledge all of this while still using the tools. We can benefit from the speed while being honest about the cost. But that requires looking at the full picture — not just from the side of the table where the benefits stack up, but from every seat in the room.

Because progress that only serves the buyer is not progress. It is just a transaction.