Why AI Labs Are Starting to Look Like Sports Teams

With compute more abundant, talent is the new bottleneck in AI. Star players are signing pay packages that resemble those of professional athletes.

I wrote extensively about AI last summer, in a series that began with AI’s $600B Question, continued through The Game Theory of AI CapEx, AI is Shovel Ready, and Steel, Servers & Power, and ended with AI’s Supply Chain Tug of War.

Reflecting one year later, it’s interesting how little has changed: AI’s $600B Question is now roughly AI’s $840B Question, assuming that Nvidia reaches something like $210B in run-rate data center revenue by year-end 2025. OpenAI still accounts for the lion’s share of AI’s overall revenue, recently crossing $10B in annualized revenue, according to Reuters. The total revenue in the AI ecosystem still pales in comparison to the dollars that have been put into the ground — if anything, my estimates last year of how much revenue the Big Tech companies were making on AI were probably too high.
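For readers who want the back-of-envelope math behind the $840B update (the multipliers here are my restatement of the original essay’s framework, so treat them as an assumption): double Nvidia’s run-rate data center revenue to approximate total data center cost, since GPUs are roughly half of the bill, then double it again so the buyers of that compute can earn a 50% gross margin:

    $210B × 2 × 2 = $840B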

There have been three major improvements in AI over the last year. First, coding AI has really taken off: a year ago the demos for these products were mind-blowing, and today the coding AI space is generating something like $3B in run-rate revenue. Second, reasoning has found product-market fit, and the AI ecosystem has gotten excited about a second scaling law around inference-time compute: by repeatedly querying these models and by using reinforcement learning, we can get better, more robust results (a minimal sketch of this idea follows below). And third, there seems to be a “smile curve” in ChatGPT retention, where usage dips and then climbs back as the product becomes ingrained in day-to-day life.
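To make the inference-time compute idea concrete, here is a minimal sketch of one common recipe, best-of-N sampling with majority voting. This is an illustration rather than any particular lab’s method, and query_model is a hypothetical stand-in for a call to whatever model API you use:

    import random
    from collections import Counter

    def query_model(prompt: str) -> str:
        # Hypothetical stand-in for a call to a model API. A real
        # implementation would send `prompt` to an LLM and return its
        # sampled answer; here we simulate a noisy model that answers
        # correctly 60% of the time.
        return "42" if random.random() < 0.6 else str(random.randint(0, 99))

    def best_of_n(prompt: str, n: int = 16) -> str:
        # Inference-time scaling in its simplest form: sample the model
        # n times and keep the most common answer (majority voting).
        # As n grows, the aggregate answer becomes more robust than any
        # single sample.
        answers = [query_model(prompt) for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]

    print(best_of_n("What is 6 * 7?"))  # usually prints "42"

Real systems use more sophisticated aggregation, such as verifiers or reward models trained with reinforcement learning, but the underlying trade is the same: spend more compute at inference time to buy reliability.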

The app layer ecosystem for AI has continued to improve, leveraging cheap compute and integrated workflows to build durable businesses. More abundant compute has been good for the startup ecosystem, and companies like Harvey, Sierra, Abridge, SmarterDx, Perplexity, OpenEvidence, Clay, Sesame and many others have made great strides around packaging this capability for their customers. Our ambition one year ago was to back as many of these companies as possible, and that continues to be our ambition today.

There is one dynamic, however, that has really changed since last year. A year ago, everyone was talking about cluster size and pre-training scaling, and now that seems to have quietly disappeared from the public dialogue. Perhaps this is because new clusters are taking longer to get online, or because, as Ilya Sutskever said in December, “Pre-training as we know it will end.” By the same token, one year ago we were seeing consolidation among research labs because of the daunting costs of building a foundation model lab. Microsoft/OpenAI, Amazon/Anthropic, Google, Meta and xAI emerged as five “finalists” in the AI race as others folded, because these companies had reached GPT-4-level models and had sufficient capital to continue scaling. Now, by contrast, a new cohort of players has sprouted up, including SSI, Thinking Machines, and DeepSeek, which claim talent, not raw compute scale, as their primary differentiator.

Whereas one year ago the narrative was around pre-training compute requirements driving consolidation, today the narrative is all about talent advantages being critical in a world of increasing compute abundance. This is especially true for Google and Meta. Google is under siege from a product positioning perspective and is doing everything in its power to reverse this dynamic. Meta’s bold decision to acquire a 49% stake in Scale and bring in its CEO, Alex Wang, to lead a new “founder mode” AI lab is an even clearer move in this direction. For both companies — and for the ecosystem at large — the message of 2025 is that large-scale clusters alone are insufficient. Everyone understands that new breakthroughs will be required to jump to the next level in the AI race — whether in reinforcement learning or elsewhere — and talent is the unlock to finding them.

With their obsessive focus on talent, the AI labs are increasingly looking like sports teams: Each is backed by a mega-rich tech company or individual. Star players can command pay packages in the tens of millions, hundreds of millions, or, for the most outlier talent, seemingly even billions of dollars. Unlike sports, where players often sign long-term contracts, AI employment agreements are short-term and liquid, which means anyone can be poached at any time. One irony here is that while the notion of AI race dynamics was originally popularized by the AI safety community as a boogeyman to avoid, race dynamics are exactly what have emerged, across two distinct domains: first compute, and now talent.

I think this is a function of human nature. When have humans ever seen something wonderful and then said, “Now we have enough; it’s time to cool off”? It is an intrinsic property of humanity that once critical thresholds are passed, we take things all the way to the extreme. We cannot hold ourselves back. And when the prize is as big as the perceived AI prize, any bottleneck that gets in the way of success — especially an illiquid bottleneck like talent — will be pushed to staggering levels.

And yet, even amidst this fierce competition, the broader AI ecosystem also feels calmer today than at any point in the last three years. That’s because for most people, the race itself is now a constant and the market structure feels stable. People are getting comfortable with the status quo. The unstable competitive equilibrium is itself stable — for now.