Data / AI / ML

This November marks three years since the launch of ChatGPT. That moment brought AI into the mainstream, with large language models (LLMs) seen as the breakthrough technology powering it.

Since then, innovation in AI has been relentless — perhaps one of the fastest cycles we’ve ever witnessed in tech. It’s worth pausing to reflect on where we are today and what we’ve learned.

Fast vs. slow takeoff.
One of the big debates over the past few years was whether we’d see a fast or slow takeoff toward “super-intelligence” — a state where AI continuously improves itself. Today, it seems clear that super-intelligence is arriving more slowly than the most optimistic predictions suggested.

The limits of LLMs.
Early on, many assumed LLMs were the decisive step toward AGI, with progress simply a matter of scaling models and fine-tuning. While LLMs remain an extraordinary advance, the pace of improvement (as measured by benchmark scores) is now more incremental. We may be nearing the ceiling of what this paradigm can achieve.

Good news for startups.
Both trends are encouraging for founders and the broader ecosystem. We’re unlikely to see a single model dominate everything. Instead, we’ll have multiple strong horizontal models, with thousands of consumer and enterprise applications layered on top. The structure may end up resembling the Internet: “winner-take-all” dynamics on the consumer side, “winner-take-most” in certain enterprise verticals, but ultimately a fragmented and diverse landscape.

Where value will accrue.
Most value will likely concentrate at two ends of the spectrum:

  • Close to the metal: chips and infrastructure for training and inference.
  • Close to the customer: products with sticky adoption and deep integration into workflows.

The land grab.
This is a land-grab moment. Everyone recognizes it, which means competition is fierce. Execution matters more than ever, and the quality of founding teams will be decisive in such an intense environment.

Today’s bottlenecks.
Right now, AI is “middle-to-middle,” not yet end-to-end. Prompting and verification remain the two biggest bottlenecks. Startups that solve these pain points will be able to carve out real differentiation.

Technology + business model shifts.
The most powerful disruptions happen when technology shifts align with new business models — think ad-supported Internet or SaaS subscriptions replacing perpetual licenses. We’re starting to see this in AI, with models like pay-per-resolution in customer service. Expect more experimentation ahead.

AI isn’t sprinting to AGI — it’s pacing into the mainstream. That means more room for founders to build, compete, and ultimately define what AI becomes. The bottlenecks today — better prompting, smarter verification, novel business models — are where startup teams can win. Keep your eye on both the tech and the go-to-market — that’s where value is hiding.

Portfolio

Robotics has been a core focus of my time in deep tech. I recently wrote a post sharing our thinking on robotics, and we’ve made three investments in the category within the past year, including Optimotive. Today, I’m thrilled to share the news that we’ve led the $5m seed round of […]