Since its launch a few weeks ago, OpenAI’s ChatGPT has captured the imagination of users, entrepreneurs and investors alike. The quality of the chatbot’s output and its conversational style have convinced many that AI is finally ready for primetime. Some are even comparing ChatGPT’s launch to that of the iPhone App Store – a pivotal moment in the creation of the next big platform.
So, should entrepreneurs think of AI as the next new platform opportunity?
The most important question is how much value the underlying platform will keep for itself. History has shown that most platforms extract the bulk of the value for themselves, leaving only crumbs for the companies and developers building on top. If OpenAI remains the only game in town, we can expect a similar dynamic, even with the company’s capped-profit structure under its unique deal with investors.
However, there is a good chance that several companies will offer similar AI platforms, in the same way that cloud computing is divided among AWS (Amazon), Azure (Microsoft) and GCP (Google). OpenAI (with Microsoft as a partner), Google and Facebook are leading contenders to play an important role. Competition among these companies (and potentially more) should limit any single company’s ability to maximize profits at the expense of those building on top.
The second question is who will capture most of the new value being generated: incumbents or start-ups? The mobile platform shift heavily favored incumbents that went after the opportunity aggressively, with smart M&A (e.g. Facebook acquiring WhatsApp) and by shifting their products to a mobile-first interface (Google, Facebook).
We can expect incumbents to move just as aggressively on the AI opportunity. They’ll leverage their unique data sets, large user bases and balance sheets to gain an edge, especially on the consumer side. But it might be a different story in enterprise software, where there’s an opening for start-ups to create more value for their customers with vertical-specific AI products.
If that’s the case, how can an enterprise AI company create a differentiated product on top of large language models (LLMs) like those from OpenAI? They can differentiate at the UI level, the data level and the underlying model level. For example:
- By abstracting away prompt-generation work and offering more user-friendly interfaces. Many start-ups are still designing user-facing AI in a pre-ChatGPT style; there is a real opportunity to guide users through the idea maze (in the same way that Google search helps with “related searches” and “did you mean”).
- By designing interaction models where false positives and false negatives don’t create high friction or risk (e.g. by making it easy to bring a human into the loop). We have already seen plenty of examples of the AI “hallucinating.” That is obviously unacceptable for an enterprise customer, whether it’s an AI lawyer making up facts or an AI accounting system citing the wrong accounting standards.
- By feeding proprietary data into the LLMs. Differentiating with unique data (original datasets, as well as data from feedback loops) comes up most frequently as a key strategy.
- By silencing parts of the LLM that are less relevant for a specific enterprise use case.
- And by addressing regulatory/privacy constraints. This is particularly important for companies to protect their brands.
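Two of the strategies above can be sketched in code: grounding the model’s prompt in proprietary data, and routing low-confidence answers to a human reviewer. This is a minimal, hypothetical sketch — `call_llm` is a stub standing in for a real model API, the keyword lookup stands in for a real retrieval system, and the confidence score is an assumption (production systems would derive it from a verifier or evaluation step, since hosted LLMs don’t return calibrated confidence).

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # assumed score from a model or verifier step

def retrieve_context(question: str, knowledge_base: dict[str, str]) -> str:
    """Naive keyword lookup standing in for a real retrieval system."""
    hits = [doc for key, doc in knowledge_base.items() if key in question.lower()]
    return "\n".join(hits)

def call_llm(prompt: str) -> Answer:
    """Stub: a real implementation would call a hosted LLM here."""
    return Answer(text=f"[draft answer based on]: {prompt[:60]}...", confidence=0.4)

def answer_with_review(question: str, knowledge_base: dict[str, str],
                       threshold: float = 0.8) -> tuple[str, bool]:
    """Ground the prompt in proprietary data; flag low-confidence drafts.

    Returns (answer_text, needs_human_review).
    """
    context = retrieve_context(question, knowledge_base)
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    answer = call_llm(prompt)
    # Human in the loop: anything below the threshold goes to a reviewer
    # instead of straight to the customer.
    needs_review = answer.confidence < threshold
    return answer.text, needs_review

kb = {"revenue": "FY22 revenue recognition follows ASC 606."}
text, review = answer_with_review("How do we handle revenue recognition?", kb)
# With the stub's 0.4 confidence, `review` is True: the draft is held
# for a human accountant rather than shown as a final answer.
```

The design point is that the differentiation lives around the model, not in it: the retrieval layer carries the proprietary data, and the review gate turns hallucination from a liability into a manageable workflow step.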
The next decade will bring exciting innovation in AI. While incumbents will capture much of the value, we will also see many new billion-dollar start-ups created in this field.
P.S.: below is the ChatGPT response to my prompt – our human-generated blog might still have a fighting chance for some time 🙂