Siddharth S. Jha

Apple Intelligence, The Economics of Fast-Moving Markets and OpenAI

Jun 24 2024

The Promise of Apple Intelligence

This week from Benedict Evans:

Apple has built an LLM with no chatbot. Apple has built its own foundation models, which (on the benchmarks it published) are comparable to anything else on the market, but there’s nowhere that you can plug a raw prompt directly into the model and get a raw output back - there are always sets of buttons and options shaping what you ask, and that’s presented to the user in different ways for different features. In most of these features, there’s no visible bot at all. You don’t ask a question and get a response: instead, your emails are prioritised, or you press ‘summarise’ and a summary appears. You can type a request into Siri (and Siri itself is only one of the many features using Apple’s models), but even then you don’t get raw model output back: you get GUI. The LLM is abstracted away as an API call.

To me, Evans’ observations show how Apple’s design strategy around AI, launched at their Worldwide Developers Conference (WWDC) a couple of weeks ago, differs so much from that of every entrant so far. While chatbots were the earliest form of productizing Large Language Models (LLMs), I think we’re officially at the point in the AI technological shift where the mainstream product narrative could move away from raw chatbots and back towards user interface patterns people are much more comfortable and familiar with.

This isn’t just a question of better, more curated UI. What lies beneath are realities about first-mover economics. See this matrix from The Half-Truth of First-Mover Advantage:

FMA Matrix

Suarez F. & Lanzolla G., Harvard Business Review, April 2005

While OpenAI has enjoyed an early lead in the AI space, its first-mover advantages are short-lived. This is because the AI market of today, consumer & enterprise, is characterized by rapidly advancing technological innovation and consumer acceptance. As the 2005 Harvard Business Review paper that presents the above matrix notes, in such “rough waters” first-mover advantages are likely short-lived and very unlikely to be durable. And in every single one of the key resources you need to succeed in such a market — large-scale marketing, distribution, production and strong R&D — Apple is arguably the best in the world. This is why I believe Apple Intelligence will likely have the best shot at leading widespread & sustained consumer AI adoption, with domino effects for the enterprise.

This goes to show that OpenAI’s partnership with Apple, while seemingly desperate (and possibly strong-armed by Eddy Cue, Apple’s SVP of Services & Steve Jobs’ favourite negotiator), might actually be critical for OpenAI — and could end up being key to ensuring their survival in the AI arms race.

From an MIT Technology Review interview with OpenAI CEO Sam Altman last month:

Altman described the killer app for AI as a “super-competent colleague that knows absolutely everything about my whole life, every email, every conversation I’ve ever had, but doesn’t feel like an extension.” It could tackle some tasks instantly, he said, and for more complex ones it could go off and make an attempt, but come back with questions for you if it needs to.

While Altman has been obsessed with creating true Her-like AGI for over a decade now, it’s ironic that his vision for the future of AI actually feels closer in execution to iOS 18 with Apple Intelligence than to anything else on the market. Oftentimes in business, as was the case with Microsoft’s fast-follow of Apple’s computing moves in the early 80s and the market dominance it enjoyed in the 90s as a result, it’s pragmatic business strategy rather than lofty futurist philosophy that ends up capturing the profits from new innovations effectively.

From When First Movers are Rewarded, and When They’re Not, Harvard Business Review (Aug 2015):

A key difference between successful pioneers and followers was how many innovations they launched. Because followers miss out on blockbuster returns, they cannot afford to fail as often as pioneers. Instead, they must weed out projects carefully and launch only a narrow scope of innovations they are (relatively) sure will succeed.

While it’s too early to predict whether present and future Apple customers will give a much more intelligent Siri another shot, in Apple’s favour there’s thankfully much more to Apple Intelligence than just Siri. What’s clear, though, deducing from the points made in the HBR paper, is that Apple Intelligence as a whole certainly can’t afford to fail. Having been a successful follower and late entrant in market after market over the last two decades, Apple as a company is most certainly prepared for this challenge.

OpenAI’s Future

At rumoured annual revenues of $3.4B and an $80B valuation from Feb 2024, OpenAI is presently among the world’s most valuable private companies. It’s hard to speculate on private companies, but it’s easy to tell that OpenAI has taken a multi-channel revenue strategy — a freemium consumer product, organic enterprise pilots, large-scale strategic partnerships, and ChatGPT Enterprise. So far, all of it is probably working. Especially for a product just a couple of years old, it’s pretty impressive.

But, will it continue to work?

It’s useful to consider Ben Thompson’s Aggregation Theory. From Aggregation Theory:

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

Aggregation Theory is arguably Porter’s Five Forces for the internet era. How do the economics of aggregators apply to AI? And would they apply just as much to AI as they’ve applied to e-commerce, search, or video streaming?

Well, considering AI software products are distributed & consumed via the internet, it seems like there would be zero marginal costs. But “inference” costs for AI providers are significant — essentially the cost of buying and running GPU chips, which are currently expensive and backordered, a shortage that briefly made Nvidia the world’s most valuable company last week. That said, I’m unclear on whether inference costs will turn out to be fixed costs or marginal costs over time. What semiconductor industry patterns have always shown, though, along with remarks from Nvidia CEO Jensen Huang at conference keynotes, is that chips will get cheaper.
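The fixed-versus-marginal distinction can be made concrete with a back-of-envelope sketch. Every number below is an illustrative assumption, not a real figure; the point is only the shape of the curve: if each query consumes GPU time, cost scales linearly with usage, which is the signature of a marginal cost rather than the near-zero marginal cost of serving a web page.

```python
# Back-of-envelope sketch of inference as a marginal cost.
# Both constants are illustrative assumptions, not real prices or throughput.
GPU_COST_PER_HOUR = 2.50       # assumed GPU rental price, USD/hour
QUERIES_PER_GPU_HOUR = 1_000   # assumed queries one replica serves per hour

def inference_cost(total_queries: int) -> float:
    """Serving cost grows linearly with query volume."""
    gpu_hours = total_queries / QUERIES_PER_GPU_HOUR
    return gpu_hours * GPU_COST_PER_HOUR
```

Under these toy numbers, serving 10x the queries costs 10x as much; cheaper or faster chips lower the slope of that line, but only something like aggressive caching or much smaller models would flatten it towards a fixed cost.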

Still, let’s look at the aggregator plays of OpenAI.

Firstly, ChatGPT essentially appears as an aggregator of knowledge. If I wanted to know how many Tim Hortons coffee shops there are in Canada, I can get an answer from either Google or ChatGPT. With Google, I may have to click on a link or two and scour through the pages to conclude that the information is trustworthy. ChatGPT, on the other hand, is designed in such a way that the answer is inline and you just have to kind of trust it (often, at your own peril). It removes a big level of friction, but it does so at the cost of a learning curve and trust. Knowing how to ask ChatGPT questions seems like a skill that requires training to actually be good at — so much so that consulting companies like McKinsey consider “prompt engineering” a skill and a job function. Sure, there were books and classes in the early 2000s about how to use Google too, but having witnessed both technological innovations — search and LLMs — as a user in real time, there certainly is a substantial difference in learning curves between the two. The trust issue is obvious: every now and then ChatGPT will spew out a whole lot of bullshit that looks correct but isn’t once you investigate. With Google, as I view a search result, I either trust the source or I don’t; the search engine itself doesn’t really lie in the technical sense.

Secondly, as Ben Thompson outlines, OpenAI’s GPT Store, launched in Jan 2024, is very much an aggregation play as well. But I find this play a bit more deliberate and forceful. In terms of user experience, trying to use a third-party GPT from within ChatGPT feels a bit confusing as to what value the secondary GPT is adding. It seems rather redundant — why can’t the primary GPT simply run the third-party one if the prompt is better answered by it, without me installing it or interacting with it separately? All it needs to do is ask my permission to run that other GPT.

Interestingly, Apple’s WWDC 2024 keynote demonstrated that this UI pattern is exactly the approach Apple Intelligence is taking with user queries. As a result, the OpenAI integration shows that Apple has made a strong AI aggregator play, and one that seems more thoughtfully designed than OpenAI’s. When Apple Intelligence thinks ChatGPT is better suited to serve a user request, it asks you whether it can send that request to OpenAI — this permission prompt is precisely the limited scope of the OpenAI integration within Apple Intelligence today. Apple will most certainly add the option to let a user pick another LLM instead of OpenAI’s — the WSJ report from yesterday is proof enough.
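The permission-gated delegation pattern described above can be sketched in a few lines. To be clear, this is an illustrative sketch, not Apple’s actual implementation, which is not public: every function name and the routing heuristic here are hypothetical stand-ins.

```python
# Hypothetical sketch of permission-gated LLM delegation, loosely modelled
# on the pattern Apple demonstrated at WWDC. Nothing here is a real API.

def on_device_model(query: str) -> str:
    # Stand-in for a local, on-device foundation model.
    return f"[on-device] {query}"

def external_model(query: str) -> str:
    # Stand-in for a third-party model such as ChatGPT.
    return f"[external] {query}"

def needs_world_knowledge(query: str) -> bool:
    # Toy heuristic: route broad world-knowledge questions externally.
    return query.lower().startswith(("who", "what", "when", "where"))

def handle_request(query: str, user_consents: bool) -> str:
    """Route a query, asking the user before anything leaves the device."""
    if needs_world_knowledge(query):
        if user_consents:  # the permission prompt, abstracted as a flag
            return external_model(query)
        return on_device_model(query)  # user declined: stay local
    return on_device_model(query)
```

The design choice worth noting is that the third-party model is invisible until the moment it is needed, and even then the user only sees a consent prompt — the aggregator, not the supplier, owns the interaction.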

That said, I find that aggregating LLMs is different from aggregating user-facing AI products. Apps in the 2010s succeeded in large part because of the ease of discovery and their containerized consumption experience, thanks to app stores. The reality of AI, though, is that so far it’s either been productized in the form of a chatbot or simply bundled as a cool new feature in existing software. It’s unclear to me what a great aggregation play in AI will eventually look like. I think it’s too early to tell. As of Q2 2024, the industry still hasn’t graduated from the “LLM-as-product” phase.

The always insightful Benedict Evans offers some points around this in his essay earlier this month on ways to build AI products:

But the other approach is that the user never sees the prompt or the output, or knows that this is generative AI at all, and both the input and the output are abstracted away as functions inside some other thing. The model enables some capability, or it makes it quicker and easier to build that capability even if you could have done it before. This is how most of the last wave of machine learning was absorbed into software: there are new features, or features that work better or can be built faster and cheaper, but the user never knows they’re ‘AI’ - they aren’t purple and there are no clusters of little stars. Hence the old joke that AI is whatever doesn’t work yet, because once it works it’s just software.

That could very well be. Apple’s marketing team has cleverly positioned their personal intelligence system as “AI for the rest of us” with the promise that “things you do everyday [will] become more magical”. To me, this suggests Apple potentially wins irrespective of which product direction for AI ends up resonating most with customers. After all, their App Store runs on well over 2 billion devices, and apps themselves are today still pretty great consumption experiences into which AI could simply be integrated and eventually fade away as simply software.

For a player like OpenAI, riding on a lot of hype, this is certainly an inflection point for figuring out a sound long-term business strategy. Deep-pocketed consumers toying with a fun $20+/mo subscription they can expense, extra API usage billing, and Fortune 500 firms simply piloting products are cool for a startup, but can only go so far.

But I’ve covered consumer AI enough in this essay, and with it how Apple’s upcoming AI efforts threaten OpenAI’s consumer ambitions while also potentially ensuring its survival — the partnership looking very much like OpenAI hedging its bets.

One could argue that Apple’s opportunity here could see echoes of Microsoft’s in the 90s. Back then, Netscape productized the internet through the browser, yet it was Microsoft’s Internet Explorer that took the lion’s share of the market. With so many LLMs coming out weekly today, though, unlike internet browsing in the 90s, this might not turn out to be a winner-takes-all market. Either way, I feel Apple is positioning itself neutrally and setting itself up for success.

On Enterprise AI, briefly

In the enterprise space, it could be only a matter of a few quarters until the Salesforces, SAPs, Oracles, IBMs and Intuits of the world train their LLMs on massive amounts of corporate data and sell beefier subscription software back to their users, abstracting AI features away behind old-school GUI. As per this survey of AI adoption in the enterprise by Bain, surveyed firms are spending $5 to $50 million annually on generative AI. The report cites organizational readiness and lack of in-house expertise as the fastest-growing obstacles to firms’ AI investment strategies. Also worth noting: the desire to build AI in-house seems to be on par with the desire to buy from vendors. Even though some techno-optimists argue that generalized AI will heavily impact consulting as a career, it makes total sense that consulting giants like Accenture sure as hell would bring in a boatload of AI revenue helping enterprises make sense of it all.

While it’s all too early to tell, and there are no promising AI revenues so far for the enterprise incumbents outside of cloud infrastructure and chips, remember that these are rough waters: in rough waters, technology innovation and consumer adoption both grow rapidly, making any first-mover advantage highly unlikely to be durable.

With all that said, I’ll end with a personal belief I’ve always held. The eventual winners in any fast-moving industry are the ones who are best at productizing. Building the best technology is definitely not enough. You need to build the best products, and you need to keep building them. The product with the best UX will outshine its competitors. There’s a big difference between creating a technology and creating a product — and the number of companies that excel at these two arts in tandem is a mere handful.