The AI revolution will look like a failure 


Last week, we got the breathless headline “AI companies lose $190 billion in market cap after Alphabet and Microsoft report,” which, like all things in AI, was totally overblown. “AI companies” in this context is basically just Google, and with a market cap of USD 1.8 trillion, Google can easily fluctuate in value by USD 200 billion. The context is interesting, though: A handful of public companies involved in AI reported earnings, and they were mostly mediocre. This shouldn’t have been a surprise; there are no major commercial applications yet for tools like the large language models (LLMs) that are absorbing so much investment, so how were they supposed to boost the earnings of a company as big as Google? Even so, the news came as a shock to markets. That reaction is worth noting, because I think you should expect plenty more headlines like this going forward: not because AI won’t succeed, but because the nature of that success will be hard to spot.

First: It’s very unlikely that the AI use cases generating hype right now will be the ones that drive a lot of revenue. In April last year, I wrote a piece comparing the advent of AI to the advent of the internet. Digital media made the business of running a newspaper far cheaper, but the effect has been catastrophic: Newspaper margins collapsed as online ads replaced column inches. Companies like Google have generated a lot of revenue by absorbing this shift from print to digital, but total spending on advertising is down as a fraction of the economy. I expect something similar for the use of AI to generate text and images for media (like movies, TV, and video games), marketing, and news: It will likely crush revenues for working artists and creators. Companies like OpenAI and Google may absorb some of this revenue (possibly a lot of it), but total spending here will likely go down. Just as with ads, the limiting factor is total demand: Falling costs didn’t increase demand for advertising, and demand for content is basically at a saturation point. Cost reduction in content production isn’t going to create huge new markets; it’s just going to compress margins. As with the internet, the value will come from elsewhere.

Second: The current wave of hype has focused on generative AI, but the term is poorly defined and badly abused: Almost any AI system can be described as generative as long as it outputs data that doesn’t exactly match what it was trained on. A more useful distinction is between LLMs and smaller, specialized AI models. LLMs are distinguished by their size (trained on trillions of data points, with billions of parameters), their cost (GPT-4 cost something in the range of USD 100 million to train), and their flexibility: ChatGPT can write code about as easily as it can write a recipe for chicken soup. Small AI models, by contrast, are trained on far less data (tens of millions of data points or fewer), are much cheaper, and, while there is a huge diversity of them, each one does just one thing. When people ask me about generative AI, they are mostly asking about LLMs, because that’s where the hype is; they often (incorrectly) view other AI methods as outdated.
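To make that distinction concrete, here is a minimal sketch of the “small, specialized model” side: a tiny text classifier that does exactly one thing. The maintenance notes, labels, and task are invented for illustration, and scikit-learn is assumed to be available; an LLM could handle the same task zero-shot from a prompt, at vastly greater cost per query.

```python
# A small, specialized model: one narrow task, trained on a handful of examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented example data: short maintenance notes and their categories.
notes = [
    "bearing vibration exceeds threshold on pump 3",
    "coolant temperature spiked during night shift",
    "replaced worn belt on conveyor line 2",
    "routine lubrication completed, no issues found",
]
labels = ["fault", "fault", "repair", "routine"]

# Far less data and compute than any LLM, and it only ever does this one thing.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

print(model.predict(["pump 3 is showing high vibration again"]))  # likely ['fault']
```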

Third: LLMs are flexible, but they mostly do one really valuable thing: reduce the difficulty of interfacing with computers. This is most obvious with code: It’s pretty easy to express an idea to another person, e.g., “I want a website that lets me share pictures of cats,” but pretty difficult to do that in code if you’re starting from scratch. You can simply ask an LLM to write you a cat-sharing website, and it can basically do it with no human intervention. The barrier to entry for PicturesOfCats.com has been dramatically lowered. Of course, the market has already solved this particular problem (it’s pretty easy to make a website without coding at this point), but there are plenty of other, much trickier applications that haven’t been solved: for example, giving instructions to robots in natural language, or interpreting and simplifying large volumes of complex natural language data into a machine-readable form.
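As a rough illustration of the “natural language in, machine-readable data out” idea, the sketch below asks a model to turn a free-form instruction into JSON. The call_llm() function is a hypothetical placeholder, stubbed with a canned response so the snippet runs offline; a real implementation would send the prompt to whichever hosted model you use.

```python
# Sketch: using an LLM to convert a plain-language instruction into structured data.
import json

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for an LLM API call; returns a canned response
    # so this example runs without any API key or network access.
    return '{"action": "move", "object": "pallet 7", "destination": "bay C"}'

instruction = "Please have the robot take pallet 7 over to bay C."
prompt = (
    "Convert the instruction below into JSON with keys "
    '"action", "object", and "destination".\n\n' + instruction
)

structured = json.loads(call_llm(prompt))
print(structured["action"], structured["destination"])  # move bay C
```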

Code is the language that sits between all digital and physical systems and between all software systems and their users. LLMs’ ability to make interfacing with digital systems easy for nonexperts will have a dramatic impact on a huge range of technologies whose adoption has been held back by that lack of expertise. Among them are 3D printing, generative design, cobots and other forms of small robotics, and machine learning for a variety of basic optimization problems, from factory floors to marketing budgets. LLMs will also dramatically accelerate any application where parsing a lot of natural language data is a major barrier. Small AI applications are particularly important here; things like materials informatics and predictive maintenance will benefit from this capability in addition to becoming easier to interface with.
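Here is a rough sketch of that interface-layer role, under the assumption that the LLM’s only job is to translate a plain-language request into a call against an existing tool. The tool registry, function names, and the keyword-based route() stand-in are all invented for illustration; in practice the routing decision would come from an LLM (for instance via structured output, as in the previous sketch), not a keyword match.

```python
# Sketch: a plain-language request routed to existing tools via a registry.
from typing import Callable, Dict

def schedule_maintenance(asset: str) -> str:
    # Stand-in for an existing predictive-maintenance system's API.
    return f"maintenance ticket opened for {asset}"

def run_design_study(part: str) -> str:
    # Stand-in for an existing generative-design system's API.
    return f"generative design study queued for {part}"

TOOLS: Dict[str, Callable[[str], str]] = {
    "maintenance": schedule_maintenance,
    "design": run_design_study,
}

def route(request: str) -> str:
    # Toy stand-in for the LLM's decision about which tool to call and with what.
    tool = "maintenance" if ("inspect" in request or "vibration" in request) else "design"
    target = request.split()[-1]
    return TOOLS[tool](target)

print(route("inspect the vibration on pump-3"))   # maintenance ticket opened for pump-3
print(route("lightweight the mounting bracket"))  # generative design study queued for bracket
```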

All told, the AI revolution will mean that all these technologies, many of which have been around for more than a decade with relatively slow adoption, will suddenly take off. They will create a lot of value, most of which will be captured by the companies that own them: the generative design companies, the predictive maintenance companies, the cobot companies. Use of LLMs will scale, but these applications won’t drive a ton of volume: Generative design just doesn’t have that many users relative to, say, web search. LLMs themselves will remain a very expensive, economically risky service provided by a handful of major companies. Headlines will say the AI revolution failed and that we really should have been excited about materials informatics, or predictive maintenance, or cobots, but that won’t really be the case: LLMs will have played a crucial role in the adoption of a huge range of technologies.

The invisibility of LLMs’ likely successes behind the scenes, combined with their high visibility in trivial or annoying applications like wonky customer service chatbots, will cement AI’s reputation as a failure. Don’t cry for the LLM developers, though: Once these applications start having real impact, the big LLM players will likely capture a lot of the value created by application developers, either through acquisition or app-store-style monopoly pricing. And there will still be plenty of value left for the users of these systems, who will reap the benefits of wider deployment of efficiency- and productivity-boosting technology.
