I’ve written about OpenAI on Innovation Matters before (and we’ve talked about it on the podcast), but this weekend we got a truly incredible episode of Silicon Valley court intrigue. I think it was inevitable, though, and it speaks to a much deeper phenomenon: the mythology surrounding Silicon Valley.
My theory is this: the amount of mythologizing you have to do about your technology or company is directly proportional to the amount of money you need to raise and the labor and material you need to consume. You have to justify the expenditures of your venture, and so it takes on whatever significance is required to match the costs. As a founder, investor, or high-level employee, you believe in the significance of your work because your paycheck depends on believing in it – your paycheck rests on your ability to raise money (or sell equity down the line). The current commercial performance, profitability, or value of your product doesn’t really matter – it’s probably a huge money pit right now; what you’re mostly concerned with is what it could be. This is very different from a manufacturing company, which also has its institutional mythmaking, but of a decidedly more prosaic kind, focused on profitable operations, excellence in serving customers, and other boring, practical qualities.
The thing about AI is that it’s extremely expensive. Training the models costs a lot of money – reportedly some $100 million for GPT-4 – and running them costs even more. You also need a huge amount of labor (clickwork to develop training data, which OpenAI sources from exploited workers in developing countries) and material in the form of data centers, GPUs, and more. OpenAI has raised USD 13 billion (of which USD 10 billion came from Microsoft), and its celebrity cofounder and CEO Sam Altman had apparently been talking about raising a lot more – both for OpenAI itself and for a number of related ventures (to produce those chips, for example).
Also, Sam, the board, and most of the senior leadership believe that AI is an existential risk to human life. I could spend a huge amount of time exploring this, but it’s a mythology that gives AI – and specifically the prospect of artificial general intelligence (AGI) – complete centrality in the outlook for human civilization going forward, for good or for ill. There’s really no rational basis for this – I have seen no clear explanation of how AI actually ends the world. If it seems contradictory for the guys building the AI to believe it’s the next nuclear bomb, well, it is – but that’s not the point. They believe it because it means that AI is the most important thing, and thus any amount of expense, human suffering, and waste in its creation is justified. They sincerely believe this mythology because it’s necessary to justify their spending – and thus, their paycheck.
On Friday, the board of directors of the charity that controls OpenAI suddenly fired Sam Altman (more on this structure later). The reasons were not immediately clear: the board stated that Altman had not been “consistently candid” with them, causing them to lose confidence; they later clarified that there had been no malfeasance by Altman. This led to widespread speculation that the source of the rift was existential AI risk, which most subsequent reporting has backed up. Over the weekend, the board seemingly backtracked and buckled, reopening negotiations with Altman as OpenAI’s investors and employees revolted. Altman has reportedly been hired by Microsoft, and most OpenAI employees are threatening to leave and join him unless he comes back to OpenAI. There has also been a lot of very entertaining and terribly cringe court drama between the people involved, most of it playing out on Twitter; check it out if you want a laugh.
The rift between Altman and the board shows that the contradictions between the reality and the mythos have become untenable. The board had bought into the nonprofit mythos (that OpenAI is spending billions to save the human race) while Altman was acting on the capitalist reality (OpenAI is spending billions to improve Microsoft’s Bing search engine and other products). Acting according to the mythos demands some caution; Altman’s drive to scale AI as aggressively as possible isn’t compatible with that.
We can see the obvious conclusion when the mythmaking collides with the capitalist incentives: the capitalist incentives win easily, because the mythos serves them. People believe the myth because it justifies capitalist outcomes; everyone at OpenAI, even those attracted by (and perhaps on some level believing in) the idealist mission, supports Altman because his ability to raise money secures their paycheck. This reveals the thinness of OpenAI’s nonprofit structure – however sincere or cynical the motivations behind it might have been (probably a mix of both), it was never going to be able to put a meaningful check on the company’s activities, certainly not once billions of dollars were at stake. As soon as the nonprofit structure became a hindrance it was discarded; this was obvious in 2019 (when OpenAI’s for-profit arm was created), and it remains true today.
What does all this mean for the future trajectory of AI? Honestly, probably nothing. All the same players are still working on the same tech, whether they end up doing so under Altman at Microsoft or under Altman at a revamped OpenAI with a more conventionally capitalist structure. The seeming failure of nonprofit control at OpenAI won’t accelerate AI deployment because that control was never a meaningful check. We’re at no greater existential risk from AI than before because that risk isn’t real. None of the players in this fracas – Altman, the board, Microsoft – meaningfully disagree about the real issues with AI (the environmental costs, the labor exploitation, or the profitability of AI products). What we witnessed this weekend was nothing more than court intrigue in our era of tech billionaire kings.