Where Does AI's Value Come From?

OpenAI has removed the prohibition on using its AI for “military and warfare” from its policies; soon after, it became clear that it is working with the U.S. Department of Defense on multiple projects. This should not come as a surprise to anyone: Silicon Valley as a whole has had a symbiotic relationship with the U.S. military from the very beginning, and military applications for AI are inevitable (though OpenAI’s work with the Pentagon is mainly on cybersecurity, and the company says it still maintains its ban on using its AI for weaponry or other destructive capabilities).

I’ve written before about how much of the way companies like OpenAI talk about their products is mythmaking — they position AI as central to human life going forward because that justifies the huge amount of money they are raising and spending. OpenAI’s policy prohibiting the military use of AI was born of this same mythmaking. “Oh no,” says OpenAI, “our AI is too powerful, too important, to ever be applied to base uses like war” — this obviously serves the myth of AI. Once the money is there — and as OpenAI continues its awkward evolution from a nonprofit into a commercial business — the story quietly changes. 

More interesting, though, than the erosion of OpenAI’s lofty principles is exactly what value the military is going to get out of its partnership with OpenAI. I hosted a webinar on AI, “You, Me, GPT: Unpacking the impact of AI on innovation” (if you missed it, you can catch the recording here). In this webinar, I proposed two main sources of value: automation, the value created by reducing the need for human labor, and knowledge, the value created by accelerating human understanding through AI’s ability to model complex problems. There are examples of both in military applications, especially if you set aside any ethical concerns. Processing large amounts of data with AI to identify potential targets is a clear example of automation (though, to be fair, one that OpenAI says, at least for now, it would not allow its tech to be used for). OpenAI’s proposed project on understanding and preventing veteran suicides will hopefully be an example of the knowledge value of AI: Using AI to better understand why these deaths occur should make us better able to prevent them. I discuss these two forms of value at length in the webinar, so check that out if you want to hear more.

What OpenAI’s military work got me thinking about was the secret third value of AI: non-attribution and non-traceability. Decisions made by AI or output produced by AI cannot be attributed to a specific person or fully explained. AI of course uses enormous amounts of human-generated work in its training data, but it’s basically impossible to trace any reason or action back to a specific piece of training data in an AI system and say that it was the source or basis for a specific output. Such non-attribution and non-traceability are potentially valuable to the military, often in bad ways. There’s a longstanding legal battle between the American Civil Liberties Union and the CIA to uncover information about the targeting of drone strikes. The criteria that these agencies use to put individuals on kill lists remain a secret; the use of AI in assessing threats would make it far more difficult to determine why any particular person was targeted, which is likely part of the reason why the CIA is already developing its own ChatGPT-style tools.

Non-attribution and non-traceability are potentially even more valuable to businesses. The non-attribution of AI-generated media is a huge value proposition, as it means that companies do not have to pay royalties or any other compensation to the creators of the content used to produce it, often even in situations where AI is mimicking a particular artist’s style. Even though DALL-E was trained on copyrighted images, those original creators won’t see any revenue. This is true for other forms of media as well: The New York Times is suing OpenAI because ChatGPT was trained on its articles “without permission or payment,” and the lawsuit demonstrates that ChatGPT is capable of recreating passages from New York Times articles verbatim. The New York Times is limited, however, to comparing its articles with the output of ChatGPT because there’s no way to say for sure how the New York Times data were actually used. Such non-traceability and non-attribution are extremely valuable to OpenAI: In any other context, a program that recreates New York Times articles would be a slam-dunk copyright violation, but here, there’s a real shot that OpenAI will successfully defend its practices. And the value of non-attribution and non-traceability doesn’t end with copyright: For companies that want to skirt laws on hiring practices and discrimination, IP infringement, or royalties, being able to say their decisions were based on an AI output could provide a shield.

The use of digital technologies to skirt regulations or outright break laws has been a key feature of at least the last decade of Silicon Valley innovation. One of the gig economy’s core value propositions is to transform full-time employees (with all the legal protections that entails) into unprotected contractors, even though the same work is being done. Uber knowingly and intentionally broke many laws as it sought to scale as quickly as possible. Blockchain technology, and Bitcoin in particular, has facilitated illegal drug trade, fraud, and far more; the whole crypto ecosystem is built on an ethos of evading financial laws and regulations. Many uses of AI are not so different: They provide a mechanism for evading accountability and skirting copyright and other laws. This is not a mistake, or an accidental outcome of OpenAI’s desire to uplift humanity; it’s a core feature of the system, a key value proposition of AI, and one of the key drivers of its adoption.
