Industry leaders say AI will be as dangerous as ‘pandemics or nuclear war’


The ChatGPT chatbot logo is shown on a smartphone in Washington on March 15, 2023.

"Mitigating the risk of extinction [of humanity] from artificial intelligence [AI] should be a global priority, alongside other societal-scale risks such as pandemics or nuclear war." This single sentence, alarming in tone, constitutes the entire content of the new petition launched on Tuesday, May 30, by 350 figures from the AI sector.

Read also: Elon Musk and hundreds of experts call for a "pause" in the development of artificial intelligence

Led by the San Francisco-based NGO Center for AI Safety, the initiative echoes the March 28 open letter calling for a "pause" in advanced research in the field, signed by more than a thousand figures, including Tesla CEO Elon Musk. But Tuesday's text was also endorsed by industry leaders: Sam Altman, CEO of OpenAI, creator of the ChatGPT chatbot; Demis Hassabis, CEO of Google DeepMind; James Manyika, senior vice president in charge of AI regulatory and ethical issues at Google; Eric Horvitz, chief scientific officer of Microsoft; and Dario Amodei, formerly of OpenAI and founder of Anthropic, a start-up backed by Google.

Among the other signatories are several promoters of the letter calling for a six-month pause, including Max Tegmark, of the NGO Future of Life Institute, and Stuart Russell, of the Center for Human-Compatible Artificial Intelligence, a laboratory affiliated with the University of California, Berkeley. Some prominent researchers have also recently come around to the idea that AI poses an existential threat to humanity, including Geoffrey Hinton, who has left Google, and Yoshua Bengio of the University of Montreal.

Also read the interview with Yann LeCun, head of AI at Meta: "The idea of wanting to slow down research in AI is akin to a new obscurantism"

Together with Yann LeCun, they are considered the "fathers" of modern artificial intelligence and are recipients of the prestigious Alan Turing Award. But LeCun, who heads artificial intelligence research at Meta, takes a more reassuring and optimistic view, and does not see why AI programs would turn against humans.

The origins of OpenAI

Why would leading players in a booming sector invite countries around the world to treat their technology as a major risk and regulate it accordingly? The initiative seems counterintuitive, but it can be explained by recalling the beginnings of OpenAI.

At the time, in 2015, Elon Musk had already spent months warning the media about the dangers of artificial intelligence, which he judged "potentially more dangerous than nuclear bombs." Some argued that an "artificial general intelligence" superior to humans could, by accident or by error, become hostile to them. That did not stop Mr. Musk from co-founding OpenAI, whose original goal was precisely to achieve such an artificial intelligence, one "beneficial to humanity."


