
"Mitigating the risk of extinction [of humanity] from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." This single, alarming sentence constitutes the entire content of the new petition launched on Tuesday, May 30, by 350 figures from the AI sector.
Led by the San Francisco-based NGO Center for AI Safety, the initiative echoes the March 28 open letter calling for a "pause" in advanced research in the field, signed by more than a thousand figures, including Tesla CEO Elon Musk. But Tuesday's text was also endorsed by industry leaders: Sam Altman, CEO of OpenAI, creator of the ChatGPT chatbot; Demis Hassabis, CEO of Google DeepMind; James Manyika, senior vice president at Google in charge of AI regulatory and ethical issues; Eric Horvitz, chief scientific officer of Microsoft; and Dario Amodei, formerly of OpenAI and founder of Anthropic, a Google-backed startup.
Among the other signatories are several promoters of the letter calling for a six-month pause, including Max Tegmark of the NGO Future of Life Institute and Stuart Russell of the Center for Human-Compatible Artificial Intelligence, a laboratory affiliated with the University of California, Berkeley. Several prominent researchers have also recently come around to the idea that AI poses an existential threat to humanity, including Geoffrey Hinton, who has left Google, and Yoshua Bengio of the University of Montreal.
Both are considered "fathers" of modern artificial intelligence and received the prestigious Turing Award along with Yann LeCun. But the latter, who heads artificial intelligence research at Meta, takes a more reassuring and optimistic view, and does not see why AI programs would turn against humans.
The origins of OpenAI
Why would the leaders of a booming sector invite countries around the world to treat their technology as a major risk and to regulate it accordingly? The move seems counterintuitive, but it can be explained by looking back at OpenAI's beginnings.
At the time, in 2015, Elon Musk had already spent months warning the media about the dangers of artificial intelligence, which he judged "potentially more dangerous than nuclear bombs." Some argued that an "artificial general intelligence" superior to humans could, by accident or by mistake, become hostile to them. That did not stop Mr. Musk from co-founding OpenAI, whose original goal was to achieve such an artificial intelligence "beneficial to humanity."