Artificial intelligence poses an “extinction risk,” worrying researchers and senior officials


Twenty-some words which nonetheless say a lot: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

According to the Center for AI Safety (CAIS), the non-profit organization behind this latest salvo, the brief statement aims to open discussion about the most serious risks posed by artificial intelligence and to offer a short, clear summary of some of them.

CAIS wishes to inform the general public that a growing number of experts and public figures are taking seriously some of the most severe risks posed by advanced AI.

The declaration unites the voices of more than 350 signatories, most of whom are researchers and industry leaders.

A new episode of a long saga

For months, pressure has been building for world leaders to better regulate the meteoric rise of artificial intelligence.

Earlier in May, US President Joe Biden and Vice President Kamala Harris met with the heads of OpenAI, DeepMind, and Anthropic to discuss overseeing these technologies.

"We believe government interventions will be necessary to mitigate the risks associated with increasingly powerful models," OpenAI's Sam Altman said at the time, proposing in particular the creation of an agency responsible for licensing the large-scale development of products using AI.

"I think if something goes wrong with this technology, it can go quite wrong. And [...] we want to work with the government to prevent that from happening."

A quote from Sam Altman, CEO of OpenAI

Last March, some 1,000 leading figures in the field signed a letter calling for a six-month pause on the development of any AI system more powerful than OpenAI's GPT-4 model. Yoshua Bengio, businessman Elon Musk and Apple co-founder Steve Wozniak were among the signatories.

"In recent months, AI labs have been engaged in a frantic race to develop and deploy ever more powerful digital systems that no one, not even their creators, can reliably understand, predict, or control," the letter reads.

Among the questions raised in the letter published in March:

  • Should we let machines flood our media channels with propaganda and lies?

  • Should we automate away all jobs, including the fulfilling ones?

  • Should we develop non-human minds that could eventually outnumber us, outsmart us, and replace us?

  • Should we risk losing control of our civilization?

Also in March, UNESCO and Mila co-published a book that aims to reflect on the role we want to give AI within society.

Yoshua Bengio had already mentioned, at the book's launch, a danger to democracies and risks that could reach the magnitude of the nuclear threat. "What if only a few people possessed this power?" he asked.

This week, it is the turn of the G7 leaders to come together to reflect on this issue. A new working group meets for the first time on Wednesday to discuss, among other things, AI governance and the protection of intellectual property.
