The head of OpenAI warns of the dangers of AI and calls for more regulations


Sam Altman, CEO of OpenAI, says it himself: AI is a danger to our society. In a blog post, he calls on governments to create a regulatory regime for this burgeoning industry, modeled on the one established for nuclear power.

The voices denouncing the dangers that artificial intelligence poses to our society have become too numerous to count. It was unexpected, however, that one of these warnings would come directly from the CEO of OpenAI, currently the most famous company in the industry. Indeed, according to the creator of ChatGPT, a system to regulate this industry on a global scale must be established quickly.

“It is possible that within the next ten years, AI systems will surpass the skill level of experts in most fields, and carry out as much productive activity as one of today’s largest companies,” he writes in a blog post. According to him, what he calls “superintelligence” will soon be the most powerful technology humanity has ever created.

An AI leader is calling for more regulation of AI

He stresses that for all the promise this technology holds, we must not underestimate the risks such power generates. To that end, Sam Altman proposes three lines of thought to better organize the sector. First, he considers that companies working in the field of artificial intelligence must coordinate to develop working methods that ensure user safety.

He thus imagines a call for projects led by governments, in which all of these companies could take part, or even a collective agreement to cap the annual growth of AI capabilities. “Companies must be held to an extremely high standard of safety,” he adds.

Next, Sam Altman argues that AI needs a regulator similar to the International Atomic Energy Agency, whose role is to ensure the peaceful use of nuclear energy. “Any effort that exceeds a certain capability threshold […] must be subject to an international authority that can inspect systems, require audits, check compliance with safety standards, and impose restrictions on degrees of deployment and levels of security, etc.”


To promote his idea, the CEO of OpenAI points out that companies could jointly work out how to cooperate with this international agency, so that the entire responsibility does not rest on governments alone. Finally, the last point relates to companies’ technical capacity to make AI safe. Sam Altman assures that OpenAI devotes considerable resources to research in this area.

He nevertheless maintains that companies must remain free to develop their projects. “But the governance of the most powerful systems, as well as decisions regarding their deployment, must be subject to strong public oversight,” he stresses, before emphasizing that AI is also a matter of democracy, in which citizens have a say.

“Existing systems will create enormous value in the world, and while they do present risks, the level of those risks appears to be commensurate with that of other Internet technologies, and society’s current approaches seem appropriate,” he concludes.

Source: OpenAI