Artificial intelligence should be regulated like nuclear weapons

Sam Altman

He has been leading the AI start-up OpenAI since 2019.


San Francisco. According to OpenAI CEO Sam Altman, artificial intelligence (AI) is so powerful that it should be regulated like nuclear weapons. "We need a global supervisory authority," Altman demanded on Wednesday. Under his leadership, the start-up OpenAI released the language AI system ChatGPT, fueling the global race for solutions based on artificial intelligence.

"I would like to see something similar to the Atomic Energy Agency for artificial intelligence," Altman said at a panel discussion during payment processor Stripe's annual conference in San Francisco. No atomic bomb has been used as a weapon since 1945. "Hardly anyone would have thought that possible at the time," Altman said. He credited this success in part to the International Atomic Energy Agency (IAEA).

The Vienna-based authority was founded in 1957 and comprises more than 150 member states. The organization sends inspectors to countries that use nuclear material to generate energy. It has established safety requirements for the handling and use of nuclear material and reviews compliance with them. It also aims to ensure that knowledge from nuclear energy is not diverted to military purposes, for example to build atomic bombs.

Microsoft integrates many OpenAI technologies

OpenAI is considered a leading provider of language models based on artificial intelligence. Altman helped finance the company's founding in 2015 and took over as CEO in 2019. OpenAI was originally founded as a non-profit, but Altman built a commercial subsidiary that sells access to the company's AI systems.

A number of companies use OpenAI's AI systems. One of its closest partners is the hardware and software maker Microsoft, which according to industry estimates has invested around 13 billion dollars in OpenAI. Microsoft is currently integrating OpenAI's systems into almost all of its products.

Expert: AI could be a powerful tool for terrorists

Data-protection advocates and AI experts have been warning about misuse of the technology for months. Elections, for example, could be manipulated through misinformation spread by the millions, cyber-security expert Bruce Schneier has cautioned. In addition, AI systems could become a powerful tool in the hands of terrorists.

The EU is currently preparing a law to regulate AI. The draft of the “AI Act” currently being discussed by the EU Parliament provides for a classification according to the degree of risk. For example, AI for recruiting staff or for operating critical infrastructure should be classified as high risk.


This classification obliges developers and users to closely monitor how their systems function; lower risk classes carry lighter requirements.

OpenAI boss Altman: Universal AIs by the end of the decade

Development in artificial intelligence is currently making huge leaps, Altman said. He expects an "artificial general intelligence" (AGI) to exist by the end of the decade. The term has been common among AI researchers for decades but has so far remained theoretical: it denotes an AI that can understand and master any intellectual task a human being is capable of.

Altman defined AGI as a system that can dramatically accelerate scientific progress. However, he added that he does not expect the system to operate fully autonomously.

Altman named the two areas where he sees the most interesting new applications of artificial intelligence. First, education: here AI can serve as a learning aid that tailors teaching content to each individual student. Second, he called for better ways of interacting with AI systems: "We have to get away from chatbots." He did not, however, give any concrete examples.

