Group around OpenAI boss warns of threat from AI

Sam Altman: The CEO of OpenAI, along with hundreds of other AI experts, warns of the dangers of unregulated machine intelligence. (Photo: dpa)

Düsseldorf. The statement is only 22 words long, but it packs a punch: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” It was published by the Center for AI Safety, an American non-governmental organization based in San Francisco.

The list of signatories reads like a who’s who of artificial intelligence (AI) research. It includes Demis Hassabis and Sam Altman, the heads of Google DeepMind and OpenAI, two of the most important AI companies in the world. Also on the list are Geoffrey Hinton and Yoshua Bengio, winners of the 2018 Turing Award – a kind of Nobel Prize for computer science.

Among the 376 signatories are a handful of German AI luminaries: Frank Hutter, professor of computer science at the University of Freiburg; Joachim Weickert, professor of mathematics and computer science at Saarland University; and Ansgar Steland, professor of statistics and business mathematics at RWTH Aachen.

OpenAI boss Sam Altman signs call to regulate AI

The call is reminiscent of the open letter published by the Future of Life Institute in March 2023, which demanded a six-month pause in AI research and was signed by well-known tech personalities such as Elon Musk and Steve Wozniak.


That letter met with a mixed response, not only because of the warning itself but also over the question of whether a moratorium makes sense. AI start-ups, for example, took a critical view of it: they fear that regulation based on such warnings would prevent them from catching up with established players.


However, there are differences between the new and the old petition. The current one is more general: while it puts the dangers of AI on a par with nuclear war, it does not speak of an “out-of-control race” or of AI systems that “no one – not even their creators – can understand, predict, or reliably control”.

Almost all signatories are AI researchers

According to Dan Hendrycks, director of the Center for AI Safety, this generality is deliberate. The aim was to avoid disagreements over the exact nature of the danger or over solutions such as a six-month research pause.

Rather, the aim was to encourage a kind of “coming-out” among scientists. “There is a widespread belief, even within the AI community, that there are only a handful of doomsayers,” Hendrycks told the New York Times. “But in fact there are many who privately express their concerns.”

In fact, apart from a few exceptions – such as the pop singer Grimes, Elon Musk’s ex-girlfriend, and Jaan Tallinn, co-founder of the internet telephony service Skype – the list consists almost exclusively of AI researchers: more than 30 employees of Google DeepMind, 16 from OpenAI, and numerous professors and scientists.

However, some prominent names are missing, such as Yann LeCun, chief AI scientist at Facebook’s parent company Meta.

