Is AI existentially dangerous or not?

Mr. Bostrom, Mr. Socher, ChatGPT has captivated the public, but it also stirs up fears. You, Mr. Bostrom, warned nine years ago in your book “Superintelligence” that AI could become more intelligent than humans and even threaten us. Do you feel vindicated?
Nick Bostrom: I would recommend looking at the development with a mixture of fear and hope. We are moving towards transformative AI capabilities that will potentially shape the future of our species and our life on this planet. AI can carry existential risks, but it can also help to solve many problems. It would be wrong to meet it with indifference or complacency.

Richard Socher: It makes perfect sense to approach new technology with caution. If the inventors of the two- and four-stroke engine had thought about the effects on the climate, we might have had electric cars much earlier.

Still, I don’t see the existential risks Nick describes. That is an unnecessary fear that distracts from the real problems. AI can write countless untruths, but it cannot necessarily spread them.

Experts speak of hallucinations: AI sometimes makes things up. It also often produces distorted results and reproduces prejudices. Should we really stop concerning ourselves with these difficulties?
Bostrom: Certainly there are many important issues. Some are more obvious and need to be solved sooner. But there are also those that lie further in the future and carry great risks. We should focus on both types of problems, just as society has to deal with cancer drugs, traffic accidents and world peace at the same time. We don’t have the luxury of focusing on just one problem. We have already wasted 20 years.

Socher: I can write a lot of cool books about time travel or about a cloak that makes you invisible. In a book like that I can think through all the problems that might arise. That’s cool and interesting, but ultimately it’s still science fiction.

Richard Socher

The German computer scientist researches applications for artificial intelligence in Silicon Valley.

(Photo: Urs Bucher/St.Gallen)

Bostrom: But there is a big difference between worrying about time travel and worrying about AI, into which many billions of dollars are being invested. As far as I know, no one is working on time machines or anything like that. But all the big tech companies are fighting over AI talent and Nvidia chips. Countries are developing national strategies to be at the forefront.

There is a reason why so many in the AI community are signing petitions. The leaders of the top AI labs, including Anthropic, DeepMind and OpenAI, along with more than a hundred professors, recently stated that the threat of annihilation by AI is comparable to the threat of nuclear war.

Did you sign the letter, Mr. Socher?
Socher: Of course not. There are so many petitions coming out; some AI experts sign them while others find them annoying. The scenarios conjured up by Nick, like the one involving the paperclips, are highly unlikely.

You are alluding to the well-known “paperclip” scenario in Bostrom’s book: an AI is programmed with the goal of making as many paperclips as possible. The software has nothing against people, nor does it want to seize world domination, but it wants to achieve its goal by any means necessary. If the AI is intelligent enough, it can acquire resources and other machines en masse, or even convince humans to help it, until the whole world does nothing but make paperclips.
Socher: This is a completely negligible scenario. How can a machine be intelligent enough to achieve such complex sub-goals, but not intelligent enough to know that these quantities of paperclips would also have to be sold?

Why should a company hand over all the resources, quite apart from the computing power, that would be necessary for this? And why can’t you simply switch the AI off? Why would it work completely independently, with infinite resources and its own power supply, without depending on servers or anything like that? It’s like a badly written screenplay in which, on closer inspection, you find many contradictions.

Nick Bostrom

The book “Superintelligence” by the Swedish philosopher is part of the literary canon in AI.

(Photo: Sportsfile/Getty Images)

Nevertheless, many believe in it.
Socher: Nobody is working on that, because you can’t make money with it. You don’t make money building a super-smart car that decides on its own to drive somewhere to watch a sunset with its cameras. Even the likes of Elon Musk want the car to be smarter, but not for it to decide to go for a drive by itself while avoiding anyone in its path.

Bostrom: But there are many investors who want to make machines smarter. And if they could make them super-intelligent, I’m sure many would be very happy to do so. That is the goal of many research laboratories. And it’s not as if someone would explicitly build an AI that produces as many paperclips as possible and then unleash it on humanity. The example simply means that you could specify any goal.

And for almost every conceivable goal, there are understandable reasons for the AI to secure more resources. With more resources, it can better achieve its goal, whatever that may be. Those resources could be computing power or increases in its own intelligence. The AI could deceive its developers about its true capabilities to avoid being modified or switched off before it achieves its goals.

Nick Bostrom: Superintelligence.
Suhrkamp Verlag
Berlin 2016
480 pages
24 euros
Translation: Jan Erik Strasser

Socher: The language models don’t even have a complete idea of what a word is. They see individual parts, the tokens, and learn interesting things about the world by predicting the next word. That is really meaningful and very exciting. But will a language model now wipe out humanity to make sure the next word is always “aaaa”? It won’t. And it never will.
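What Socher describes can be illustrated in a few lines: a language model sees text only as token IDs and is trained to produce a probability distribution over the next token. Here is a minimal sketch, assuming the publicly available GPT-2 model from the Hugging Face transformers library purely as an illustrative stand-in, not as the model discussed in the interview:

```python
# Minimal sketch: how a language model sees text as tokens and
# predicts the next one. GPT-2 is used only as a freely available example.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "Artificial intelligence can write countless"
input_ids = tokenizer.encode(text, return_tensors="pt")

# The model never sees whole words, only token pieces and their IDs.
print(tokenizer.convert_ids_to_tokens(input_ids[0].tolist()))

with torch.no_grad():
    logits = model(input_ids).logits

# The entire training objective: a probability distribution over the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(idx)]):>12s}  {prob.item():.3f}")
```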

Why the world has waited in vain for flying cars

Mr. Socher, you made an interesting bet about Artificial General Intelligence (AGI), which is the ability of a computer program to learn any intellectual task that a human can perform – which would be a kind of precursor to superintelligence. Tell me more about it.
Socher: Yes, the bet has been running for a few years with one of the founders of OpenAI, who believes that we will reach AGI in 2027. We have defined three criteria for this. First, a robot must be able to clean the entire house, sort all the socks, wash the dishes and everything else. Second, the AI must solve a previously unsolved mathematical problem. And third, it must be able to translate a book as well as a human being.

I don’t know, maybe if someone put a lot of effort into it, AI really could translate a book as well as a human. At the moment there are still many inconsistencies that would not happen to a human translator, but that could work. We are a long way, though, from a robot cleaning my house. AI will also help us come up with creative new ideas in mathematics, but inventing a whole new field of mathematics to solve an unsolved problem is going to be difficult.

When do we reach AGI, Mr. Bostrom?
Bostrom: 2027 is indeed very ambitious. But whether it happens then or a few years later doesn’t matter; we cannot afford to sit back. I don’t have a fixed year in mind, but I would take predictions in the single-digit range of years seriously. The uncertainty is great, though; predicting technological breakthroughs is difficult. I started working on my book “Superintelligence” in 2008, it came out in 2014, and since then the deep learning revolution has progressed remarkably. There is no sign of it slowing down.

Socher: With the same hubris and excitement as for AGI, some people predicted self-driving cars. But all the start-ups have failed. We typically anticipate the wrong things. Only a few decades lie between the beginnings of aviation with the Wright brothers and flying at the speed of sound. People were convinced that we would all be driving flying cars by the 1980s or 1990s. But nobody has one.

We have a very intelligent AI that is obviously easy to talk to. If you set it up correctly, it can read all kinds of texts and incorporate statistics. But it has no mind of its own.

AI needs to be regulated by use case, not in general

Mr. Bostrom, should we introduce a six-month moratorium on AI research, as Elon Musk and others are calling for?

Bostrom: I’m torn about that. I didn’t sign the petition. It might be useful to pause at a critical juncture and slow down for six months, a year, or even two years. Then whoever develops the superintelligence would have the opportunity to proceed cautiously and set up safeguards. That’s a lot better than having 20 different labs and countries vying to be the first to cross the finish line. That would be the most dangerous way.

And what speaks against it?
Bostrom: What is the point of a six-month moratorium? What would happen after that? Extend the break by a year? In that time, a major regulatory apparatus could potentially be established and begin stigmatizing AI research. It’s unlikely in my opinion, but it could go so far that AI becomes taboo and is banned. It would also be an existential risk for humanity if we never develop anything greater than what we already have.


Socher: We should regulate AI. But we should regulate the applications, for example when AI is used in a self-driving car or when a neurosurgeon operates on a brain tumor. Misplaced fears should not lead us to regulate AI in general, for instance by prescribing a certain number of parameters. That makes about as much sense as slowing down the internet so the AI doesn’t learn to develop a brain virus so quickly, or making chips run slower so the AI can’t think as fast. None of this makes sense.

What’s your suggestion?
Socher: There should be no general authority over AI, just as there is no authority over computers. But there should be application-specific authorities, such as the FDA in the USA, which oversees drug development. Can you use AI to discover new proteins? Absolutely. Can you give researchers money to develop a deadly virus using AI and protein engineering? No way.

All the fears that people associate with AI are primarily fears of people wanting to abuse this new tool for their own benefit. In many ways, AI holds up a mirror to us. Humanity does many bad things to itself, and that is what we should specifically regulate.

More: How Sam Altman and Jensen Huang are shaping the AI boom.
