AI expert and philosophy professor Nick Bostrom warns of the dangers of AI

Düsseldorf The book “Superintelligence” made Nick Bostrom famous in 2014. The Swede teaches philosophy at Oxford University and holds master’s degrees in physics and computational neuroscience, a field that brings together findings from neuroscience, medicine, mathematics, computer science and physics.

In his bestseller, Bostrom warns that an artificial intelligence (AI) system could become a force superior and ultimately hostile to humans. Using an example that became famous, he showed that this idea need not follow the script of a Hollywood film in the style of the Terminator: an AI is programmed with the goal of producing as many paper clips as possible. The software has nothing against people, nor does it want to take over the world – it just wants to do its job.

If the AI is smart enough, it can seize resources such as metal and energy in vast quantities and convince other machines or even humans to help it with its goal – until everyone and everything in the world does nothing but make paper clips.

In an interview with the Handelsblatt, the 50-year-old warns of the dangers of artificial intelligence, which he believes will soon be far more intelligent than humans. The head of the Future of Humanity Institute in Oxford does not want to name a specific date, but is already surprised by how quickly AI has developed.

To prevent the misuse of AI, Bostrom calls for more so-called alignment research, in which neural networks are given certain regulatory specifications during training. Another problem, in his view: the leading AI developers do not talk to each other enough. The professor also warns against a market-driven race that does not leave enough time for alignment.

Regulating AI is important, says Bostrom. But regulation progresses too slowly while the field develops at breakneck speed. Governments would have to deal with it at the highest level.

Read the full interview with Nick Bostrom here

In an open letter, tech figures such as Elon Musk, Peter Thiel and Steve Wozniak warn of artificial intelligence (AI): it poses “profound risks to society and humanity”. The letter calls for a six-month research moratorium. Did you sign?
No. I’m not a fan of signature drives; they can commit you in unforeseen ways, and I want to remain open and objective. But current language models are evolving very rapidly. It was a big jump from GPT-3 to GPT-4, and nobody knows what GPT-5 will be able to do. So the letter makes sense.

Your book “Superintelligence”, published nine years ago, influenced many of the signatories. In it you warn against AI developing superhuman intelligence and becoming a threat. Is that now becoming reality?
We are moving towards transformative AI capabilities. The era of machine intelligence brings with it significant risks, including existential risks – like the annihilation of humanity. Because we are creating something that will be far more powerful than we humans are.

Isn’t that scaremongering? AI models are “stochastic parrots” that use probabilities to decide which word or pixel comes next in a sentence or image. GPT-4 can write poetry, search for flights or draw pictures – how is it supposed to destroy humanity?
GPT-4 does not pose an existential risk now. But other, much more powerful AI models are being built, and they may have a general intelligence that will be greater than that of humans. This can be achieved with current token sequence prediction.

By that you mean how current AI models work. They break data into tokens, or “characters”, to make it readable by machines. The AI learns, for example, which word is most likely to follow another. An exercise in statistics, one might think. How is an “Artificial General Intelligence” (AGI) supposed to emerge from this?
To be really good, it is not enough for the AI to remember certain letter frequencies. To correctly predict which word a person will say next, for example, the model has to form a kind of picture of the other person: where the person is from, where they work, or how the economy is doing in their home country. All of this helps the model predict the next token.
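
To make the mechanism described here concrete, the following is a minimal sketch of next-token prediction under stated assumptions: the vocabulary, the logits and the function names are purely illustrative, while real models such as GPT-4 compute comparable scores with billions of learned transformer parameters.

```python
# Toy sketch of "token sequence prediction": score candidate tokens,
# convert the scores into probabilities with a softmax, and pick the next token.
# Vocabulary and scores are invented for illustration only.
import math
import random

vocab = ["paper", "clip", "factory", "poem", "flight"]

def softmax(scores):
    """Turn raw scores (logits) into a probability distribution."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_next(logits):
    """Sample the next token from the probabilities implied by the logits."""
    probs = softmax(logits)
    token = random.choices(vocab, weights=probs, k=1)[0]
    return token, probs

# Hypothetical logits for the context "produce as many paper ...":
# a trained model would compute these from everything it knows about the context.
logits = [0.2, 4.1, 1.5, -2.0, -1.0]  # "clip" gets by far the highest score

token, probs = predict_next(logits)
print({w: round(p, 3) for w, p in zip(vocab, probs)})
print("predicted next token:", token)
```

The principle is the same: scores over candidate tokens are turned into probabilities and one token is chosen. Everything Bostrom calls a “picture of the other person” lives in how a real model computes those scores from the context.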

ChatGPT

The start-up OpenAI published ChatGPT in autumn 2022 – and thus brought artificial intelligence to the general public. The AI is based on the GPT-4 language model.

(Photo: dpa)

The models are also “multimodal”: they process not only language and text, but also images and other information at the same time.
Ultimately, a truly accurate model of the world will be built, because that helps with planning, strategizing, researching and inventing technology – all the things that humans do. But these models operate at digital speed and with brains the size of a department store. The results will be transformative.

>> Read also: What OpenAI’s AI can do

That’s a fascinating notion, because today’s AI doesn’t understand the world; it follows instructions. Such a “world model” would be a breakthrough – and also a frightening one.
There are some concerns there. One would be for the model to freak out and kill everyone. But before that, it is much more realistic that people will do bad things with this powerful tool. A repressive regime can use AI to wipe out the opposition even more efficiently. Everyone’s communication can be monitored much better than with current data mining. One could create accurate models of people and their political beliefs, even from things they wrote between the lines or hinted at on the phone. One could create propaganda bots or robotic police drones.

How can we avoid such horror scenarios?
Well, a good place to start would be to invest significantly more in alignment research. There is already enough money and other resources, but we need to give the task more recognition and status. We have to get brilliant young people to work on the theory.

By alignment you mean the theory of “aligning” AI models, with which neural networks are steered onto certain paths at an early stage. What else do you suggest?
All leading AI labs should have better communication channels with each other. It is crucial that machine superintelligence is not reached through fierce market competition. In a race between 20 countries or development teams, nobody will stop to build additional safety measures into the model out of sheer caution – because you would immediately fall behind. It would therefore be good if the three leading providers OpenAI, DeepMind and Anthropic talked to each other. That would be a good start.

What can politics do?
There is great opportunity in regulation. However, my fear is that it will take years to reach an agreement and that a lot will have changed in AI by then. In general, it would be a good thing for governments to deal with it at the highest level. Because if a dramatic intervention suddenly becomes necessary, for example to close an AI laboratory or shut down a computer cluster, it will not be a bunch of bureaucrats making that decision over the weekend. It will be top politicians, and it will have to be immediate. It would be like a foreign aircraft carrier suddenly appearing off the coast.

>> Read also: Strict requirements for AI – Members of the European Parliament want to regulate ChatGPT more strictly

Elon Musk wants to build the “Truth-GPT” AI model, which he says has no biases and tells “the truth”. What do you make of that?
Elon is genuinely concerned about AI. He takes on big media companies and tech companies, which can also pose a risk. If they control the AI, they could censor the models or twist them ideologically. But I am speculating – I cannot speak for Elon.

Nick Bostrom

The philosophy professor and director of the Future of Humanity Institute in Oxford recognized the rise and problems of artificial intelligence very early on.

(Photo: Sportsfile/Getty Images)

GPT-4 has performed flawlessly so far. OpenAI boss Sam Altman always emphasizes that the “alignment” training of the language model makes it more truthful and less toxic.
GPT-4 is good. Some try to outsmart the model, but this “jailbreaking” is not easy. Interestingly, it did not behave correctly at first: it gave advice on how to kill the most people as efficiently as possible or how to create a biological weapon.

But that soon stopped. Later, some users only managed to elicit such strange statements from the model after many hours of questioning. Isn’t that a bit contrived?
The fact remains that the AI was behaving in a way the developers did not want to see. Microsoft limited the number of questions and may have taken other measures to block unwanted answers. But there are always ways around that.

When do you think artificial intelligence will surpass humans?
I don’t have a specific year in mind, there are too many and too uncertain factors. But I think there’s a good chance that we’ll see that in the near future. When I wrote “Superintelligence” in 2014, I would not have thought that the development would progress so quickly.

What has changed since then?
At that time, the so-called deep learning revolution began. It was a time of change. Today we are faced with another change, but this time it is one that is not only recognized in a small, specialized research area. Back then, systems like Deep Blue or AlphaGo won at games like chess or Go, and that was kind of amusing in a way. Yes, impressive too, but people didn’t think it would change their lives. That has now changed.

AlphaGo

The AI program beat top Chinese player Ke Jie at Go in 2017 – one of the last games in which software had not yet surpassed humans.

(Photo: AP)

Microsoft developers have already seen signs of “consciousness” in GPT-4 in a research paper. Do you agree?
There are obviously different philosophical views on what consciousness is and isn’t. In my opinion, the term is more complex than you think. The naïve view is that consciousness works like a switch: you either have it or you don’t. But are we really always conscious? For example, when we are in a room with many prominent details, neuroscientific experiments show that people are not fully aware of their visual perception. You could say we are only aware of a specific small object that we are dealing with at the moment. We zoom in and out of our existence. It seems continuous, but is actually more of a flickering.

And what does that have to do with AI?
It is not impossible that there are different degrees of consciousness in some current AI systems. As the systems acquire more and more of the functions of human brains through sophisticated attention mechanisms, it becomes less and less possible not to ascribe to them a degree of consciousness.

Mr. Bostrom, thank you for the interview.

More: Google Software Specialist Fired – Claims to Have Recognized Consciousness in AI Program
