Elon Musk’s call for a pause in AI research reveals helplessness

Elon Musk

Even a patron of the tech industry should know that the proposed AI moratorium can hardly be enforced.

(Photo: dpa)

The advances in artificial intelligence (AI) are breathtaking, and they will make history. The latest example is a research paper by 14 Microsoft scientists describing the new model GPT-4 and what they see as the first signs of superintelligence.

This was probably what prompted Elon Musk and other AI experts to publish an open letter calling for a six-month research moratorium. Otherwise, they warn, we would “risk a loss of civilization”.

Musk himself must know how futile the proposal is. But it reveals one thing above all: one of the people most familiar with AI is at a loss.

China will not participate

The demand presupposes two things: first, all relevant companies and research institutes would have to take part; second, effective regulation would have to be put in place during the pause.

There are doubts about both points. Given the geopolitical tensions, it is extremely unlikely that China would participate. It is the declared aim of the People’s Republic to be number one in this key technology. Measured by the number of research papers, the country is already the clear number two behind the USA, and in some areas, such as computer vision, it already leads.

If the moratorium is only partially observed, there is a risk that we will end up with a Chinese rather than a Western “Artificial General Intelligence” (AGI). The thought of a superintelligence is uncanny, but an AGI shaped by a totalitarian regime would be even more unsettling.

AI can hardly be regulated

This is not science fiction. Experts speak of the danger of an “intelligence explosion”: as soon as an AI can think and act like a human, it will build a new, improved version of itself. That version will in turn build an even better one, and so on, in an unstoppable process.

But let’s assume that China and the rest of the world think better of it and halt research for half a year. Can we regulate AI in that time? The EU has been working on regulation since 2021, so far without result. It is not taking this long because the EU works slowly or bureaucratically. It is taking this long because regulation is a highly complex, some say impossible, undertaking. For example, one could regulate the training data so that certain prejudices or biases are not baked into the model. That makes sense, but given the sheer volume of data, it is hardly feasible.


The Musk persona doesn’t help much

Regulation appears to be an illusion, as Musk and the authors at the Future of Life Institute themselves write of “mighty digital brains no one – not even their creators – can understand, predict or reliably control”.

It is not regulation but technology that can help us. Techniques such as Reinforcement Learning from Human Feedback (RLHF) not only improve responses dramatically, they also curb manipulation and bias.

It also doesn’t help that the proposal is so closely linked to Musk. Although the letter officially comes from the Future of Life Institute, everyone in the tech community knows that the institute was founded by Musk and his allies and depends on him financially.

And Musk has interests of his own. Tesla relies on AI for its autonomous driving systems more than any other carmaker. Measured by the number of graphics processors, the chips on which AI models are trained, Tesla’s supercomputer is the third largest in the world, behind Meta’s system and the EU’s Leonardo.

Musk has been warning of the dangers of AI for many years. You have to give him credit for that. Does he want to advance his own interests, or does he really mean what he says? It is hard to tell, and that does not make things better.

