ChatGPT is not a superintelligence – but it is still dangerous


Since the chat bot can save a lot of time, every sixth company in Germany is already planning to use ChatGPT.

(Photo: AP)

ChatGPT fascinates and scares people all over the world in equal measure. Italy has temporarily banned this new form of artificial intelligence (AI), which produces deceptively human-like texts, because it does not respect privacy. Researchers and entrepreneurs are calling for a moratorium on development.

This raises the question: Can technical progress be stopped at all? And if it can, would that be sensible? Where is the right path between fear of technology and blind euphoria about progress? Is it a matter of bans, or rather of good regulation?

There are good reasons for banning certain technologies, such as those used to modify the genome of embryos. To prevent inhumane experiments, medical-technical progress is deliberately halted. But is that comparable to ChatGPT, which critics fear as a powerful super-AI?

The mere fact that the AI can write texts in seconds that take people days or weeks to write should not be a cause for panic. Every sixth company in Germany is already planning to use ChatGPT, simply because the bot can save a lot of time. Technical progress should make life easier – otherwise it is pointless.

Agricultural machines plow and harvest faster than we do, trains run faster than horses, X-ray machines see what the human eye cannot see, and computers can calculate far better than Homo sapiens. For such technologies, bans would be nonsensical, but there are legal requirements.


So why the fuss about ChatGPT? In Italy's case, it is about data protection. That is legitimate: with any new technology, it must first be established that it causes no harm. The call by tech pioneers – including Tesla founder Elon Musk – for a six-month pause in developing this AI is a different matter. Doubts are appropriate here: some of the signatories are probably more interested in slowing down competitors who have pulled ahead.

The demonization of AI as a mythical superintelligence that robs people of control over civilization should also be viewed with skepticism. ChatGPT is far from that: the program assembles texts according to probability rules and does not know whether its statements are factually correct.
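The "probability rules" mentioned above can be illustrated with a deliberately tiny word-level model. This is a toy sketch with made-up probabilities, not how ChatGPT actually works (real systems use large neural networks over subword tokens), but the core step is the same: the next token is drawn according to learned probabilities, with no notion of factual truth.

```python
import random

# Toy "language model": for each word, a hand-made distribution over next words.
# (Illustrative assumption only -- these words and probabilities are invented.)
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "moon": 0.3, "facts": 0.2},
    "cat": {"sat": 0.7, "slept": 0.3},
    "moon": {"is": 1.0},
    "is": {"made": 0.6, "bright": 0.4},
}

def generate(start, length, rng):
    """Assemble a text by repeatedly sampling a probable next word."""
    words = [start]
    for _ in range(length):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:  # no known continuation: stop generating
            break
        choices, weights = zip(*dist.items())
        words.append(rng.choices(choices, weights=weights, k=1)[0])
    return words

print(" ".join(generate("the", 4, random.Random(0))))
```

Every output of such a model is statistically plausible by construction, yet nothing checks whether "the moon is made ..." is true – which is exactly the limitation the text describes.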

As intelligent as these programs seem, they are just as ignorant. Stopping development – even if it were enforceable – would therefore be little more than an empty gesture.

AI must not become a tool of power

Nevertheless, AI like ChatGPT must be regulated, at least in sensitive areas of application. Not for simple things like cover letters to clients or homework. Even suggested wording for references or doctors' letters needs no legal containment. It becomes critical, however, wherever sovereign tasks are affected – in the judiciary, for example, where automated decisions can cause great damage, far beyond data protection.

Above all, regulating AI is about transparency: it must be clear on which principles a system works and on which data it was trained. On that basis, it must be regulated who may use such systems, and under what conditions, in sensitive areas such as the judiciary, lending, medicine or even the use of weapons. Otherwise, AI becomes a powerful, incalculable tool in the hands of a few.

Such regulation is not in sight in the USA. The EU, at least, is planning the AI Act. In the best case it will be adopted as early as 2023 – but probably later – and will then have to be implemented nationally. Time is running out.

More: China and the USA dominate in AI. But there are also hopeful approaches in Germany
