Discussion about regulation in Germany

Berlin. As in Italy, the AI software ChatGPT could also be temporarily blocked in Germany over data protection concerns. “In principle, a similar approach is also possible in Germany,” a spokeswoman for the Federal Commissioner for Data Protection, Ulrich Kelber, told the Handelsblatt. However, this would fall under the jurisdiction of the state data protection authorities, since the operator, US-based OpenAI, is a company.

The Federal Ministry of the Interior said, with regard to the actions of the Italian authorities, that media reports were being followed “carefully”. The benchmark for any official intervention is the General Data Protection Regulation (GDPR), which applies directly throughout Europe.

Kelber’s authority has now asked the Italian data protection supervisory authority for “further information” on the blocking of ChatGPT. This would then be passed on to the responsible state data protection and media authorities, the spokeswoman said.

The Digital Ministry, headed by Volker Wissing (FDP), came out clearly against blocking ChatGPT in Germany. “We don’t need a ban on AI applications, but ways to ensure values such as democracy and transparency,” said a ministry spokesman. According to the ministry, the EU legal framework currently being planned could make “Europe a global pioneer for trustworthy AI”.

Since Friday, ChatGPT has no longer been accessible from Italy. The country’s data protection authority justified the block with violations of data protection and youth protection rules. US-based operator OpenAI was ordered, under threat of a fine, to stop offering the service in Italy “effective immediately”.

The Italian data protection authority is considered the first in the world to block ChatGPT on the basis of data protection rules. The immediate trigger was a data breach: some ChatGPT users had been able to see information from other users’ profiles. According to OpenAI, the problem was caused by a bug in software used by ChatGPT.

The data protection officials complained that OpenAI does not provide users with sufficient information about how their data is used. In addition, the chatbot lacks a youth protection filter, even though the company itself recommends the service only for users aged 13 and over.

Data protection expert sees no reason for a ChatGPT ban

ChatGPT is based on software that ingests massive amounts of text. On this basis, it can formulate sentences that are barely distinguishable from those written by a human: the program estimates which words are most likely to come next in a sentence. Among other things, this basic principle carries the risk that the software will “hallucinate facts”, as OpenAI calls it, i.e. present incorrect information as correct.
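The next-word principle described above can be illustrated with a deliberately simplified toy model. The sketch below counts which word follows which in a tiny sample text and then "predicts" the most frequent successor; this is an assumption-laden illustration for intuition only, not how ChatGPT actually works, since real large language models use neural networks rather than frequency tables.

```python
from collections import defaultdict, Counter

def train_bigram_model(corpus: str) -> dict:
    """Count, for each word, how often each other word follows it."""
    model = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model: dict, word: str):
    """Return the most frequent successor of `word`, or None if unseen."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

model = train_bigram_model("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # most frequent successor of "the" is "cat"
```

The toy model also hints at why such systems can "hallucinate": it outputs whatever continuation was most common in its training text, with no notion of whether the resulting sentence is true.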

The Italian data protection authority instructed OpenAI to report within 20 days on the measures taken to ensure the protection of user data in the country. Otherwise the company faces a fine of up to 20 million euros or four percent of its global annual turnover.

>> Read also: What ChatGPT can do

The former data protection officer of Baden-Württemberg, Stefan Brink, sees no reason to slow down AI software like ChatGPT on data protection grounds. “AI does regularly use personal data for training purposes,” the director of the Scientific Institute for the Digitization of the Working World (wida) told the Handelsblatt. “But insofar as the data is obtained from the internet, the legitimate interests of the developers regularly outweigh the protection needs of those affected.”

(Photo caption: Stefan Brink. The former Baden-Württemberg data protection officer does not see youth protection issues as AI-specific. Photo: dpa)

This applies in any case when AI is developed for research purposes; no particular risks have been identified there. Where questions of the protection of minors arise, they must always be considered, but that is by no means specific to AI.

“The German supervisory authorities should therefore – unlike the Italian ones – observe the development, but not build up precautionary opposition for the sake of sensation and publicity,” said Brink. It is not the task of supervisory authorities to stop new digital technologies. “Education certainly helps, doubting does not.”

The head of the Federal Agency for Disruptive Innovation, Rafael Laguna, is also against a ban. “You can’t stop such digital technology developments with bans; they just take place somewhere else,” he told the Handelsblatt. In effect, a country would only be barring itself from the opportunities. “We’re in the middle of a disruption. It’s better to research it, take part, shape it into what you want and then specifically prevent what you don’t,” says Laguna.

AI regulation by the EU is still pending

Experts have far greater concerns about the general lack of AI regulation than about data protection. An AI regulation, known as the AI Act, is currently being negotiated at EU level and should ideally be passed this year. Member states would then have two years to implement it. The Council of Europe is also working on an AI human rights treaty.

A major problem with AI applications that work with language is so-called “bias”: the distortion of statements by stereotypes or prejudices in the training data. The director of the Center for Information and Language Processing at LMU Munich, Hinrich Schütze, draws a parallel to weapons or genetic engineering, fields with firm boundaries and access rules. “Just as human cloning is prohibited by law in genetics, there must also be a set of rules that sets limits for language models,” the computational linguist demands.

Yet even the planned AI Act would classify AI like ChatGPT as a “low-risk” system, subject only to general transparency rules, explains Silja Vöneky, international law and AI expert at the University of Freiburg. The program would merely have to identify itself to users as a chatbot. The problem, she says, is that “regulation has so far been conceived too ‘statically’ and cannot react quickly enough to new risk situations created by new AI systems”.

In addition, the AI Act ignores the possible further development of a so-called “artificial general intelligence”, i.e. a more far-reaching AI that performs as well as humans – or even better. This is exactly what companies like OpenAI are working on, the professor warns. The question is therefore “whether and how we want to regulate this and minimize its risks – or whether we want to leave that to research companies like OpenAI”. If programs like ChatGPT were placed in the high-risk category, stricter rules would apply in sensitive areas such as the world of work, lending or the judiciary.

More: What slows down AI research and start-ups in Germany
