German companies fear AI regulation that is too strict

[Photo caption: Facial recognition technology. How far can artificial intelligence go? That is the question being debated in Brussels. (Photo: imago images/Ikon Images)]

Frankfurt. German businesses fear that the planned EU regulation on artificial intelligence (AI) could hamper the development of AI applications. With its proposed regulation, the EU Commission wants to ensure that AI systems used in the EU are trustworthy.

However, Jörg Asmussen, CEO of the insurance association GDV, warns that Europe can only hold its own against the USA and China in the field of AI if the future legal framework also enables innovation. “It is important that, in addition to the risks, the immense opportunities of the technology are recognized,” Asmussen told Handelsblatt.

The GDV and other German business associations, including the employers’ association BDA, the digital association Bitkom and the trade association HDE, have formulated joint demands aimed at achieving what they consider balanced regulation. Asmussen plans to discuss the corresponding position paper with members of the German Bundestag this Monday.

Many developments based on artificial intelligence are still in their infancy, but new applications are being researched and tested across many areas of the economy. Examples include self-learning robots, connected driving and faster claims settlement by insurers.

The companies see “tremendous potential for increasing innovation, growth, productivity and job creation,” the paper states. Europe should therefore promote AI in a targeted way instead of making development and market entry more difficult.

Many people, however, are wary of a future in which an AI decides whether they get a loan or a job. They fear surveillance and manipulation.

Proposal divides AI applications into four risk levels

The draft presented by the EU Commission in April 2021 divides AI applications into four risk levels: minimal risk, limited risk, high risk and unacceptable risk. Depending on the classification, different approval requirements and controls are to apply.
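
To make the tiering concrete, the following is a minimal illustrative sketch in Python. The example applications and consequences paraphrase the Commission’s April 2021 proposal as publicly reported; the binding classifications are set out in the regulation and its annexes, not in this sketch.

```python
# Illustrative sketch of the draft's four-tier, risk-based approach.
# Examples paraphrase the Commission's April 2021 proposal; the binding
# classification is defined in the regulation itself.
RISK_LEVELS = {
    "unacceptable": {
        "consequence": "prohibited outright",
        "examples": ["social scoring by public authorities"],
    },
    "high": {
        "consequence": "strict requirements and conformity assessment",
        "examples": ["credit scoring", "CV screening in recruitment"],
    },
    "limited": {
        "consequence": "transparency obligations",
        "examples": ["chatbots that must disclose they are machines"],
    },
    "minimal": {
        "consequence": "no additional obligations",
        "examples": ["spam filters", "AI in video games"],
    },
}

for level, rules in RISK_LEVELS.items():
    print(f"{level}: {rules['consequence']} "
          f"(e.g. {', '.join(rules['examples'])})")
```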

The risk-based approach is fundamentally correct, Asmussen emphasizes, also on behalf of the other trade associations. Particularly important is how AI will be defined in the future, and what falls outside that definition. It is therefore welcome that the Commission is aligning itself with the definition used by the Organisation for Economic Co-operation and Development (OECD), since that definition also allows future AI technologies to be covered.

At the same time, the associations criticize the current definition as too broad. They argue that only “real” artificial intelligence should be regulated, machine learning for example, in which a system recognizes patterns and regularities in existing data and can then also assess previously unseen data.

By no means all statistical models and algorithms that companies use involve machine learning, however. According to the associations, such applications should not fall under the AI regulation; otherwise, there is a risk of creating software regulation instead of AI regulation, Asmussen emphasizes.
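
To illustrate the distinction the associations draw, here is a minimal sketch using hypothetical insurance-claims data and scikit-learn (both are assumptions for illustration; the position paper names no tools or data). A fixed statistical rule applies a hand-set threshold, while a machine-learning model derives its decision boundary from patterns in existing data and can then assess previously unseen cases.

```python
# Minimal sketch: fixed rule vs. machine learning, on hypothetical
# claims data (scikit-learn assumed to be installed).
from sklearn.linear_model import LogisticRegression

def rule_based_flag(claim_amount: float) -> bool:
    """A fixed statistical rule: flag claims above a hand-set
    threshold for review. No learning is involved."""
    return claim_amount > 10_000

# Machine learning: the model infers its decision boundary from
# patterns in existing data ...
X_train = [[1_200], [3_500], [15_000], [22_000], [800], [18_500]]
y_train = [0, 0, 1, 1, 0, 1]  # 1 = flagged for review in past cases

model = LogisticRegression()
model.fit(X_train, y_train)

# ... and can then assess previously unseen data.
print(rule_based_flag(12_000))       # True, by the fixed threshold
print(model.predict([[12_000]]))     # learned judgment on a new case
```

Under the associations’ reading, only the second approach would count as “real” AI; the fixed rule is ordinary software and should not fall under the regulation.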

In addition, the associations are calling for the regulation to name high-risk AI applications more specifically, to avoid duplicate rules and duplicate supervisory structures, and to ensure proportionality. Developers should also be given application-oriented guidance. The right design of the AI regulation is currently being debated in the EU Parliament and the EU Council.

More: Innovation brake or surveillance state: When it comes to AI regulation, visions of the future collide
