Industry warns of “massive restrictions” from the AI Act

Düsseldorf The final negotiations on the so-called AI Act begin in these weeks. With it, the European Union (EU) wants to regulate the use and development of artificial intelligence (AI) in almost all areas of life. On Tuesday, the member states intend to adopt a position in the Council, which will then be negotiated with the EU Parliament and the Commission.

This is a key regulatory project for the bloc. Artificial intelligence is one of “the most strategically important technologies of the 21st century,” the EU Commission writes in a paper. “The way we approach AI is crucial for the world we will live in in the future.” But the way the EU is approaching the legislation scares many companies.

Brussels wants to set a standard for how the technology will be designed in the future, even beyond the borders of Europe. In AI development, the USA and China are essentially competing for world market leadership. When it comes to money and talent, data and computing power, Europe lags behind the other two economic areas – but politically it has leverage.

The details of the regulation are likely to be negotiated up to the last minute. But its basic outlines already have start-up founders, corporate executives and association representatives fearing regulation that will create great uncertainty and interfere heavily with the development of the technology.


An IT company warns that the broad definition covers numerous products that have little to do with AI. Managers at a DAX group criticize the “vagueness” of the project. The digital association Bitkom warns against “focusing too much on risks”. And the KI-Bundesverband even sees “the entire AI ecosystem and, to a large extent, the use of software in general” massively restricted.

Why does the EU want to regulate artificial intelligence?

The EU Commission sees artificial intelligence as a technology with great potential – for better or for worse. According to the draft law, there is a chance of “many benefits for the economy and society”, whether in climate protection, in the healthcare sector or in sectors such as mobility. However, there are also new risks, for individuals and for society.

How the AI Act assesses risk

A few examples illustrate this. The organization Algorithm Watch complains that automated decision-making systems – which often use AI – repeatedly discriminate against people, whether in the allocation of jobs or in the biometric recognition of faces. There is also a lack of information on exactly how these systems work, making it difficult to challenge their decisions.

Artificial intelligence requires large amounts of data as learning material – data that, however, contains human prejudices, some of them hidden. In general, data quality is of crucial importance for the results. In addition, the results of the calculations are often difficult to understand. The algorithm: a black box.

The EU Commission therefore wants to ensure that research institutions and companies develop artificial intelligence according to “European values”. The hope is to create a “gold standard” for regulation: as with the GDPR, Brussels would set rules that ideally take effect globally – and at the same time strengthen Europe as a business location, which is currently losing importance, not least because of energy prices.

How does the AI Act regulate artificial intelligence?

The current draft of the AI Act, including annexes, is 125 pages long. It provides for a risk-based approach: the rules depend on the level of risk assumed for a particular application – minimal, limited, high or unacceptable.


The focus of the AI Act is on high-risk applications, which the Commission estimates account for up to 15 percent of all AI systems. The regulation covers the operation of critical infrastructure as well as algorithmically assisted surgery. Also included: systems that pre-sort job applications and those that predict offender behavior. Risk models for life insurance and credit ratings in the banking sector also fall under this definition.

The AI Act imposes strict requirements on these applications: companies must introduce risk management for artificial intelligence, fulfill transparency obligations towards users, submit technical documentation with detailed information on the data used, and register their program in an EU database.

Where could there be difficulties?

Most companies refrain from public criticism, instead exerting their influence through the associations, which, according to business circles, are regularly active in Berlin and Brussels. And indeed, the current draft of the EU Council already takes some suggestions into account.

Nevertheless, from the point of view of business, there is still room for improvement. The main criticism is aimed at the definitions. In addition to “concepts of machine learning”, the draft law also designates statistical approaches as well as search and optimization methods as artificial intelligence. The KI-Bundesverband complains that this covers almost every piece of software being developed today.


From the technology industry's point of view, which applications entail a high risk must also be defined much more precisely. Bitkom demands that specific applications not be classified across the board: not every program in a human resources department sorts CVs, and not every piece of software at an electricity supplier controls the grid.

A DAX group criticizes that it is unclear who bears the bureaucratic obligations for complex products such as machine controls, robots or cars – given the coordination required between manufacturers and numerous suppliers, a “huge overhead” can be expected, especially in German industry.

Last but not least, the technology industry sees a need for discussion when it comes to handling data: the data is supposed to be “representative, error-free and complete” in order, for example, to prevent discrimination against underrepresented population groups. Developers point out, however, that high-quality data sets are available only to a very limited extent. The requirement is therefore likely to be difficult to meet.

More: Why hardly any company in Germany uses AI
