How much regulation can Europe afford?

Brussels, Washington. There is no term that politicians and officials in Brussels like to use more than the “Brussels Effect”. It describes how countries around the world imitate EU legislation, be it in climate protection, data protection or labor law. “Without a doubt, our rules will create a ‘Brussels Effect’,” says Dragos Tudorache.

He is one of the two MEPs leading Parliament’s work on the AI Act, the law with which the EU wants to mitigate the risks of artificial intelligence (AI). Tudorache says he gets calls from India, Brazil, Australia, from all over the world. Everyone wants to know: how can AI be used and regulated at the same time?

The EU was long ridiculed for the AI Act: while the Americans develop and invent, the Europeans only come up with new rules. But since ChatGPT came onto the market, warnings of doomsday scenarios in which AI escapes human control and turns against humanity have grown louder. Calls for a development pause are coming from the industry itself.

For many, the transparency obligations and risk analyses planned in the EU no longer seem excessive but rather too timid, a good first step at most.

At least as long as they do not hinder the development of AI in Europe. That is exactly what companies fear. “The AI Act leads to uncertainty,” says Jörg Bienert, President of the German AI Association. Under the planned rules, European start-ups working on so-called foundation models would have an even harder time against competitors from the USA. Foundation models are base models trained on gigantic amounts of data for machine learning. Bienert expects that outside Europe there will continue to be opportunities to develop AI applications without having to meet any such requirements.

This text is part of the Handelsblatt special on artificial intelligence.

Will the AI Act become the global standard for AI rules? Or will it slow Europe down as a tech location? The legislative process is not yet complete: the EU’s two legislative bodies, the Parliament and the Council, have each drawn up their own version of the text.

Few bans, many conditions

By the fall, they could agree on a version that all sides can support. Because of the necessary transition periods, the rules will probably not take effect until 2026.

Many cornerstones are already clear. The EU will divide AI applications into four risk classes: low, medium, high and unacceptable. Transparency obligations will apply to medium-risk applications, so users will know that they are using an AI product.

Only a few systems will be classified as “unacceptable” and thus banned, among them “social scoring”, to prevent the state from rating its citizens along the lines of the Chinese model. It is disputed whether “predictive policing”, systems designed to predict crime, should also fall into this category. Facial recognition in public spaces by the police could also be restricted.

>> Read more: How artificial intelligence should save the state

Until the release of ChatGPT, the debates surrounding the AI Act revolved around such civil-rights issues. The bans now planned would curb the reach of state power, but they would hardly be relevant for the AI industry in Europe.

The rules for applications in high-risk areas, on the other hand, are highly relevant for developers. The list is long. According to Parliament, it includes biometric identification, management of critical infrastructure, education, worker management, provision of public services, interpretation of laws, law enforcement, asylum, border controls and more.

In many of these areas, AI can play a useful role. The AI Act is not meant to drive AI out of them; rather, developers and users of such systems are expected to handle their new capabilities responsibly.

The Act therefore prescribes risk-management systems that must be continuously updated. There are also requirements for the training data and the technical documentation. The systems must be able to log their own output. They must be transparent enough that users can interpret the results. And they must allow human oversight, including a “stop” button.
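Those three concrete duties, storing the system’s own output, labeling results as AI-generated, and giving a human a working “stop” button, can be pictured as a thin wrapper around any model. The sketch below is a hypothetical illustration of that idea, not anything the draft law itself prescribes; the class and method names are invented for this example.

```python
import datetime

class OverseenModel:
    """Hypothetical wrapper sketching three AI-Act-style duties:
    output logging, transparency toward the user, and a human 'stop' switch."""

    def __init__(self, model, name):
        self.model = model      # any callable: prompt -> text
        self.name = name
        self.stopped = False    # human-oversight kill switch
        self.log = []           # stored outputs (audit trail)

    def stop(self):
        """A human overseer halts the system."""
        self.stopped = True

    def generate(self, prompt):
        if self.stopped:
            raise RuntimeError(f"{self.name} was stopped by a human overseer")
        output = self.model(prompt)
        # Duty 1: store the system's own output with a timestamp.
        self.log.append({"time": datetime.datetime.utcnow().isoformat(),
                         "prompt": prompt, "output": output})
        # Duty 2: tell the user that an AI produced the answer.
        return f"[AI-generated by {self.name}] {output}"

# Usage with a stand-in "model"
m = OverseenModel(lambda p: p.upper(), "demo")
print(m.generate("hello"))  # → "[AI-generated by demo] HELLO"
m.stop()                    # duty 3: after this, generate() refuses to run
```

Real compliance would of course involve far more (risk assessments, documentation, data governance); the point is only that the obligations map onto recognizable engineering patterns.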

Competition Commissioner Margrethe Vestager, who has often taken a hard line with large tech companies in the past. (Photo: via REUTERS)

According to Parliament, generative systems that produce images and text should not be classified as high-risk across the board, but should be subject to similar rules. Only compliant systems will receive the CE mark, familiar from many other products, and may be marketed in the EU.

AI in Europe must also reflect European values, says Kristian Kersting, Professor of Machine Learning at TU Darmstadt, and the AI Act will help with that. “If the EU gets it right, the AI Act can even become a locational advantage, because ‘made in Europe’ then stands for tested quality,” he says.

“ChatGPT should not be on the market in this form under our rules,” says Tudorache. First, providers would have to prove that the prescribed safeguards have been implemented. “If you’re smart enough, you can use ChatGPT in malicious ways,” says Tudorache. “We want to prevent that.”

Suddenly, things are supposed to move fast

For Margrethe Vestager, all of this is moving too slowly. The Executive Vice-President of Ursula von der Leyen’s Commission played a key role in the first drafts of the AI Act. Now she sees herself overtaken by reality: “We feel a great urgency,” she says, because generative AI is developing so quickly. “I’ve been told that the next generation is only months away. And a single-digit number of months.”

>> Read more: SAP boss Klein: “Generative AI will fundamentally change how people work with our software”

Simply waiting for the rules of the AI Act is therefore not an option. The EU Commission wants to put the most important rules into practice as quickly as possible, ideally by the end of the year and in an internationally coordinated manner.

The EU wants to agree an “AI Pact” with the USA and then jointly present it to the G7 states, which would bring Canada, Japan and Great Britain on board as well. The “Brussels Effect” could thus occur before the AI Act has even appeared in the Official Journal.

The negotiations on this are taking place in the Transatlantic Trade and Technology Council (TTC). When the forum was founded in 2021, AI was still a marginal topic, but at the most recent meeting in Stockholm at the end of May it suddenly took center stage.


That the USA wants to move closer to the more regulation-minded EU on future technologies is considered a big step by those familiar with the AI industry. At least on paper, the US has committed to agreeing common standards with the EU, “including AI and other new technologies”.

However, no laws can be agreed at the international level, only rules to which companies can voluntarily commit.

“It will take a while for the US Congress or our regulators to catch up,” US Secretary of Commerce Gina Raimondo said in Stockholm. The transatlantic framework is a chance to give companies initial guidance.

Last year, the USA lobbied side by side with the tech companies against the European AI rules.

Joe Biden is driving regulation

As the US broadcaster CNN recently reported, Biden sat with his top advisers in the Oval Office in April. A staffer typed in a prompt: “Summarize the Supreme Court ruling in New Jersey v. Delaware and turn it into a Bruce Springsteen song.”

>> Read also: Seven graphics show where in the world the development of artificial intelligence is concentrated

When he saw the result, Biden was fascinated and worried at the same time. “I don’t think there has ever been in human history such a fundamental potential technological shift as the one that artificial intelligence represents,” Biden said publicly.

The US President is now said to be personally keen to regulate the new technology, while at the same time giving it enough room that the USA is not left behind by Chinese innovation. According to the White House, there are dedicated meetings on AI “two to three times” a week.

So far, insiders in Washington say, the talks are more about reaching a basic understanding of the risks and opportunities. But it is also being discussed whether AI models will have to pass a certification process before they can be released.

Years may pass before there are actually binding standards in the USA, not least because the US Congress is split between Democrats and Republicans.

Even if the Americans quickly follow suit, AI association leader Bienert sees a disadvantage for the Europeans: “The developers will have to anticipate all possible use cases in order to eliminate the risks. American corporations can afford that, but European start-ups cannot,” he says.

More: Why AI companies are demanding stricter laws and politicians aren’t delivering
