Overly strict regulation nips innovation in the bud


Artificial intelligence, AI for short, has made massive progress in recent years and especially in recent months, and that scares some people. Members of the European Parliament are therefore now discussing stricter regulation of language-based AI such as the ChatGPT program.

However, overly strict requirements for AI-based applications are the wrong approach. Overly strict regulation nips innovation in the bud and would put Germany and Europe at a competitive disadvantage in the medium term.

The global development of AI-based software will not slow down just because Europe sets a tighter regulatory framework. If anything, the opposite is the case: the basic technologies behind language-based AI such as ChatGPT are available to everyone, and they are advancing in a highly competitive environment, with the strongest competition coming from the USA and China.

EU experts now want to determine which applications should be classified as high-risk technologies. But even a risk-based approach to regulation can thwart valuable innovation: if a risk classification is ill-defined or too broad, it can sweep in trivial applications as well.

A case-by-case assessment of the technology, as German industry is demanding, would be the better approach. In addition, AI regulation must be drafted with great precision and leave ample room for different use cases.

It goes without saying that transparency in AI-based applications must be guaranteed and that such applications must not lead to discrimination. Research already offers working definitions that show in detail how anti-discrimination rules for AI could be put into practice.


However, the time is not yet ripe for a more fine-grained risk classification. At present, neural network techniques in particular, which include language-based programs such as ChatGPT, cannot make their risk of error measurable; they cannot even estimate the probability that their results are correct. It is therefore questionable how supervisory institutions are supposed to arrive at such precise classifications when assessing risk.
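To make this point concrete, consider a minimal sketch in Python with PyTorch (the model and all numbers are illustrative assumptions, not anything from a real system): even a completely untrained network emits a tidy probability distribution over its outputs, which is exactly why a softmax "confidence" score cannot be read as a measured risk of error.

# Minimal, illustrative sketch: an untrained network with random weights
# still produces a well-formed probability distribution, so a softmax
# score alone says nothing about actual correctness.
import torch
import torch.nn as nn

torch.manual_seed(0)
untrained = nn.Linear(16, 3)                    # random weights; has learned nothing
probs = untrained(torch.randn(1, 16)).softmax(dim=-1)
print(probs, probs.max().item())                # a tidy distribution, yet no error guarantee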

In addition, even clearly defined risk classifications create considerable bureaucratic uncertainty. This is likely to inhibit companies, especially start-ups but also small and medium-sized enterprises, even more than is already the case.

The technological leaps we are seeing are not a reason to condemn and tightly regulate AI-based applications as a whole; rather, they are a spur to explore how the uncertainties of AI can be quantified. Research into how this could be done is already under way.
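One such research direction is Monte Carlo dropout (Gal and Ghahramani, 2016): run the same input through the network many times with dropout kept active and read the spread of the predictions as a rough uncertainty signal. The sketch below, again in Python with PyTorch, is a minimal illustration; the model, layer sizes, and number of passes are assumptions chosen for the example, not a production recipe.

# Minimal sketch of Monte Carlo dropout as one approach to quantifying
# neural-network uncertainty. All sizes and counts are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.2),                          # kept stochastic at inference time
    nn.Linear(64, 3),
)
model.train()                                   # train mode keeps dropout active

x = torch.randn(1, 16)                          # one (random) example input
with torch.no_grad():
    samples = torch.stack(
        [model(x).softmax(dim=-1) for _ in range(100)]  # 100 stochastic passes
    )

mean_pred = samples.mean(dim=0)                 # averaged class probabilities
spread = samples.std(dim=0)                     # disagreement across passes,
print(mean_pred)                                # read as a rough uncertainty estimate
print(spread)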

With time come understanding and knowledge, and regulation should follow as both grow.

More: Strict requirements for AI – Members of the European Parliament want to regulate ChatGPT more strictly
