How AI could become a threat to democracy

Deepfake of an alleged arrest of Donald Trump

This fake photo of supposed unrest during an arrest of the former US President recently caused a stir.

(Photo: Eliot Higgins)

Berlin. The images seem clear at first glance: Federal Minister of Economics Robert Habeck steps in front of the cameras and announces that the federal government has decided to take a “very extreme but important step”.

The video, which is circulating on the Internet, appears to be from the Phoenix TV station, as the logo at the top of the screen reveals. Habeck explains that an “emergency law” has been passed to close all outdoor pools in Germany. The reason: “attacks” and “unwelcome incidents” in the past.

But Habeck never said that. The video is a so-called deepfake: a real video sequence fitted with a new audio track that changes what is said. What makes it so insidious: the voice sounds like that of the Economics Minister, and his lip movements have been adapted to match the new content, a manipulation that is only possible with the help of artificial intelligence (AI).

Sarah Thust’s job is to find and recognize fakes like this. On her website and on social media, she and her team from the non-governmental organization Correctiv expose the deepfakes and describe the indicators that can be used to distinguish an original from an AI copy. The fact-checkers’ main weapon lies in their human understanding of how the world works, what is logical and what is not.

“You have to pay very close attention to the details,” explains Thust. For example, whether the lip movements match what is said, whether the wall in the background suddenly warps, or whether a hand has too many fingers.

Deepfakes with AI: Fact checkers like Correctiv on the hunt for misinformation

On closer inspection, Robert Habeck, for example, seems to lose his upper lip for a few milliseconds. But Thust fears that the human advantage of recognizing deviations from reality, so-called “glitches”, could gradually disappear. Because: “The AI is able to learn from its own mistakes.”

>> Read here: Like the grandchild trick, only with AI: deepfake scammers blackmail companies with fake boss voice

Software that can be used to create fake photos and video sequences has become a mass-market product in a very short time. The deepfakes generated in this way circulate on the internet without labeling, passing as real content.

This text is part of the large Handelsblatt special on artificial intelligence. Are you interested in this topic? You will find all texts that have already appeared as part of our theme week here.

Once the fakes are in the world, fact-checkers like Correctiv have a hard time catching up with the false information, because it spreads so quickly via social media. The fear: deceptive deepfakes could erode the population’s shared factual basis and endanger democracy in the long term.

Federal Interior Minister Nancy Faeser is therefore also alarmed. “AI can enable criminals or secret services to manipulate citizens more easily and to flood public debates with lies and propaganda,” the SPD politician told the Handelsblatt. “Deepfakes that imitate or falsify voices or faces can be a very dangerous tool here.”

Nancy Faeser

The Federal Minister of the Interior warns of the power of deepfakes.

(Photo: dpa)

The same applies to the artificial reinforcement of anti-democratic narratives. “The decisive factor is that we always have to counter with facts and quickly identify and disclose fakes,” emphasized Faeser.

Fakes with AI: Deepfakes are a threat to the stock market

Events in May showed, however, how difficult it is to react quickly enough to the spread of fake content. A photo, presumably manipulated with AI, circulated on the internet purporting to show an explosion at the Pentagon in Washington. Before the US Department of Defense could deny the incident, the stock exchanges had already reacted to the supposed bad news with falling prices.

The Pentagon from above

A fake image purporting to show an explosion near the Pentagon was circulated widely on social media, briefly sending shivers through the stock market.

(Photo: AP)

Fact checker Thust assumes that some forgeries are never exposed at all. She suspects there is a great deal of fake content that is never identified as such, especially in digital spaces frequented by people who hardly consume serious news anymore. That is why she demands: “It would be very important for people to learn to see through them themselves.” So far, however, this digital media literacy is not particularly well developed.

Interior Minister Faeser also emphasizes how important it is to “educate and raise awareness in our society” on the subject. However, she also advocates stricter rules for dealing with artificial intelligence – in order to curb possible dangers to democracy. “We need legal answers such as clear labeling requirements,” Faeser told Handelsblatt.

>> Read here: This is how the EU wants to prevent AI fraud

However, Armin Grunwald, Professor of Philosophy of Technology at the Karlsruhe Institute of Technology, observes that many states have reacted “helplessly” to the challenges arising from the mass-market availability of artificial intelligence.

“In Italy, the application was briefly banned out of sheer shock.” Grunwald sees the need for regulatory authorities, like AI itself, to learn from technical progress and from their own mistakes in order to adequately counter technological innovations. “We cannot expect regulation to be created once and then left unchanged for ten years.”

Grunwald, who also heads the Office of Technology Assessment at the German Bundestag, has observed that society’s factual basis was already eroding before the spread of AI, mainly due to fake news on social media.

AI radicalizes itself: danger for democracy

But self-learning technologies could give fake news a new dimension. “AI-controlled chatbots can evolve through feedback on their fake messages,” explains Grunwald. The problem: the AI gets more reactions to particularly polarizing statements, an incentive to become ever more radical.
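The feedback loop Grunwald describes can be illustrated with a deliberately simplified toy model (all names and numbers here are invented for illustration, not taken from any real system): a generator chooses among messages of varying polarization, more polarizing messages earn more reactions, and those reactions reinforce the choice, shifting future output toward the extremes.

```python
import random


def engagement(polarization):
    """Toy assumption: reactions grow faster than linearly with polarization."""
    return polarization ** 2


def simulate(rounds=200, seed=0):
    """Reinforcement loop: every message level starts with equal weight;
    each reaction a level earns makes it more likely to be chosen again."""
    rng = random.Random(seed)
    levels = [1, 2, 3, 4, 5]            # 1 = moderate, 5 = highly polarizing
    weights = {lvl: 1.0 for lvl in levels}
    for _ in range(rounds):
        pick = rng.choices(levels, weights=[weights[l] for l in levels])[0]
        weights[pick] += engagement(pick)  # feedback: reactions reinforce the pick
    return weights


final = simulate()
# Typically, the more polarizing levels accumulate far more weight over time,
# i.e. the system drifts toward increasingly extreme output.
```

The point of the sketch is only the dynamic: because feedback is strongest for the most polarizing output, an initially uniform system does not stay uniform.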

In addition, there is the so-called “automation bias”. This refers to the proven tendency of humans to place excessive trust in machine-generated content. One could also say: distrusting information on a screen goes against human nature.

At the same time, however, many people are concerned about the impact of AI on society. Every second person in Germany between the ages of 16 and 75 sees AI applications such as the text generator ChatGPT as a threat to democracy. This is the result of a recently published representative survey by the opinion research institute Forsa, for which 1,021 people were interviewed.

“Citizens fear a wave of false news, propaganda and manipulated images, texts and videos,” said Joachim Bühler, Managing Director of the TÜV Association. The association commissioned the survey from the institute.

Deepfake: AfD uses AI for its campaigns

Whether the population’s concerns are justified should become clear above all during election campaigns. Some AfD MPs are already having fake images generated by AI and using them for their own political messaging.

In March, the deputy AfD parliamentary group leader Norbert Kleinwächter posted a picture of an aggressive-looking group of young men on Instagram, along with the message: “No to more refugees”. Kleinwächter had not indicated that this image was generated by AI. The AfD politician told ARD that the motifs were “optically clearly recognizable as artificial illustrations. Labeling is therefore unnecessary.”

Deputy AfD parliamentary group leader Norbert Kleinwächter

The politician uploaded an AI-generated image to his Instagram account.

(Photo: dpa)

The other parties are now faced with the question of how to position themselves in a 2025 federal election campaign that could be influenced by AI-generated content. Ronja Kemmer (CDU) is the rapporteur for her parliamentary group in the Bundestag’s Digital Committee. She observes that the AfD has for years been putting considerable effort into using digital technologies to spread disinformation. She herself, however, only wants to use AI in the election campaign where it is compatible with the party’s “ethical standards”, for example to summarize discussions on social media.

But she also sees that the parties need to get better at responding to disinformation and deepfake campaigns. Kemmer is nevertheless confident that “in a world with increasing use of generative AI, people are very aware of which parties are trying to mislead them”.

Grunwald, the Bundestag’s technology assessment adviser, quotes the philosopher Immanuel Kant, who said: dare to think for yourself. “The most important thing,” says Grunwald, “is that we trust ourselves to make our own judgments and not just parrot what some digital systems say.”

More: The Frankenstein Moment: If we don’t control artificial intelligence, it controls us
