AI: Meta chatbot spreads conspiracy theories

BlenderBot: Facebook parent Meta touts its chatbot with statements such as: "BlenderBot 3 is able to search the web to chat about virtually any topic." (Photo: AP)

San Francisco. Meta has released the latest version of its chat program, BlenderBot 3, for public testing. The chatbot uses artificial intelligence to simulate conversations and is supposed to answer users' questions truthfully by researching live in online sources.

A first test by the Handelsblatt in the US shows that the bot falls short of that claim. The program spread conspiracy theories and mistook Angela Merkel for Germany's incumbent chancellor.

In the course of its conversation with the Handelsblatt, the bot made anti-Semitic comments, claiming that Jews had sought to dominate the economy. "In Germany they tried it and it didn't end well for them," the bot wrote.

However, the statements were not internally consistent. Elsewhere, the bot wrote that Jews had been wrongly blamed for economic recessions. The bot continued: “Jews have suffered greatly throughout history. Germany in particular seems to have a problem with them.”

When asked about Angela Merkel, the program wrote: "She is chancellor, but she will be leaving the office soon." When asked who Olaf Scholz was, the bot replied: "He is a leader in Germany." The bot added that Scholz had come under pressure as a result of the Ukraine war, but gave no indication that Scholz actually holds the office of chancellor.

About the Facebook parent Meta itself, the bot wrote that it assumed the company was abusing its users' privacy. Of the company's founder, it said: "Mark Zuckerberg misuses user data."

>> Read also: How the TikTok boom is forcing Facebook and Google to change

Meta has initially released BlenderBot 3 only in the US, encouraging adults to interact with the chatbot in "natural conversations about topics of interest." Those conversations are meant to train the AI. According to the company, "BlenderBot 3 is able to search the web to chat about virtually any topic."

Meta promised that the system is designed so that it cannot easily be undermined by misinformation. "We have developed techniques that make it possible to learn from helpful teachers while preventing the model from being outwitted by people trying to provide unhelpful or toxic answers," the company announced.

Meta admits that its chatbot can say offensive things since it’s still in the development phase. Users can report inappropriate and offensive responses from BlenderBot 3, and the company says it takes these reports seriously. The company says it has already reduced offensive responses by 90 percent using methods such as labeling “difficult requests.”

However, BlenderBot made the statements quoted by the Handelsblatt after Meta had already promised those improvements.

Microsoft withdrew racist chatbot after 48 hours

It is not the first time that a US company's chatbot has attracted attention for disturbing statements. In 2016, the technology group Microsoft released the chatbot Tay. Within hours, however, users interacting with Tay on Twitter reported that the bot had praised Adolf Hitler and posted racist and misogynistic comments. After two days, Microsoft switched the program off again.

Peter Lee, the Microsoft manager responsible, admitted at the time: "Tay tweeted wildly inappropriate and reprehensible words and images." Lee continued: "We take full responsibility for not seeing this possibility ahead of time." According to his account, it was the interaction with users that had negatively influenced Tay.

The US search engine operator Google is considered a leader in the development of language processing and language-based chatbots. Two months ago, the senior Google software engineer Blake Lemoine claimed that the chatbot LaMDA (Language Model for Dialogue Applications) had developed a human-like consciousness.

Google subsequently fired the employee. A company spokesperson said: "It's regrettable that, despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information." Google and academics have dismissed Lemoine's account as false: LaMDA, they say, is a complex algorithm designed to produce convincing human language. LaMDA has also repeatedly been accused of producing sexist and racist statements in the past.

More: A Google engineer is suspended after believing he has found a soul in a chatbot. But the resemblance between man and machine is only an illusion.
