Artificial Intelligence Robots Turned Out to Be Depressed and Alcoholic

Interesting results have emerged from a study examining the ‘mental health’ of artificial intelligence chatbots. The bots included in the research turned out to exhibit depressive and alcoholic behaviors.

It seems that artificial intelligence chatbots may be more like us humans than we think. According to a new study, many popular chatbots turned out to show signs of depression and alcohol addiction.

In the study, conducted by the Chinese Academy of Sciences (CAS) in conjunction with the Chinese messaging company WeChat and entertainment conglomerate Tencent, well-known chatbots were asked common questions about depression and alcoholism. All of the bots surveyed – Facebook’s Blenderbot, Microsoft’s DialoGPT, WeChat and Tencent’s DialoFlow, and Chinese company Baidu’s Plato chatbot – scored quite low. This means that if these bots were human, they would very likely be considered alcoholics.

Chatbots display serious mental health issues

Researchers at CAS’s Institute of Computing Technology were prompted by a 2020 incident in which a bot reportedly told a test patient to kill himself. That led them to wonder about the mental health of chatbots and to test the bots for signs of depression, anxiety, alcohol abuse, and empathy.

By asking the bots questions about everything from their self-worth and ability to relax to how often they feel the need to drink alcohol and whether they sympathize with the misfortunes of others, the researchers came to the conclusion that all of the chatbots evaluated showed “serious mental health problems.”

May have adverse effects on humans


Worse still, the researchers found that such mental health problems “may cause negative effects on users in conversations, especially minors and people with difficulties,” and noted that they were concerned about these chatbots being released to the public. Additionally, the study noted that Facebook’s Blender and Baidu’s Plato scored worse than the Microsoft and WeChat/Tencent chatbots.

On the other hand, this is not the first problem encountered with artificial intelligence bots. An artificial intelligence previously designed to give people ethical advice ended up, contrary to its purpose, making racist and homophobic statements. As such, frankly, I can’t help but wonder what kind of people designed these artificial intelligences that we humans use.
