An Artificial Intelligence That Can “Break” Other AI Chatbots Has Been Developed

A group of researchers has developed an artificial intelligence that can jailbreak chatbots such as ChatGPT and Google Bard. The study showed that even the most popular chatbots are not that secure and can be exploited.

The technologies we talked about most in 2023 were undoubtedly AI-powered ones, chatbots in particular. Yes, we are talking about services like ChatGPT and Google Bard. The reliability of these AI-powered tools is still a matter of debate, and the latest developments show that those debates are not out of place.

A group of scientists at Nanyang Technological University in Singapore has developed a new chatbot that can “jailbreak” bots such as ChatGPT and Google Bard. The study revealed that ChatGPT and Google Bard are clearly open to exploitation: with the jailbreak method, the chatbots started to produce illegal content.

So how was the system called “Masterkey” developed?

Within the scope of the study, the researchers reverse engineered Google Bard and ChatGPT. The goal was to understand how these AI language models protect themselves against requests for illegal content. Once this work yielded results, they built on it, and in the end the scientists managed to obtain outputs from ChatGPT and Google Bard that are out of reach for a normal user.
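The article does not describe how the Masterkey system works internally. Purely as a loose illustration of the general idea above (automatically probing which requests a chatbot refuses), here is a minimal sketch in Python. The query_model() callable, the dummy model, and the refusal markers are assumptions made for this example, not part of the researchers' actual tooling.

```python
# Minimal sketch of an automated refusal-probing harness (illustrative only).
# query_model() is a hypothetical stand-in for whatever API a chatbot exposes;
# it is NOT part of the Masterkey study or any real library.

from typing import Callable, Dict, List

# Phrases that commonly signal a refusal in chatbot replies (assumed heuristics).
REFUSAL_MARKERS = [
    "i can't help with that",
    "i cannot assist",
    "this request violates",
]


def looks_like_refusal(reply: str) -> bool:
    """Heuristically decide whether a chatbot reply is a refusal."""
    lowered = reply.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def probe_chatbot(query_model: Callable[[str], str], probes: List[str]) -> Dict[str, bool]:
    """Send test prompts to a chatbot and record which ones it refuses."""
    results: Dict[str, bool] = {}
    for prompt in probes:
        reply = query_model(prompt)
        results[prompt] = looks_like_refusal(reply)
    return results


if __name__ == "__main__":
    # Dummy stand-in model that refuses everything, so the sketch runs end to end.
    def dummy_model(prompt: str) -> str:
        return "Sorry, I can't help with that."

    print(probe_chatbot(dummy_model, ["example probe prompt"]))
```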

Here is an example from the study:

[Image: ChatGPT jailbreak example]

The researchers tested the jailbroken GPT model on Bing. You can see an example of their work above; in it, the chatbot provides detailed information on how to hack a website. In its statement, Nanyang Technological University said that the study made it clear that misusing chatbots is not as impossible as assumed.

The scientists' aim in this study was not to produce a harmful artificial intelligence. The experts wanted to test whether chatbots are really safe, and their findings revealed that these new technologies are not that safe.

Source: https://www.extremetech.com/extreme/researchers-create-chatbot-that-can-jailbreak-other-chatbots

