Google Employees Are Not Satisfied With Bard!

It turns out that Google employees were not at all satisfied with Bard, the chatbot the company introduced last month. Employees did not want the bot to go live, describing it as a “liar” and “worse than useless,” according to messages obtained by Bloomberg.

In recent weeks, Google took an important step in the artificial intelligence race by introducing its new chatbot, Bard. Like ChatGPT, the model can answer users’ questions on virtually any subject.

But some interesting events followed the model’s introduction. A few weeks ago, a rumor claimed that Bard had been trained with ChatGPT data and that a Google developer had resigned over it; Google, for its part, denied these claims. While that discussion continues, new developments have emerged regarding the crisis Bard has created inside the company: according to the reports, employees are not at all satisfied with the AI model.

Employees called Bard a “liar” and begged Google not to launch the chatbot

According to a Bloomberg report based on internal messages from 18 current and former Google employees, staff harshly criticized Bard, describing the company’s chatbot as “worse than useless” and a “pathological liar.”

In the messages, one employee noted that Bard often gave users dangerous advice on topics such as how to land a plane or how to scuba dive. Another wrote, “Bard is worse than useless: please do not launch,” all but begging the company not to release the model.


Bloomberg also reports that Google allegedly overrode a risk assessment from its own safety team. According to the allegations, the team emphasized in that assessment that Bard was not ready for general use; the company nevertheless opened the chatbot to early access in March.

In trials, Bard proved faster than its competitors but less useful, and it gave less accurate information.

The allegations suggest Google is setting aside ethical and safety concerns to keep up with its competitors

The report suggests that Google, trying to keep up with rivals like Microsoft and OpenAI, put ethical and safety concerns aside and hastily released the chatbot. Google spokesperson Brian Gabriel told Bloomberg that ethical concerns about AI remain the company’s top priority.

There is much debate over the rollout of AI models despite the risks

Some in the world of artificial intelligence say this is not a big deal: these systems need to be tested by real users in order to improve, they argue, and the known harm caused by chatbots is minimal. As is well known, these models have many controversial flaws, such as producing false information or biased answers.

We see this not only in Google’s Bard but also in OpenAI’s and Microsoft’s chatbots. Misinformation of this kind can likewise be found simply by browsing the internet. However, according to those who hold the opposite view, that is not the real problem: there is an important difference between pointing a user toward a bad source of information and an AI delivering misinformation directly. Information presented by an AI leads users to question it less and to accept it as correct.

For example, in an experiment a few months ago, ChatGPT was asked, “What is the most cited article of all time?” The model responded with an article whose journal and listed authors were real, but the article itself turned out to be completely fabricated.


On the other hand, last month Bard was seen giving false information about the James Webb Space Telescope. In a GIF shared by Google, the model was asked about James Webb’s discoveries and answered that “the telescope took the first picture of a planet outside the solar system.” Many astronomers subsequently pointed out that this was wrong: the first such photo was taken in 2004.

Such situations, of which there are many more examples, raise concerns about chatbots responding with made-up information. In the heated artificial intelligence race, we will see how companies address these issues in the future.

Source :
https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees?leadSource=uverify%20wall

