ChatGPT fails at programming questions

OpenAI’s artificial intelligence chatbot ChatGPT is gradually becoming indispensable for many users. Students and employees in particular turn to ChatGPT and similar bots to get through homework and work tasks faster. But a new study suggests we shouldn’t rely too heavily on ChatGPT for answers to computer programming questions.

The research, conducted at Purdue University, was presented earlier this month at the Computer-Human Interaction Conference in Hawaii. It examined ChatGPT’s answers to 517 programming questions taken from Stack Overflow, a question-and-answer platform for software developers and programmers.

According to the research, 52% of ChatGPT’s answers to programming-related questions contained incorrect information, and 77% of its answers were unnecessarily long. Even so, 35% of the study’s participants preferred ChatGPT’s answers because of their well-articulated language style and comprehensiveness.

The programmers surveyed overlooked the misinformation in ChatGPT’s responses 39% of the time, which highlights the risk posed by answers that merely look correct. The research helps quantify problems that will be familiar to anyone who uses tools like ChatGPT.

Major technology companies continue to invest billions of dollars in building the most reliable chatbots for users. Meta, Microsoft, and Google are racing to claim a new space that could reshape our relationship with the internet. But there are still many problems they need to overcome.

The most important of these problems is that bots such as ChatGPT fail not only at programming but also at many other kinds of questions. Google’s new AI-powered search feature, for instance, has been observed presenting articles from unreliable or satirical sources as accurate information.

So should users trust these kinds of chatbots with even the most ordinary questions? Or do they still need time to mature?