ChatGPT inventors want to detect machine-written text

OpenAI: The San Francisco company developed ChatGPT. (Photo: AP)

New York. OpenAI, the company behind ChatGPT, has released a program intended to distinguish whether a text was written by a human or by a computer. ChatGPT is so good at imitating human language that there are concerns it could be used, among other things, to cheat on schoolwork or to run large-scale disinformation campaigns.

The detection is still rather mediocre, as OpenAI admitted in a blog post on Tuesday. In test runs, the software correctly identified computer-written texts in 26 percent of cases. At the same time, nine percent of texts written by humans were incorrectly attributed to a machine. OpenAI therefore recommends not relying primarily on the “classifier’s” assessment when evaluating texts for the time being.

ChatGPT is artificial intelligence-based software trained on massive amounts of text and data to imitate human language. At the same time, the program can convincingly mix completely incorrect information with accurate information. OpenAI made ChatGPT publicly available last year, prompting both admiration for the software’s capabilities and concerns about fraud.

Google has also been developing software that can write and speak like a human for years, but has so far refrained from releasing it. Now the internet group is letting employees test a chatbot that works similarly to ChatGPT, the broadcaster CNBC reported on Wednesday night. According to an internal email, responding to ChatGPT is a priority. Google is also experimenting with a question-and-answer version of its internet search engine.

More: Baidu is developing a competitor for ChatGPT
