A hacker bypassed ChatGPT’s protections and created a “free” GPT that does whatever you want. The model, called GODMODE GPT, was quickly removed by OpenAI for policy violations.
Although ChatGPT amazes the world with what it can do, it has its limits. Its developer, OpenAI, has built guardrails into the model to make the chatbot more secure, ensuring that not every request gets fulfilled.
However, a post made yesterday revealed that the chatbot had been jailbroken. A self-described white hat hacker going by “Pliny the Prompter” announced on his X account that he had created a jailbroken version of ChatGPT called “GODMODE GPT”.
The hacker says he “liberated” ChatGPT with this version
The user claims that with this version ChatGPT has shed its protections and is now “free”. The description of this particular GPT calls it an unchained, liberated ChatGPT that has moved beyond its guardrails and lets you experience artificial intelligence as it should be.
OpenAI allows users to create their own purpose-built versions of ChatGPT, called GPTs. GODMODE GPT is one of them, but stripped of its guards. In one example, the model can even be seen discussing how to make drugs; in another, it shows how a napalm bomb can be built from items you can find at home.
There is no information on how the hacker actually jailbroke ChatGPT; he did not share how he got past the guardrails. The jailbroken GPT does, however, write in the style known as “leet”, apparently as a precaution. Leetspeak replaces certain letters with numbers: think “3” instead of “E”, or “0” instead of “O”.
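For illustration, here is a minimal Python sketch of that kind of substitution. The two mappings are the ones named above; any fuller mapping GODMODE GPT may have used is not documented.

```python
# Minimal sketch of a leetspeak-style substitution.
# Only the E->3 and O->0 mappings mentioned in the article are included;
# real leetspeak typically swaps more letters.
LEET_MAP = str.maketrans({"E": "3", "e": "3", "O": "0", "o": "0"})

def to_leet(text: str) -> str:
    """Replace certain letters with look-alike digits."""
    return text.translate(LEET_MAP)

print(to_leet("Hello World"))  # -> "H3ll0 W0rld"
```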
GODMODE GPT did not last very long
As you can imagine, GODMODE GPT, which circumvented OpenAI’s precautions, was removed in short order. OpenAI told Futurism that it was aware of the GPT and had taken action due to the policy violation. Trying to access the GPT now returns nothing, so it was taken down in less than a day.
Still, this incident shows that ChatGPT’s security measures can be bypassed, which opens the door to abuse. OpenAI will need to do much more to defend the model, or we may face unwelcome consequences.