OpenAI Introduces Most Advanced Language Model GPT-4o

OpenAI held its long-awaited event today. The company introduced its new flagship model, GPT-4o, at the event. The model can talk, see and hear like a real person.

Pioneering the artificial intelligence revolution and leaving everyone speechless with its models, OpenAI held its long-awaited event today. During the event, the company made important announcements about its ChatGPT chatbot and its GPT-4 language model, including upcoming innovations.

The most striking of today’s announcements was the company’s new flagship language model, GPT-4o. The model not only outperforms the existing GPT-4 but is also much faster.

GPT-4o can reason across voice, text and images

The new GPT-4o model, which the company will offer to its users, will power the ChatGPT chatbot. Described as far more efficient than and ahead of previous GPT versions, the model can reason across voice, text and images. According to the announcement, GPT-4o is a natively multimodal model: it can understand voice, text and images and generate content in those forms.

There has been a notable improvement on the voice response side in particular. Users can now talk to the model in real time with far less lag, which makes conversations feel much more lifelike. According to OpenAI, GPT-4o can respond to audio in as little as 232 milliseconds, nearly as fast as a human in conversation. Previously, delays in voice mode averaged 2.8 seconds.

In addition, users can interrupt ChatGPT while it is answering and ask it to change its response. For example, in the live demo at the event, OpenAI executives asked the model to tell a story about a robot. While the model was speaking, they interrupted it and asked it to express different emotions. ChatGPT made the change instantly and fulfilled the request. You can take a look at those moments in the video above.

The model’s built-in advanced visual capabilities were also demoed. The model can “see” and comment on things shown to it through the device’s camera. For example, in one demo, an equation written on paper was shown to the model, which was then asked for help in solving it. ChatGPT guided the presenters to the solution. When “I Love You ChatGPT” was written on the paper, it responded in an emotional voice, just like a human.

Can do real-time translation surprisingly well
