“It’s not safe,” he said: key OpenAI researcher resigns

Artificial intelligence giant OpenAI has been in the spotlight over allegations that it neglects safety. Jan Leike, a key researcher at OpenAI, resigned, saying that safety processes and culture had taken a back seat to “shiny products” and that the company’s course was not safe. Here are the details of the event…

Jan Leike announced his departure at the beginning of last week. Leike led the “superalignment” team, which was formed to solve the core technical challenges of keeping AI systems safe as OpenAI works to develop artificial intelligence that can think like a human. However, according to Wired’s report, OpenAI has now completely disbanded that team. But why?


In his statements on Twitter, Leike said, “Over the past years, safety culture and processes have taken a back seat to shiny products.” The remarks highlight growing tensions within OpenAI and concerns about how the company manages the potential dangers of artificial intelligence. Leike stated that his team could not access the resources needed to carry out its safety-critical work, and that this ultimately led to his resignation.


AI safety at OpenAI will no longer be Jan Leike’s concern. His duties will be taken over by John Schulman, another OpenAI co-founder and a supporter of Sam Altman. It is no surprise that internal conflicts over safety risks at OpenAI have a great impact in the world of artificial intelligence; after all, OpenAI is one of the handful of companies leading the field.

In statements after his resignation, Leike emphasized that OpenAI must seriously prepare for the consequences of building artificial intelligence that could be unsafe and dangerous. “Only then can we ensure that artificial intelligence benefits all of humanity,” he said. These incidents have called into question how seriously OpenAI takes its safety policies.

This incident at OpenAI has prompted a rethink of how companies in the field of artificial intelligence handle safety. So what do you think? You can share your opinions in the comments section below.
