OpenAI Board of Directors Granted “Anti-Sam Altman” Power

OpenAI is expanding its internal safety measures. The company has announced a new safety unit within the organization and a veto right for the board of directors over artificial intelligence projects.

OpenAI has been on the agenda lately with Sam Altman’s dismissal, his subsequent return, and the reshuffling of its board of directors. One of the claims circulating about the company was that it had developed a powerful and dangerous artificial intelligence. Although OpenAI did not feel the need to make a statement on that claim, it has introduced new rules that prioritize AI safety.

First, OpenAI formed a team called the “safety advisory group.” This group is positioned above the organization’s technical teams and will make recommendations to leadership. In addition, the OpenAI board of directors now has the right to veto artificial intelligence projects it deems dangerous.

A new “safety advisory group” was established, and the board was given veto power

Although changes like these happen at almost every company from time to time, this one carries particular weight given the recent events at OpenAI and the allegations surrounding them. After all, OpenAI is a leading institution in artificial intelligence, and safety measures in this field matter a great deal.

The new set of rules, announced by OpenAI in a blog post, is called the “Preparedness Framework.” It comes after November’s board shake-up, in which two members seen as opposed to aggressive growth, Ilya Sutskever and Helen Toner, were removed from the board. The framework also establishes a guide for identifying and analyzing “catastrophic” risks from AI models and deciding what action to take.

Under the new internal arrangement, the safety systems team will manage models already in production; tasks such as limiting systematic abuse of ChatGPT through the API will fall to this team. The preparedness team will be responsible for frontier models still in development and will focus on identifying possible risks and taking precautions. The team called the superalignment team will develop theoretical guardrails for superintelligent AI models. Each team will evaluate the models it is responsible for in terms of “cybersecurity,” “persuasion,” “autonomy,” and “CBRN” (chemical, biological, radiological, and nuclear threats).

OpenAI has not yet shared any information about how these evaluations will be carried out, but the cross-functional group of safety advisors will broadly oversee and help guide these efforts.

Source: https://techcrunch.com/2023/12/18/openai-buffs-safety-team-and-gives-board-veto-power-on-risky-ai/

