Research paper fuels new AI skepticism

Düsseldorf. A few days ago, the Future of Life Institute published a letter signed by prominent tech pioneers such as Tesla boss Elon Musk and Apple co-founder Steve Wozniak. In it, they demand that all AI labs "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The signatories cite "profound risks to society and humanity" as the reason.

As it turns out, the trigger for the tech visionaries' outcry is a new research paper from Microsoft that is causing a sensation among experts and scientists in the field of artificial intelligence (AI) worldwide. "Sparks of Artificial General Intelligence" is the title of the report by 14 researchers, including Eric Horvitz, Microsoft's chief scientist.

Across 155 pages, the researchers report on their experiences and tests with the AI system GPT-4, which was launched a few weeks ago by the start-up OpenAI in cooperation with Microsoft. "The performance of GPT-4 is remarkably close to human levels," the paper says.

The researchers suggest that GPT-4 can be viewed as an early form of "Artificial General Intelligence" (AGI). The term has been common among AI experts for decades, but until now it has been used only theoretically. It describes an AI that can understand and master any intellectual task a human being is capable of.

“This is the first research paper to report on AGI,” says Tristan Post, who teaches “AI for Innovation and Entrepreneurship” at the Technical University of Munich.

Musk’s call for a pause has met with mixed reactions from experts. “The idea of a moratorium is generally a good one,” says Philipp Hacker, Professor of Law and Ethics in the Digital Society at the European University Viadrina in Frankfurt an der Oder. However, given the tense geopolitical situation with Russia and China, it is unlikely that all countries would participate. Nor, he argues, is effective regulation feasible within six months.

“From my point of view, it is astonishing that no further-reaching or fundamental proposals for the worldwide use of generative AI systems are formulated,” comments Doris Weßels, Professor of Business Informatics at Kiel University of Applied Sciences, on the petition. “Unfortunately, we can and must assume that the open letter is also driven by the special interests of various competitors of the current market leaders, OpenAI and Microsoft.”

Tesla is also very active in AI

Google, which has dominated AI research with its subsidiary Deepmind for many years, is the closest competitor. The company, however, has no direct connection to the petition. Tesla, too, is very active in the field: for seven years, the electric-car manufacturer has relied on AI more heavily than almost any other industrial group, aiming to enable autonomous driving.

The company operates one of the world’s largest supercomputers for AI applications and has an AI team whose importance Musk often emphasizes. Most recently, he hired Deepmind’s Igor Babushkin, a well-known AI researcher and specialist in language models such as GPT.


“A six-month moratorium is a terrible idea,” says Andrew Ng, who founded Google Brain, the company’s AI laboratory, in 2011 and now runs the start-up Landing AI. It would hold back important innovations in education, health care, and food. “When governments pause emerging technologies just because they don’t understand them, it distorts competition, sets a very bad example and is appalling innovation policy.”

Old argument between Altman and Musk

Experts also see the demand as a continuation of the personal rivalry between OpenAI’s Sam Altman and Musk. The two co-founders of the organization fell out in 2018 over the direction of its AI research. A year later, Altman created a for-profit subsidiary to raise the immense sums needed for research; Microsoft invested in OpenAI at that time.
“OpenAI was founded as an open-source, non-profit company,” Musk said a few weeks ago. “But now it’s become a closed, profit-maximizing company, effectively controlled by Microsoft.”

In the European Union (EU), the demand will likely be seen as vindication of its push for robust regulation. With the AI Act, the bloc is working on a regulation intended to provide a framework for the development and use of the technology; its initiators hope a global “gold standard” will emerge. Critics, however, see mostly new bureaucratic hurdles.

Negotiations in Brussels are ongoing, and some basic features are already emerging. The AI Act provides for a classification into risk classes: the higher the risk of an application, the stricter the requirements. Particularly strict rules would apply, for example, to banks’ credit-rating systems or surgical robots in hospitals.


For such high-risk applications, companies will in future have to fulfill transparency obligations towards users, create technical documentation with detailed information on the data used, and maintain a risk-management system. In addition, they would be obliged to register their programs in an EU database.

Many questions remain open: how to treat the new generation of artificial intelligence built on models like GPT-4, for instance, or where the numerous experts needed to assess AI systems are supposed to come from. The legislative process, under way since 2021, does show one thing, however: regulation takes time, and six months is hardly enough.

“Experience shows that technological change cannot be stopped,” says AI expert Post of Musk’s demand. “The petition should therefore be understood more as an appeal: be careful.”
