Open Letter from 1,150 Prominent Names Calls for a Halt to AI Development

Artificial intelligence is developing rapidly and being woven into more aspects of our lives every day. Now a response to the concerns this raises has come from within the industry itself: the Future of Life Institute has published an open letter calling for the training of advanced AI systems to be paused for six months.

Over the last year, the world of technology and the internet has entered a new era. Chatbots that can answer almost any question more capably than a human, image tools that turn the text you type into strikingly realistic visuals… With these tools, all of which appeared within a single year, workflows in many sectors began to change.

This rapid and uncontrolled progress in artificial intelligence has split experts into two camps. While some support the development of these technologies, others have begun to worry about the risks. Those concerns have now been voiced in an open letter addressed to AI developers, and names such as Elon Musk and Steve Wozniak are among its signatories.

“Stop the development of artificial intelligence systems”

It’s worth reading the entire letter to understand the concerns:

As extensive research has shown and top AI labs have acknowledged, AI systems with human-competitive intelligence can pose profound risks to society and humanity. As outlined in the widely endorsed Asilomar AI Principles, advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one, not even their creators, can understand, predict, or reliably control.

Contemporary AI systems are now becoming human-competitive at general tasks,[3] and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete, and replace us? Should we risk losing control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. That confidence must be well justified and grow with the magnitude of a system’s potential effects. OpenAI’s recent statement on artificial general intelligence notes that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts, to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.

Therefore, we call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever larger, unpredictable black-box models with emergent capabilities.

AI research and development should refocus on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.

In parallel, AI developers must work with policymakers to dramatically accelerate the development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Humanity can enjoy a flourishing future with artificial intelligence. Having succeeded in creating powerful AI systems, we can now enjoy an “AI summer” in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on itself before. We can do so here too. Let’s enjoy a long AI summer, not rush unprepared into a fall.

Big names are among the letter’s signatories

As of now, the letter has gathered 1,125 signatures from across the sector, including the following names:

  • Elon Musk (CEO of SpaceX, Tesla and Twitter)
  • Steve Wozniak (Co-Founder of Apple)
  • Evan Sharp (Co-Founder of Pinterest)
  • Emad Mostaque (CEO of Stability AI)
  • Zachary Kenton (Senior Research Scientist at DeepMind)

You can view the letter and the full list of signatories via this link.
