Jointly signed by the three giants! Another open letter warning "Beware of AI, Defend Humanity" has been issued

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Written by: VickyXiao, Juny

Source: Silicon Stars

Ever since generative AI began sweeping through nearly every field at breakneck speed, the fear of AI challenging humanity has become increasingly real.

Last time, Musk issued an open letter calling on the AI research community and industry leaders to suspend the training of large AI models for half a year and to strengthen oversight of AI technology, urging all laboratories around the world to pause the development of more powerful AI models for at least six months. But it later emerged that he had bought 10,000 GPUs for Twitter to push a brand-new AI project, and is very likely developing his own large language model.

This time, another open letter urging people to take the threat posed by AI seriously has been issued. What is more striking than last time is that the three current giants in the field of generative AI, OpenAI, DeepMind (part of Google), and Anthropic, have all signed on.

A 22-word statement, 350 signatures

The statement, issued by the Center for AI Safety, a San Francisco-based nonprofit, is a fresh warning about what the signatories believe is an existential threat to humanity posed by AI. The entire statement is only 22 words long; yes, you read that right, just 22 words. The full text is as follows:

**Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.**

Although warnings about the threat of AI are nothing new, this is the first time the danger has been publicly placed on a par with crises that endanger all of humanity, such as nuclear war and pandemics.

The list of signatories is far longer than the statement itself.

In addition to Sam Altman, CEO of OpenAI, Demis Hassabis, CEO of DeepMind, and Dario Amodei, CEO of Anthropic, more than 350 top AI researchers, engineers, and entrepreneurs have signed, including Geoffrey Hinton and Yoshua Bengio, two of the Turing Award-winning "Big Three" of AI. Yann LeCun, who won the award alongside them and is currently chief AI scientist at Meta, Facebook's parent company, has not yet signed.

In addition, Chinese scholars also appeared on the list, including Zeng Yi, director of the Research Center for Artificial Intelligence Ethics and Governance at the Institute of Automation of the Chinese Academy of Sciences, and Zhan Xianyuan, an associate professor at Tsinghua University.

The full list of signatories can be viewed here:

Dan Hendrycks, executive director of the Center for AI Safety, which issued the statement, said it was kept succinct and deliberately did not propose any potential ways to mitigate the threat of artificial intelligence, in order to avoid disagreement. "We didn't want to push for a huge portfolio of 30 potential interventions," Hendrycks said. "When that happens, it dilutes the message."

Enhanced version of Musk's open letter

This open letter can be seen as an enhanced, "cleaner" version of Musk's open letter from earlier this year.

Previously, Musk joined more than a thousand leaders from industry and academia in an open letter published on the Future of Life Institute's website. That letter conveyed two main messages: first, it warned of the potential threat AI poses to human society and demanded an immediate pause, of at least six months, on training any AI system more powerful than GPT-4; second, it called on the entire AI field and policymakers to jointly design a comprehensive AI governance system to supervise and audit the development of AI technology.

The letter was criticized at the time on multiple fronts: not only was Musk exposed as not practicing what he preached, publicly calling for a pause on AI research while quietly advancing a brand-new AI project and poaching technical talent from Google and OpenAI, but critics also argued that the proposal to "pause development" was neither feasible nor a solution to the problem.

Yann LeCun, for example, one of the "Big Three" of artificial intelligence who won the Turing Award together with Yoshua Bengio, made it clear at the time that he did not agree with the letter's viewpoint and did not sign it.

Nor has Yann LeCun signed this new, more ambiguous open letter.

Andrew Ng, a well-known scholar in the field of artificial intelligence and founder of Landing AI, also posted on LinkedIn at the time that pausing AI training for six months was a bad and unrealistic idea.

He said that the only way to actually pause the industry's AI training would be government intervention, but asking governments to halt emerging technologies they don't understand is anti-competitive and clearly not a good solution. He acknowledged that responsible AI is important and that AI does carry risks, but argued that a one-size-fits-all approach is not advisable. What matters more right now, he said, is for all parties to invest more in AI safety while developing the technology, and to cooperate on formulating regulations around transparency and auditing.

Sam Altman, when questioned by the US Congress, even stated directly that the framing of Musk's call was wrong and that a pause tied to a fixed date is meaningless. "We pause for six months, then what? We pause for another six months?" he said.

But like Andrew Ng, Sam Altman has been one of the most vocal advocates for greater government regulation of AI.

He even made regulatory recommendations to the U.S. government at the hearing, asking the government to form a new government agency responsible for issuing licenses for large-scale AI models. If a company’s model does not meet government standards, the agency can revoke the company’s license.

Last week, he also joined several other OpenAI executives in calling for the establishment of an international body, similar to the International Atomic Energy Agency, to regulate AI, and urged cooperation among the world's leading AI developers.

Voices of dissent

Like Musk's open letter, this latest one is also based on the assumption that AI systems will improve rapidly, but humans will not have full control over their safe operation.

Many experts point out that rapid improvements in systems such as large language models are foreseeable, and that once AI systems reach a certain level of sophistication, humans may lose control over their behavior. Toby Ord, a scholar at the University of Oxford, said that just as people wish big tobacco companies had admitted earlier that their products cause serious health harms and begun discussing how to limit those harms, AI leaders are doing exactly that now.

But there are also many who doubt these predictions. They point out that AI systems can't even handle relatively mundane tasks, such as driving a car. Despite years of effort and tens of billions of dollars invested in this area of research, fully autonomous vehicles are still far from a reality. If AI can't even meet this challenge, say the skeptics, what chance does the technology have of posing a threat in the next few years?

Yann LeCun took to Twitter to express his disapproval of this concern, saying that superhuman AI is nowhere near the top of the list of human extinction risks, mainly because it does not yet exist. "Before we can design dog-level AI (let alone human-level AI), it is completely premature to discuss how to make it safer."

Andrew Ng is more optimistic about AI. He said that, in his view, AI will in fact be a key solution to the factors that could cause survival crises for most of humanity, including pandemics, climate change, and asteroids. If humanity is to survive and thrive over the next 1,000 years, AI needs to develop faster, not slower.
