BEIJING, March 30 (Xinhua) -- The American TV series Westworld and the Hollywood film Terminator both depict an "awakening" of artificial intelligence (AI) that endangers human beings. Recently, such science-fiction worries have edged into reality: a number of leading figures in the AI industry have called for a pause in the development of more powerful AI systems, warning that AI systems with human-level intelligence may "pose potential risks to society and mankind."
Where do their concerns come from? Could the development of artificial intelligence endanger humanity's control over its own civilization? Has AI reached a turning point at which the "pause button" needs to be pressed?
An open letter and four questions about AI risk
Recently, American billionaire Elon Musk and Yoshua Bengio, a leading artificial intelligence expert and Turing Award laureate, signed an open letter calling for a pause of at least six months in the development of AI systems more powerful than GPT-4, which it describes as "a potential risk to society and mankind."
An open letter calling for a pause on giant AI training. Image source: the Future of Life Institute website
The open letter, published on the website of the non-profit Future of Life Institute, argues that powerful AI systems should be developed only once it is confirmed that their effects will be positive and their risks manageable.
The letter warns that AI systems with human-level intelligence may pose great risks to society and humanity, stating that "this has been recognized by extensive research and by top AI laboratories," and poses four questions in succession:
- Should we let machines flood our information channels with propaganda and lies?
- Should we automate away all jobs, including the fulfilling ones?
- Should we develop non-human minds that might eventually outnumber and outsmart us, and ultimately eliminate and replace us?
- Should we risk losing control of our civilization?
The open letter argues that if such a pause cannot be enacted quickly, governments should step in and impose one. In addition, "AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts."
The letter also calls on developers and policymakers to work together to dramatically accelerate the development of robust AI governance systems. These should at a minimum include regulatory authorities, auditing and certification systems, oversight and tracking of highly capable AI systems, liability for harm caused by AI, and public funding for AI safety research.
The letter closes by noting that society has already hit pause on other technologies with potentially catastrophic effects, and can do the same for artificial intelligence: "Let's enjoy a long 'AI summer' instead of entering autumn unprepared."
Technology, ethics and interests: calls to slow down
It is clear from these passages that the letter's supporters are chiefly worried about the unchecked growth of artificial intelligence and the social and ethical problems its development may bring.
As the letter notes, the pause does not mean halting AI development altogether, but rather "stepping back" from a dangerous race. AI research and development should refocus on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy and reliable.
Gary Marcus, a professor at New York University, said, "The letter isn't perfect, but the spirit is right." He believes people need to slow down until they can better understand the consequences of all this.
The open letter has so far gathered thousands of signatures, including Emad Mostaque, CEO of the open-source AI company Stability AI; researchers at DeepMind, the AI company owned by Google's parent Alphabet; computer scientist Stuart Russell; and other scholars and technology executives.
The "Future Life Research Institute" that issued an open letter was mainly funded by the Musk Foundation and the Silicon Valley Community Foundation. Tesla CEO Musk is applying artificial intelligence to autonomous driving system, and he has been frankly expressing his concerns about artificial intelligence.
Reuters also reported that Europol has recently raised ethical and legal concerns about advanced AI such as ChatGPT, warning that such systems could be abused for phishing, disinformation and cybercrime. The British government, for its part, has announced proposals for an AI regulatory framework.
Online, the letter has sparked heated discussion. Some netizens agreed that people need to understand what is happening: "Just because we know we can build it doesn't mean we should build it."
Others, however, questioned Musk's motives: "Musk signed it because he wants to use his own artificial intelligence to make money."
Still others commented: "This is frightening, but I don't think some projects need to stop. Technology is developing rapidly, and responsible innovation is also necessary."
Reflection on the relationship between humans and AI has never stopped since the technology emerged. The famous physicist Stephen Hawking once said, "The successful creation of artificial intelligence would probably be the greatest event in the history of human civilization. But if we cannot learn how to avoid the risks, we could put ourselves in a desperate position."
Today, from passing professional qualification exams to artistic "creation," AI can do more and more, and it keeps moving further in the direction of genuine "intelligence." At the same time, however, crimes such as AI-enabled online fraud, extortion and the spread of illegal information are emerging one after another.
Unlike innovations in the physical world, which take a long time to translate into practice, AI breakthroughs can spread around the globe overnight via the Internet. Until humanity finds proper answers on AI ethics and oversight, these discussions and concerns are anything but superfluous.