Aaron Gregg, Cristiano Lima, Gerrit De Vynck
Hundreds of artificial intelligence scientists and tech executives signed a one-sentence letter warning that AI poses an existential threat to humanity, the latest example of a growing chorus of alarms raised by the very people creating the technology.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” according to the statement released this week by the non-profit Center for AI Safety.
The open letter was signed by more than 350 researchers and executives, including Sam Altman, CEO of ChatGPT creator OpenAI, as well as 38 members of Google’s DeepMind artificial intelligence unit.
Altman and others have been at the forefront of the field, pushing new “generative” AI to the masses, such as image generators and chatbots that can have human-like conversations, summarise text and write computer code. OpenAI’s ChatGPT bot was the first to launch to the public in November, kicking off an arms race that led Microsoft and Google to launch their own versions earlier this year.
Since then, a growing faction within the AI community has been warning about the potential risks of a doomsday-type scenario where the technology grows sentient and attempts to destroy humans in some way. They are pitted against a second group of researchers who say this is a distraction from problems like inherent bias in current AI, the potential for it to take jobs and its ability to lie.
Sceptics also point out that companies that sell AI tools can benefit from the widespread idea that the tools are more powerful than they actually are – and that those companies can front-run potential regulation on shorter-term risks if they hype up those that are longer term.
Dan Hendrycks, a computer scientist who leads the Center for AI Safety, said the single-sentence letter was designed to ensure the core message was not lost.
“We need widespread acknowledgement of the stakes before we can have useful policy discussions,” Hendrycks wrote in an email. “For risks of this magnitude, the takeaway isn’t that this technology is overhyped, but that this issue is currently under-emphasised relative to the actual level of threat.”
In late March, a different public letter gathered more than 1 000 signatures from members of the academic, business and technology worlds who called for an outright pause on the development of new high-powered AI models until regulation could be put into place. Most of the field’s most influential leaders didn’t sign that one, but they have signed the new statement, including Altman and two of Google’s most senior AI executives: Demis Hassabis and James Manyika. Microsoft chief technology officer Kevin Scott and Microsoft chief scientific officer Eric Horvitz both signed it as well.
Notably absent from the letter are Google CEO Sundar Pichai and Microsoft CEO Satya Nadella, the field’s two most powerful corporate leaders.
Pichai said in April that the pace of technological change may be too fast for society to adapt, but he was optimistic because the conversation around AI risks was already happening. Nadella has said AI would be hugely beneficial by helping humans work more efficiently and allowing people to do more technical tasks with less training.
Industry leaders are also stepping up their engagement with Washington power brokers. Earlier this month, Altman met President Biden to discuss AI regulation. He later testified on Capitol Hill, warning lawmakers that AI could cause significant harm to the world. Altman drew attention to specific “risky” applications, including using it to spread disinformation and potentially aid in more targeted drone strikes.
Hendrycks added that “ambitious global co-ordination” might be required to deal with the problem, possibly drawing lessons from both nuclear non-proliferation and pandemic prevention. Though a number of ideas for AI governance have been proposed, no sweeping solutions have been adopted.
Altman, the OpenAI CEO, suggested in a recent blog post that there probably would be a need for an international organisation that can inspect systems, test their compliance with safety standards, and place restrictions on their use – similar to how the International Atomic Energy Agency governs nuclear technology.
Addressing the apparent contradiction of sounding the alarm over AI while rapidly working to advance it, Altman told Congress that it was better to get the technology out to many people now, while it is still early, so that society can understand and evaluate its risks, rather than waiting until it is too powerful to control.
Others have implied that the comparison to nuclear technology may be alarmist. Former White House tech adviser Tim Wu said likening the threat posed by AI to nuclear fallout missed the mark and clouded the debate around reining in the tools by shifting the focus away from the harms it may already be causing.
“There are clear harms from AI, misuse of AI already that we’re seeing, and I think we should do something about those, but I don’t think they’re … yet shown to be like nuclear technology,” he said last week. – The Washington Post.
- Pranshu Verma and Cat Zakrzewski contributed to this report.
The Independent on Saturday