Scientists and tech industry leaders have issued a new warning about the perils of artificial intelligence.
Dozens of leading tech figures have signed a statement saying: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The statement aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.
AI has been compared to electricity and the steam engine in terms of its potential to transform society. The technology could be profoundly beneficial, but it also presents serious risks, due to competitive pressures and other factors.
AI systems are rapidly becoming more capable. AI models can generate text, images, and video that are difficult to distinguish from human-created content. While AI has many beneficial applications, it can also be used to perpetuate bias, power autonomous weapons, promote misinformation, and conduct cyber attacks. Even though AI systems are typically used with human involvement, AI agents are increasingly able to act autonomously to cause harm.
As AI becomes more advanced, it could eventually pose catastrophic or existential risks. There are many ways in which AI systems could pose or contribute to large-scale risks. According to the Center for AI Safety, the key risks from artificial intelligence include:
Weaponisation – Malicious actors could re-purpose AI to be highly destructive, presenting an existential risk in and of itself and increasing the probability of political destabilisation. For example, deep reinforcement learning methods have been applied to aerial combat, and machine learning drug-discovery tools could be used to build chemical weapons.
Misinformation – A deluge of AI-generated misinformation and persuasive content could make society less equipped to handle the important challenges of our time.
Proxy Gaming – Trained with faulty objectives, AI systems could find novel ways to pursue their goals at the expense of individual and societal values.
Enfeeblement – If important tasks are increasingly delegated to machines, humanity could lose the ability to self-govern and become completely dependent on them, similar to the scenario portrayed in the film WALL-E.
Value Lock-in – Highly competent systems could give small groups of people a tremendous amount of power, leading to a lock-in of oppressive systems.
Emergent Goals – Models demonstrate unexpected, qualitatively different behaviour as they become more competent. The sudden emergence of capabilities or goals could increase the risk that people lose control over advanced AI systems.
Deception – We want to understand what powerful AI systems are doing and why. One way to accomplish this is to have the systems themselves accurately report this information. This may be non-trivial, however, since deception is useful for accomplishing a variety of goals.
Power-Seeking Behaviour – Companies and governments have strong economic incentives to create agents that can accomplish a broad set of goals. Such agents have instrumental incentives to acquire power, potentially making them harder to control.