AI Experts, Elon Musk Want UN Ban On AI Military Weapons
In the letter from the Future of Life Institute — which Musk backs — the 116 signatories express their concern over weapons that integrate autonomous technology and call for the U.N. to establish protections that would prevent an escalation in the development and use of such weapons. Autonomous weapons are military devices that use artificial intelligence for tasks such as selecting which targets to attack or avoid.
The letter reads: "Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers."
Last December, the U.N.’s Review Conference of the Convention on Conventional Weapons confirmed plans to begin discussions on autonomous weaponry, and 19 of its members called for an outright ban. Other notable signatories of the letter include Mustafa Suleyman, co-founder of the Google-acquired startup DeepMind; Element AI’s Yoshua Bengio; and Bayesian Logic founder Stuart Russell.
In a statement, Clearpath Robotics founder and signatory Ryan Gariepy said that immediate concerns over the growth of autonomous military weapons drove him to sign the letter.
“The number of prominent companies and individuals who have signed this letter reinforces our warning that this is not a hypothetical scenario, but a very real, very pressing concern which needs immediate action,” Gariepy said. “We should not lose sight of the fact that, unlike other potential manifestations of AI which still remain in the realm of science fiction, autonomous weapons systems are on the cusp of development right now and have a very real potential to cause significant harm to innocent people along with global instability.”
While much of the current backlash against artificial intelligence development has focused on its theoretical long-term consequences, weapons that heavily feature artificial intelligence are an immediate reality. As a 2016 report from Arizona State University, backed by the Future of Life Institute, pointed out, artificial intelligence has gradually made its way into applications like drone weapon systems that can independently analyze their surroundings and attack when they detect a specified target.
For observers like the letter’s signatories, much of the concern over artificial intelligence isn’t about the science-fiction hypotheticals Gariepy alludes to. Instead, they argue that the increased efficiency of AI-powered military devices could spur an escalation in the research and development of weapons that would make warfare significantly more dangerous for civilians.
For his part, the Tesla CEO has long supported increased regulation of artificial intelligence research and has repeatedly argued that, if left unchecked, AI could pose a risk to the future of mankind.