The Case For and Against Armed Drone Robots
After the “Campaign Against Killer Robots” released a short film warning of the coming of drone assassins, Phys.org asked two academics to argue for and against the threat posed by such weapons. Here is a brief excerpt from their arguments, but do follow the link to read them in full.
A new short film from the Campaign Against Killer Robots warns of a future where weaponised flying drones target and assassinate certain members of the public, using facial recognition technology to identify them. Is this a realistic threat that could rightly spur an effective ban on the technology? Or is it an overblown portrayal designed to scare governments into taking simplistic, unnecessary and ultimately futile action? We asked two academics for their expert opinions.
Overactive imagination risks panic and distress
Peter Lee is a Reader in Politics and Ethics and Theme Director for Security and Risk Research and Innovation at the University of Portsmouth.
The newly released short film offers a bleak dystopia with humans at the mercy of “slaughterbots”. These are autonomous micro-drones with cameras, facial recognition software and lethal explosive charges. Utterly terrifying, and – the film claims – not science fiction but a near-future scenario that really could happen. The film warns, in a frightening, deep voice: “They cannot be stopped.” The only salvation from this impending hell, it is suggested, is to ban killer robots.
This imaginative use of film to scare its viewers into action is the 21st-century version of the panic that HG Wells’s science fiction writings created in the early 20th century. New technologies can almost always be used for malevolent purposes but those same technologies – in this case flying robots, facial recognition, autonomous decision-making – can also drive widespread human benefit.
What about the killing part? Yes, three grams of explosive to the head could kill someone. But why go to the expense and trouble of making a lethal micro-drone? Such posturing about the widespread use of targeted, single-shot flying robots is a self-indulgence of technologically advanced societies. It would be hugely costly to develop such a selective killing capability for use on a mass scale – certainly beyond the capacity of terrorist organisations and, indeed, most militaries…
A wake-up call on how robots could change conflicts
Steve Wright is a Reader in the Politics and International Relations Group at Leeds Beckett University and a member of the International Campaign for Armed Robot Control.
The Campaign Against Killer Robots’ terrifying new short film “Slaughterbots” predicts a new age of warfare and automated assassinations, if weapons that decide for themselves who to kill are not banned. The organisation hopes to pressure the UN to outlaw lethal robots under the Convention on Certain Conventional Weapons (CCW), which has previously banned antipersonnel landmines, cluster munitions and blinding lasers on the battlefield.
Some have suggested that the new film is scaremongering. But the technologies needed to build such autonomous weapons – intelligent targeting algorithms, geo-location, facial recognition – are already with us. Many existing lethal drone systems operate only in a semi-autonomous mode because of legal constraints, and could do much more if allowed. It won't take much further development for the technology to reach the capabilities shown in the film.