Look out, gang. The dark side of AI is here, at least according to one study called “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”
According to the report, some of the things we have to fear are autonomous swarms of micro-drones and automated, self-aware cyber attackers. I, for one, welcome my robot overlords, of course.
Top AI experts warn of a ‘Black Mirror’-esque future with swarms of micro-drones and autonomous weapons
Humanity needs to better prepare for the rise of dangerous artificial intelligence.
So says a report from 26 technology experts at leading artificial intelligence and security organizations, including the non-profit OpenAI, the University of Cambridge’s Centre for the Study of Existential Risk, the University of Oxford’s Future of Humanity Institute, the bipartisan nonprofit Center for a New American Security and the nonprofit Electronic Frontier Foundation, among others. (On Tuesday, OpenAI announced co-founder Elon Musk would depart the board but continue to donate to and advise the organization.)
The report, titled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation,” was published Tuesday.
AI will allow the automation of tasks involved in digital cyberattacks, making those offensives easier to carry out, larger in scale and more efficient. The authors also expect new varieties of attacks, such as using speech synthesis for impersonation, as well as automated hacking.
In the physical realm, using AI to automate tasks involved in drone and autonomous weapon attacks “may expand the threats associated with these attacks,” the report says. Further, the report predicts new attacks that “subvert” the signals sent to autonomous vehicles, causing them to crash. There could also be weapons such as a “swarm of a thousand micro-drones,” reminiscent of the Netflix show “Black Mirror.” In one popular episode, bee-like drones went on a murderous rampage.
Third, using AI to automate tasks involved in political security may expand existing surveillance, persuasion and deception threats, the report says. The authors expect new types of attacks based on the ability to analyze human behaviors, moods and beliefs. “These concerns are most significant in the context of authoritarian states, but may also undermine the ability of democracies to sustain truthful public debates,” the report says.