Using robots to kill: Ethics debated after Dallas
NEW YORK—When Dallas police detonated a “bomb robot” Thursday night to take down a sniper suspect, it was believed to be the first time a robot was used by law enforcement to kill a human being in the U.S.
Dallas police chief David Brown explained in a press conference that “other options would have exposed our officers to grave danger.”
The action raises ethical questions about the role of robots in warfare, or in this case, police work, especially given continuing breakthroughs in machine learning and artificial intelligence.
“I think for all of us, the first issue that comes to mind is some degree of relief,” says Michael Kalichman, director of the Center for Ethics in Science and Technology. “While it’s premature to judge exactly what happened, it certainly seems likely that this ended a tragedy that could have been far worse. However, we also can’t help but think about where this will go next.”
Dallas Mayor Mike Rawlings told reporters Friday he could foresee the device being used in similar situations across the nation in the future, but only as a last resort. “The key thing is to keep our police out of harm’s way,” he said.
The good news is that we’re a long way from unleashing robots that are potentially autonomous Terminator-like killing machines.
The robot used by Dallas police remained under full human control, noted Martial Hebert, head of The Robotics Institute at Carnegie Mellon University.
Sean Bielat, CEO of Endeavor Robotics, which was spun off in April from iRobot’s Defense & Security unit, agrees: “When it comes to life and death, you want people making those decisions,” he says.
Endeavor has delivered more than 6,000 mobile robots worldwide, but only a few hundred to law enforcement. The costs—the robots command prices in excess of $100,000—are a key barrier for some budget-strapped police agencies. The rest go to the military in the U.S. and in other countries.
Robots enlisted for law enforcement—such as helping determine whether there were any additional shooters during the December mass shooting in San Bernardino—represent just one segment of a disparate but flourishing industry.
Robots have been infiltrating numerous areas, some for years, others more on the cutting edge. There’s a sizable consumer segment—think toys and Roomba vacuum cleaners—as well as applications in the industrial, medical, and enterprise fields, along with drones and autonomous vehicles.
According to the Tractica market research firm, in 2015, there were about 8.8 million robotics shipments made globally, roughly 75% from the consumer segment. By 2020, Tractica projects shipments to reach about 61.4 million, with the consumer share dropping to around 50%.
“Robotics in general have gone from science fiction to an ever-present and growing (force) in both war and civilian life,” says Peter Singer, author of Wired for War: The Robotics Revolution and Conflict in the 21st Century, who has documented more than 20 different examples of autonomous and AI systems being worked on for war.
Unmanned drones, which have been lumped into the robotics category, have certainly been used in warfare.
Whether various robotics systems eventually end up in widespread use by law enforcement isn’t so much about the technology as it is about legal, financial and ethical policy.
“The reality is that there are innumerable reasons why things can and will go wrong,” Kalichman says. “Deciding when the risks of making things worse outweigh the possibility of a successful outcome is at best based on an educated guess. This isn’t to say the choice to deploy the robot in Dallas was mistaken; it’s only a reminder that these questions are not easily answered.”