
Professor debates use of robots in war

(Courtesy of Gary Marchant)

Gary Marchant admits it’s a case of technology outpacing politics.

Lethal autonomous military robots, machines that can act and kill on the battlefield without human input, are the future, but exactly how they can be used ethically remains unclear.

“They are a gray area,” said Marchant, the Lincoln Professor of Emerging Technologies, Law and Ethics at the Sandra Day O’Connor College of Law. “It’s not clear what the rules would be for these types of weapons.”

In an article published in June in The Columbia Science and Technology Law Review, Marchant and his colleagues argue that a debate needs to take place on how to use the technology before it becomes more widely available.

The article, “International Governance of Autonomous Military Robots,” doesn’t prescribe a certain path for using the technology, but instead notes some of the pros and cons.

Marchant and others began writing the article after a discussion at the Consortium on Emerging Technologies, Military Operations and National Security, a consortium based at ASU’s Lincoln Center for Applied Ethics.

The most obvious argument for such robots is that they keep soldiers out of harm's way by putting only machines at risk.

If a robot is destroyed in a conflict, “so what? You’d lose a robot,” Marchant said.

But even that has complications. On a bigger-picture level, it raises the question of whether countries would be more willing to go to war if the loss of human life weren't an immediate deterrent.

Beyond that, the article poses other questions, such as whether robots will ever be able to recognize the nuances of surrender as well as humans can.

“Maybe all they have is a blue flag, but they’re clearly holding it up in a defensive surrender motion,” Marchant said. “But the robot says, ‘That’s not white.’ Boom.”

And if they can't, who should be held responsible when robots break treaties that bind the United States military?

Another argument for the technology is that robots aren’t subject to the same emotional pressures on the battlefield as humans are. A particularly stressful situation could lead to rash decisions, but a robot could think it out logically before acting.

“It may be possible to make these weapons more ethical than human soldiers,” Marchant said.

Even the term "lethal autonomous military robot" could be misleading.

“People have an intuitive idea of lethal and autonomous and robots,” said Braden Allenby, ASU’s Lincoln Professor of Engineering and Ethics, who also helped in the discussion and writing of the article. “All of these terms are, in fact, fairly fuzzy.”

Even the word “robot” could extend to “smarter” landmines or ships that act lethally on their own if humans are incapacitated.

Some acts of war, such as the use of biological weapons, are forbidden under treaties signed by the United States and other countries because they cause too much collateral damage.

The case of military robots, however, isn’t so clear.

“You really don’t know how things are going to work out in combat,” Allenby said.

Reach the reporter at clecher@asu.edu

