Inside the messy ethics of making war with machines
This is why a human hand must squeeze the trigger, why a human hand must click “Approve.” If a computer sets its sights upon the wrong target, and the soldier squeezes the trigger anyway, that’s on the soldier. “If a human does something that leads to an accident with the machine—say, dropping a weapon where it shouldn’t have—that’s still a human’s decision that was made,” Shanahan says.
But accidents happen. And this is where things get tricky. Modern militaries have spent hundreds of years figuring out how to differentiate the unavoidable, blameless tragedies of warfare from acts of malign intent, misdirected fury, or gross negligence. Even now, this remains a difficult task. Outsourcing a part of human agency and judgment to algorithms built, in many cases, around the mathematical principle of optimization will challenge all this law and doctrine in a fundamentally new way, says Courtney Bowman, global director of privacy and civil liberties engineering at Palantir, a US-headquartered firm that builds data management software for militaries, governments, and large companies.
“It’s a rupture. It’s disruptive,” Bowman says. “It requires a new ethical construct to be able to make sound decisions.”
This year, in a move that was inevitable in the age of ChatGPT, Palantir announced that it is developing software called the Artificial Intelligence Platform, which allows for the integration of large language models into the company’s military products. In a demo of AIP posted to YouTube this spring, the platform alerts the user to a potentially threatening enemy movement. It then suggests that a drone be sent for a closer look, proposes three possible plans to intercept the offending force, and maps out an optimal route for the selected attack team to reach them.
And yet even with a machine capable of such apparent cleverness, militaries won’t want the user to blindly trust its every suggestion. If the human presses only one button in a kill chain, it probably should not be the “I believe” button, as a concerned but anonymous Army operative put it during a DoD war game in 2019.
In a program called Urban Reconnaissance through Supervised Autonomy (URSA), DARPA built a system that enabled robots and drones to act as forward observers for platoons in urban operations. After input from the project’s advisory group on ethical and legal issues, it was decided that the software would only ever designate people as “persons of interest.” Even though the purpose of the technology was to help root out ambushes, it would never go so far as to label anyone as a “threat.”
This, it was hoped, would stop a soldier from jumping to the wrong conclusion. It also had a legal rationale, according to Brian Williams, an adjunct research staff member at the Institute for Defense Analyses who led the advisory group. No court had positively asserted that a machine could legally designate a person a threat, he says. (Then again, he adds, no court had specifically found that it would be illegal, either, and he acknowledges that not all military operators would necessarily share his group’s cautious reading of the law.) According to Williams, DARPA initially wanted URSA to be able to autonomously discern a person’s intent; this feature too was scrapped at the group’s urging.
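URSA’s software is not public, but the constraint Williams’s advisory group pushed for can be pictured as a hard cap on the labels the machine is allowed to assign. The short Python sketch below is purely illustrative of that idea: the names (Label, Detection, machine_label, human_label) are hypothetical and not drawn from DARPA’s actual system, and the “threat” designation is reserved for a human reviewer.

```python
# Hypothetical sketch: software may flag a detection only as a "person of
# interest," never as a "threat." Class and function names are illustrative;
# they are not drawn from URSA's actual design.

from dataclasses import dataclass
from enum import Enum


class Label(Enum):
    PERSON = "person"
    PERSON_OF_INTEREST = "person of interest"
    THREAT = "threat"  # reserved: only a human may assign this label


@dataclass
class Detection:
    detection_id: str
    label: Label
    labeled_by_human: bool = False


def machine_label(detection_id: str, suspicious: bool) -> Detection:
    """The software can escalate a detection to 'person of interest' at most."""
    label = Label.PERSON_OF_INTEREST if suspicious else Label.PERSON
    return Detection(detection_id, label)


def human_label(detection: Detection, label: Label) -> Detection:
    """Only a human reviewer applies the 'threat' designation."""
    return Detection(detection.detection_id, label, labeled_by_human=True)
```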
Bowman says Palantir’s approach is to work “engineered inefficiencies” into “points in the decision-making process where you actually do want to slow things down.” For example, he says, a computer output pointing to an enemy troop movement might require the user to seek out a second, corroborating source of intelligence before proceeding with an action (in the demo video, AIP does not appear to do this).
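Palantir has not published how such a check would be built, but the “engineered inefficiency” Bowman describes can be sketched as a simple corroboration gate: a proposed action is held until a second, independent intelligence source reports the same observation. The Python below is a minimal, hypothetical illustration under that assumption; the names (IntelReport, corroborated, review_action) are invented for the example and do not come from AIP.

```python
# Hypothetical sketch of an "engineered inefficiency": an action prompted by one
# intelligence source is held until a second, independent source corroborates it.
# Names and fields are illustrative, not taken from Palantir's AIP.

from dataclasses import dataclass


@dataclass
class IntelReport:
    source: str       # e.g. "drone_feed", "signals", "human_observer"
    observation: str  # e.g. "enemy troop movement near river crossing"


def corroborated(reports: list[IntelReport]) -> bool:
    """Require at least two independent sources reporting the same observation."""
    sources_by_observation: dict[str, set[str]] = {}
    for report in reports:
        sources_by_observation.setdefault(report.observation, set()).add(report.source)
    return any(len(sources) >= 2 for sources in sources_by_observation.values())


def review_action(proposed_action: str, reports: list[IntelReport]) -> str:
    # The deliberate slowdown: no single source is enough to move forward.
    if not corroborated(reports):
        return f"HOLD: {proposed_action} awaits a second corroborating source"
    return f"FORWARD TO HUMAN REVIEW: {proposed_action}"
```

The point of the sketch is not the bookkeeping but where the friction sits: the gate fires before anything reaches a human approver, so a single automated cue cannot, on its own, carry an action all the way to the trigger.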