Opinion | To See One of A.I.’s Greatest Dangers, Look to the Military

A.I. and nuclear weapons could make for a catastrophic combination.

Rogue artificial intelligence versus humankind is a common theme in science fiction. It could happen, I suppose. But a more imminent threat is human beings versus human beings, with A.I. used as a lethal weapon by both sides. That threat is growing rapidly because there is an international arms race in militarized A.I.

What makes an arms race in artificial intelligence so frightening is that it shrinks the role of human judgment. Chess programs that are instructed to move fast can complete a game against each other in seconds; artificial intelligence systems reading each other’s moves could go from peace to war just as quickly.

On paper, military and political leaders remain in control. They are “in the loop,” as computer scientists like to say. But how should those looped-in leaders react if an A.I. system announces that an attack by the other side could be moments away and recommends a pre-emptive attack? Dare they ignore the output of the inscrutable black box that they spent hundreds of billions of dollars developing? If they push the button just because the A.I. tells them to, they are in the loop in name only. If they ignore it on a hunch, the consequences could be just as bad.

The intersection of artificial intelligence that can calculate a million times faster than people and nuclear weapons that are a million times more powerful than any conventional weapon is about as scary as intersections come.

Henry Kissinger, who turns 100 years old on May 27, was born when warfare still involved horses. Now Kissinger, the secretary of state under Presidents Nixon and Ford, is contemplating A.I.-enabled warfare. I recently read “The Age of A.I. and Our Human Future,” the 2021 book he wrote with Eric Schmidt, a former chief executive and chairman of Google, and Daniel Huttenlocher, the inaugural dean of the M.I.T. Schwarzman College of Computing. It was rereleased last year with an afterword that noted some of the recent advances in A.I.

“The A.I. era risks complicating the riddles of modern strategy further beyond human intention — or perhaps complete human comprehension,” the three authors wrote.

The obvious solution is a moratorium on the development of militarized A.I. The Campaign to Stop Killer Robots, an international coalition, argues: “Life and death decisions should not be delegated to a machine. It’s time for new international law to regulate these technologies.”

But the chance of a moratorium is slim. Gregory Allen, a former director of strategy and policy at the Pentagon’s Joint Artificial Intelligence Center, told Bloomberg that efforts by Americans to reach out to their counterparts in China were unsuccessful.

The Americans are not going to pause development on militarized A.I. on their own. “If we stop, guess who is not going to stop: potential adversaries overseas,” the Pentagon’s chief information officer, John Sherman, said at a cybersecurity conference this month. “We’ve got to keep moving.”

Schmidt is pressing for the development of American capabilities in militarized A.I. through the Special Competitive Studies Project, a foundation that’s part of the Eric & Wendy Schmidt Fund for Strategic Innovation. A report released this month reiterates the project’s call for “military-technological superiority over all potential adversaries, including the People’s Liberation Army” of China.

On the crucial topic of keeping people in the loop, Schmidt’s project favors “human-machine collaboration” and “human-machine combat teaming.” The former is for decision-making and the latter is for “executing complex tasks, including in combat operations.” Working together, the report says, humans and machines can accomplish more than either could alone.

The Schmidt project doesn’t advocate autonomous weapons. But the fact is, the Pentagon already has some. As David Sanger noted in The Times this month, Patriot missiles can fire without human intervention “when overwhelmed with incoming targets faster than a human could react.” Even at that stage, the Patriots are supposed to be supervised by human beings. Realistically, though, if a computer can’t keep up in the fog of war, what chance does a person have?

Georges Clemenceau, who was France’s prime minister toward the end of World War I, said that war is too important to be left to military men. He meant that civilian leaders should make the final decisions. But the arms race in artificial intelligence could one day bring us to the point where civilian leaders will see no choice but to cede the final decisions to computers. Then war will be considered too important to be left to human beings.


Keyu Jin’s viewpoints, which you wrote about, are quite typical of educated urban middle-class and upper-middle-class Chinese, who benefited the most from the meteoric rise of the Chinese economy. As someone who grew up in rural China, I beg to differ on several points. First, there is no olive-shaped income distribution in China (though the claim may be slightly closer to reality in urban China). Second, the people who have the best access to foreign information (including reports of Chinese government misdeeds) are the same ones who benefit from the current Chinese system, just like Jin. They have every reason to rationalize or downplay the Chinese government’s ills and to emphasize its many achievements. Thus, I think the issue is an asymmetry of socioeconomic status and information access, not innate cultural differences between West and East.

Hu Zeng
Rochester, Minn.


“Man naturally desires, not only to be loved, but to be lovely; or to be that thing which is the natural and proper object of love.”

— Adam Smith, “The Theory of Moral Sentiments,” sixth edition (1790)