Democracies must use AI to defend open societies
The world would have been a far darker place had Nazi Germany beaten the US to build the world’s first atomic bomb. Mercifully, the self-defeating hatred of Adolf Hitler’s regime sabotaged its own efforts. A 1933 law sacking “civil servants of non-Aryan descent” stripped one quarter of Germany’s physicists of their university posts. As the historian Richard Rhodes noted, 11 of those 1,600 scholars had already earned, or would go on to earn, the Nobel Prize. Scientific refugees from Nazi Europe were later central to the Manhattan Project, the US effort that built the first atomic bombs.
The agonising soul-searching of scientists over building nuclear weapons resonates strongly today as researchers develop artificial intelligence systems that are increasingly adopted by the military. Excited though they are about the peaceful uses of AI, researchers know it is a dual-use, general-purpose technology that can have highly destructive applications. The Stop Killer Robots coalition, with more than 180 non-governmental member organisations from 66 countries, is campaigning hard to outlaw so-called lethal autonomous weapons systems powered by AI.
War in Ukraine has increased the urgency of the debate. Earlier this month, Russia announced that it had created a special department to develop AI-enabled weapons. It added that its experience in Ukraine would help make its weapons “more efficient and smarter.” Russian forces have already deployed the Uran-6 autonomous mine-clearing robot as well as the KUB-BLA unmanned suicide drone, which its manufacturer says uses AI to identify targets (although these claims are disputed by experts).
Russia’s president Vladimir Putin has spoken about AI’s “colossal opportunities.” “Whoever becomes the leader in this sphere will become the ruler of the world,” he has said. However, the Kremlin’s efforts to develop AI-enabled weapons will surely be hampered by the recent exodus of 300,000 Russians, many from the tech sector, and the poor performance of its conventional forces.
The Russian initiative followed the Pentagon’s announcement last year that it was intensifying efforts to achieve AI superiority. The US Department of Defense was “working to create a competitive military advantage by embracing and leveraging AI,” said Kathleen Hicks, the deputy defence secretary. China, too, has been developing AI for both economic and military uses with the clear aim of overtaking the US, in what has been called the AI arms race.
However terrifying nuclear weapons may be, the debate about their use has been relatively clear-cut and confined for decades; the discussion about AI is far more confused and kaleidoscopic. To date, only nine nation states have developed nuclear weapons. Only two atomic bombs have ever been used in warfare, at Hiroshima and Nagasaki in 1945. Their appalling destructive power has made them weapons of last resort.
AI, on the other hand, is less visible, more diffuse and more unpredictable because of its lower threshold for use, as the veteran strategist Henry Kissinger has written. It is perhaps best seen as a force multiplier that can be used to enhance the capabilities of drones, cyber weapons, anti-aircraft batteries or fighting troops. Some strategists fear that western democracies might be at a disadvantage against authoritarian regimes because of heightened ethical constraints. In 2018, more than 3,000 Google employees signed a letter saying the company should “not be in the business of war” and calling (successfully) for its withdrawal from the Pentagon’s Project Maven, designed to apply AI to the battlefield.
The Pentagon now stresses the importance of developing “responsible” AI systems, governed by democratic values, controls and laws. The war in Ukraine may also be swaying public opinion, especially in Europe. “Young people care about climate change. And now they care about living in open societies,” Torsten Reil, the co-founder of Helsing, a German start-up that uses AI to integrate battlefield data, tells me. “If we want to live in an open society we have to be able to deter and defend and do that credibly.”
To some, this may smack of a cynical rebranding of the death industry. But as physicists learnt during the second world war, it is hard to be morally pure when awful real-world choices have to be made. To their great credit, many AI researchers are today pressing for meaningful international conventions to constrain otherwise uncontrollable killer robots. But it would be reckless to forsake the responsible use of AI technology to defend democratic societies.

john.thornhill@ft.com