Opinion | A.I. Is Being Built by People Who Think It Might Destroy Us

Silicon Valley’s futurists have gone from utopian to dystopian.

“Last time we had rivals in terms of intelligence they were cousins to our species, like Homo neanderthalensis, Homo erectus, Homo floresiensis, Homo denisova and more,” the neuroscientist Erik Hoel wrote in one much-passed-around meditation on the current state of play, with the subtitle “Microsoft’s new A.I. really does herald a global threat.” Hoel went on: “Let’s be real: After a bit of inbreeding we likely murdered the lot.”

More outspoken cries of worry have been echoing across the internet for months now, including from Eliezer Yudkowsky, the godfather of A.I. existential risk, who lately has been taking whatever you’d call the opposite of a victory lap to despair over the progress already made by A.I. and the failure to erect real barriers to its takeoff. We may be on the cusp of significant breakthroughs in A.I. superintelligence, Yudkowsky told one pair of interviewers, but the chances we will get to observe those breakthroughs playing out are slim, “because we’ll all be dead.” His advice, given how implausible he believes a good outcome with A.I. appears to be, is to “go down fighting with dignity.”

Even Sam Altman — the mild-mannered, somewhat normie chief executive of OpenAI, the company behind the most impressive new chatbots — has publicly promised “to operate as though these risks are existential,” and suggested that Yudkowsky might well deserve the Nobel Peace Prize for raising the alarm about the risks. He also recently wrote that “A.I. is going to be the greatest force for economic empowerment and a lot of people getting rich we have ever seen,” and joked in 2015 that “A.I. will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.” A year later, in a New Yorker profile, Altman was less ironic about the bleakness of his worldview. “I prep for survival,” he acknowledged — meaning eventualities like a laboratory-designed superbug, nuclear war and an A.I. that attacks us. “My problem is that when my friends get drunk they talk about the ways the world will end,” he said. “I try not to think about it too much, but I have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.”

This may not be a universal view among those working on artificial intelligence, but it is not an uncommon one, either. In one much-cited 2022 survey, A.I. experts were asked: “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median estimate was 10 percent — a one-in-10 chance. Half of the respondents put the odds even higher. In another poll, nearly one-third of those actively working on machine learning said they believed that artificial intelligence would make the world worse. My colleague Ezra Klein recently described these results as mystifying: Why, then, would you choose to work on it?

There are many possible answers to this question, including that ignoring growing risks in any field is a pretty good way to make them worse. Another is that the respondents don’t entirely believe their answers and are instead articulating how significant they believe A.I. to be by resorting to theological and mythological reference points. But another partial explanation could be that, to some at least, the apocalyptic possibilities look less like downsides than like a kind of enticement — that those answering survey questions in self-aggrandizing ways may be feeling, beyond the tug of the pathetic fallacy, some mix of existential vanity and an almost wishful form of end-of-days prophecy.