Silicon Valley Confronts a Grim New A.I. Metric

Where do you fall on the doom scale — is artificial intelligence a threat to humankind? And if so, how high is the risk?

Dario Amodei, the chief executive of the A.I. company Anthropic, puts his between 10 and 25 percent. Lina Khan, the chair of the Federal Trade Commission, recently told me she’s at 15 percent. And Emmett Shear, who served as OpenAI’s interim chief executive for about five minutes last month, has said he hovers somewhere between 5 and 50 percent.

I’m talking, of course, about p(doom), the morbid new statistic that is sweeping Silicon Valley.

P(doom) — which is math-speak for “probability of doom” — is the way some artificial intelligence researchers talk about how likely they believe it is that A.I. will kill us all, or create some other cataclysm that threatens human survival. A high p(doom) means you think an A.I. apocalypse is likely, while a low one means you think we’ll probably tough it out.

Once an inside joke among A.I. nerds on online message boards, p(doom) has gone mainstream in recent months, as the A.I. boom sparked by ChatGPT last year has spawned widespread fears about how quickly A.I. is improving.

It’s become a common icebreaker among techies in San Francisco — and an inescapable part of A.I. culture. I’ve been to two tech events this year where a stranger has asked for my p(doom) as casually as if they were asking for directions to the bathroom. “It comes up in almost every dinner conversation,” Aaron Levie, the chief executive of the cloud data platform Box, told me.