Opinion | Why A.I. Might Not Take Your Job or Supercharge the Economy

Ezra Klein answers listener questions about how A.I. might change our lives — or not.
[MUSIC PLAYING]

ezra klein

I’m Ezra Klein. This is “The Ezra Klein Show.”

Welcome to the Ask Me Anything episode. I am your guest Ezra Klein, here with Roge Karma, our senior editor, who is going to be asking me questions and proving that we are not, in fact, the same person —

roge karma

[LAUGHS]

ezra klein

— as has sometimes been suspected. But Roge, thank you for being here.

roge karma

It’s great to be here. I appreciate you giving us the opportunity to prove once and for all we are, indeed, two different people.

ezra klein

So what you got for me?

roge karma

So this A.M.A. was a little bit unique in that we usually get a very wide range of questions without any particular subject dominating. But this time, we got absolutely flooded with questions on A.I., questions about existential risk, and labor markets, and utopias. And we’re going to get to all of that, but given how fast all of this is moving, I just wanted to start by checking in on where your head is right now.

So a couple of questions — first, how are you thinking about A.I. at the moment? And then second, what has your approach been to covering A.I., both in your writing and on the show?

ezra klein

I think, as we go through questions, people are going to get a sense of how I’m thinking about it, but I guess I’ll say, in the approach bucket, I am trying to remain open to how much I do not know and cannot predict. Look, I enjoy covering things and have typically covered things where I think there is usually a body of empirical evidence, where you can absorb enough of it to have a relatively solid view on where things are going. And I don’t think that is at all true with A.I. right now.

So here, my thinking is evolving and changing faster than it normally does. I am entertaining the simultaneous possibility of a more radical set of perspectives, everything from the existential risk perspectives of — and we can talk about this — that we will create non-aligned hyper-intelligent computerized agents that could pose a catastrophic risk or create a catastrophe for humankind all the way to, in the 100, 200-year range, we could be in a post-work utopia.

And then, of course, the much more modest and, I think, likelier and more normie views of we’ll get a lot of disinformation. We will have a faster pace of scientific discovery. So there’s such, I think, a wide range of possibilities here that you have to just sit with a lot of things potentially being true all at once.

And you — or I, I should say, have to be willing to be learning and wrong in public and not collapsing into the tendency that is attractive and, I think, sometimes correct on other issues, to say, this is what I think is likeliest. This is my best read of what’s going on. And as such, this is the lane I’m going to cover or the interpretation I’m going to advocate for. To use a computer science metaphor that some people may know, both mentally and then in the podcast and in the column professionally, I’m in much more of an explore mode than an exploit mode.

roge karma

So there’s a lot to dig into from that answer, and luckily, we have a lot of great audience questions to get into some elements of it. So you mentioned the existential risk scenarios, and we had a few different questions around those.

For instance, we have a question from Patrick A, who has been reading the arguments of folks like Eliezer Yudkowsky, who are sounding the alarm on how A.I. is an existential threat to humanity, and Patrick writes, quote, “Increasingly, my feeling is that, to the extent that the conversation around A.I. is fixated on the relatively short term, things like job loss, disinformation, biased algorithms, that as important as these issues are, we are whistling past the graveyard on this problem. So what do you make of the most dire assessments of the risks posed by A.I.? And what level of alarm do you feel about its dangers, and why?”

ezra klein

Oh, I have very complicated thoughts here. Let me pick apart an idea in that question first, and then I’ll get to the bigger part of it. So first, for people who are not fully familiar with this idea, existential risk around A.I. is fundamentally the prediction that there is at least a high probability that we will develop superintelligent artificial intelligences that will either destroy humanity entirely, causing an extinction-level event, or displace us profoundly, disempower us profoundly in the way we have done to so many other animal species, so many of our forebears.

So that’s the existential risk theory. And we can talk about why people think that could happen, and I will. I am much less persuaded than some people that you want to disconnect that from medium-term questions because, when you’re talking about existential risk, you’re talking about what gets called a misaligned A.I.

So in one version of this, you have something that is all powerful but somewhat stupid. You build this hyper-intelligent system, and you say, again, canonically here, make me paper clips. And then it destroys the entire world and evades all of your efforts to shut it off because its one goal in life is to make paper clips. It can make the iron in our bodies into a couple more paper clips on the margin, and for that matter, it needs to kill us because we might stop it from making more paper clips.

So that’s one version, and I think that sounds to people, when they hear it, a little stupid. It sounds to me, when I hear it, a little stupid, because, if you’re going to make something that’s smart, don’t you think you’re going to be able to program in some amount of reflectivity, some amount of uncertainty, some amount of, hey, check in with us about whether or not you’re actually achieving the goals that we want you to achieve?

But there are, of course, more conceptually sophisticated versions of that. So you are a rogue state, a North Korea, or maybe you’re just an up-and-coming state. And you say, hey, I want you to make us as powerful as you can. And you do not have the capabilities to correctly program your system, and the system causes a huge catastrophic event, trying to destroy the rest of the power centers in the world. You can really think about how that might happen.

I think the best way to approach this question, though, for me, for right now, is thinking about the ways you might get highly misaligned A.I.s. And there are two versions of this. One is what you might call the intelligence-takeoff version, which is that you get A.I. systems that become smarter than human beings in a general intelligence way and way more capable of coding, so now the A.I.s begin working on themselves — something I talked about with Kelsey Piper.

The A.I.s create an intelligence takeoff because, in recursively coding themselves to become better, they can go through many, many, many generations very, very quickly. And so you have this hyper-accelerated evolutionary event, which leads to — in a fairly short time span — much smarter systems than we know how to deal with, whose alignment we don’t really understand. And all of a sudden, we’re out in the cold.

That could happen. I have a lot of trouble knowing how to rate the possibility of a rapid intelligence takeoff. I have some points of skepticism around it: around, for instance, how phenomenal a capability set you can get from just, say, absorbing more and more online text, and around whether these systems, which have a lot of trouble distinguishing true things from false things, are really going to be able to so effectively improve themselves without creating a whole lot of problems.

But a version of this I find more convincing came from Dan Hendrycks, who’s a very eminent A.I. safety researcher. He wrote a recent paper that you can find. It’s called “Natural Selection Favors A.I.s Over Humans,” and I think his paper offers a more intuitive idea of how we could get into some real trouble, whether or not you’re thinking about existential risk or just a lot of risk.

And he writes, “as A.I.s become increasingly capable of operating without direct human oversight, A.I.s could one day be pulling high-level strategic levers. And if this happens, the direction of our future will be highly dependent on the nature of these A.I. agents.”

And so you might say, well, look, why would we let them operate without direct human oversight? Why would we program these things and then turn over key parts of our society to them, such that they could pose this kind of danger?

And what I appreciate about his paper is I think he gives a very, very realistic version of what that would look like. So he writes that these A.I.s are basically going to get better. That’s already happening. We can get that. We’re going to turn over things like make an advertising campaign or analyze this data set or I’m trying to make this strategic decision about what my country or my company should do. Look at all the data, and advise me.

And he writes, “Eventually, A.I.s will be used to make the high-level strategic decisions, now reserved for C.E.O.s or politicians. At first, A.I.s will continue to do tasks they already assist people with, like writing emails, but as A.I.s improve, as people get used to them, and as staying competitive in the market demands using them, A.I.s will begin to make important decisions with very little oversight.”

And that, to me, is a key point. What he is getting at here is that, as these programs become better, there’s going to be a market pressure for companies and potentially even countries to hand over more of their operations to them because you’ll be able to move faster. You’ll be able to make more money with your high speed, A.I.-driven algorithmic trading. You’ll be able to outcompete other players in your industry. Maybe you’ll be able to outcompete other countries.

And so there will be a competitive pressure, where, for a period of time, the institutions, companies, countries that do this will begin to prosper. They will make more money. They will get more power. They will get more market power. But having done that, they will then have systems they understand less and less, and have less and less oversight over, holding more and more power. And then he goes through a thing about why he thinks, evolutionarily, that would lead to selfish systems.

But the thing I want to point out about it is that, rather than relying on a moment of intelligence take off, it relies on something we understand much better, which is that we have an alignment problem, not just between human beings and computer systems but between human society and corporations, human society and governments, human society and institutions.

And so the place where I am worried right now, the place where I think it is worth putting a lot of initial effort that I don’t see as much of, is the question of, how do you solve the alignment problem, not between an A.I. system we can’t predict yet and humanity, though we should be working on that, but in the very near term between the companies and countries that will run A.I. systems and humanity? And I think we already see this happening.

Right now, A.I. development is being driven principally by the question of, can Microsoft beat Google to market? What does Meta think about all that? So there’s a competitive pressure between companies. Then there is a competitive pressure between countries — a lot of U.S. versus China, and other countries are eventually going to get into that in a bigger way.

And so where I come down right now on existential risk is that, when I think about the likely ways we develop these systems that we then create, such that we have very little control over them, I think the likeliest failure mode right now is coming from human beings. So you need coordinating institutions, regulations, governance bodies, et cetera that are actually thinking about this from a broader perspective.

And I worry sometimes that the way the existential risk conversation goes, it frames it almost entirely as a technical problem when it isn’t. It’s, at least for a while, a coordination problem, and if we get the coordination problems right, we’re going to have a lot more leverage on the technical problems.

roge karma

I think a lot of other people share a similar concern about these incentives, about the speed at which everything is moving, and one of the responses in the last week has been this open letter from more than 1,000 tech and A.I. leaders, including some really high-profile people, like Elon Musk, along with a lot of A.I. researchers.

And this letter was calling for a six-month pause on any A.I. development more advanced than GPT-4. And I think the concern that that letter comes from is the same concerns you just outlined — this is all moving too fast. We don’t like the incentives at play. We don’t know what we’re creating or how to regulate it. So we need to slow this all down to give us time to think, to reflect.

And so I’m wondering what you think of that effort and whether you think it’s the right approach.

ezra klein

I think there’s a good amount to be said for it, but my thinking on this has evolved a bit. So take my column here from a few weeks ago. It now feels like months ago; maybe it was a month ago, but time is moving quite quickly. In that column, the “This Changes Everything” column, which I also released on the podcast as my own view on A.I., I do end up calling either for an acceleration of human adaptation to this or for a slowdown in development.

I have become increasingly skeptical, though, whatever you think of the merits of a slowdown, that it is a winning political position, that it’s even a viable political position, to make the two sides of this: on one side, everybody who either doesn’t care about A.I. or wants something from it — people who think A.I. is kind of cool and enjoy asking the chat bot questions, who want the better help on term papers and marketing support, the more immersive and relational porn that’s going to come out, the human-to-A.I. companions — all the things that are near-term value here and all the things that companies want to make money on. And on the other side, just: A.I. is bad. Stop it. I’m scared.

And I don’t mean that to dismiss that position because I think A.I. might be bad, and at times, I am scared. But I think you actually need a positive view much more so than people have of what you’re doing, and if you do a pause and you don’t know what you’re doing with that pause, if that pause takes place — you do a six-month stop — and then what?

The idea is that the people in the A.I. companies and in academia are going to try to spend six months more on interpretability? What are the systems under which we have public input here? How are we coming up with an agenda? What are you doing with that time?

Otherwise, you just delay whatever’s going to happen six months and maybe give an edge to worse actors out there, although I want to be careful with this because I think people hear China — and I am not sure China is actually a worse actor on this than we are right now because I think we are actually moving a lot faster, and we are using this specter of China to absolve ourselves of having to think about that at all.

But putting that aside, because I also don’t want China to have A.I. dominance for a bunch of very obvious reasons, I do think I am more inclined to say that what I want is, first, a public vision for A.I. I think this is, frankly, too important to leave to the market. I do not want the shape and pace of A.I. development to be decided by the competitive pressures between functionally three firms.

I think the idea that we’re going to leave this to Google, Meta and Microsoft is a kind of lunacy, a societal lunacy. So one, I think that more than I understand this as slowing it down, I understand it as shaping technology. And there are things that I want to see in A.I.

I want to see a higher level of interpretability, and when I say interpretability, I mean the ability to understand what the A.I. system is doing when it is making decisions, or when it is drawing correlations, or when it is answering a question. When you ask ChatGPT to summarize the evidence on whether or not raising wages reduces health care spending, it’ll give you something, but we don’t really know why or how.

So basically, if you try to spit out what the system is doing, you get a completely incomprehensible series of calculations. There is work happening on trying to make this more interpretable, trying to figure out where in the system a particular answer is coming from, trying to make it show more of its work. But that work, that effort, is way, way, way, way behind where the learning systems are right now. So we’ve gotten way better at getting an output from the system than we are at understanding what the inputs were that went into it or at least what the sort of mid-level calculations were that went into it.

One thing I would do — and I don’t know exactly how to phrase this because I’m not myself an A.I. researcher — but I think that, particularly as these systems become more powerful, we should have some level of understanding of what’s going on in them. And if you cannot do that, then you cannot release it. And so I think one totally valid thing to say — because this would slow down A.I. research or at least A.I. development, but it would do so for a cause — is that, if you want to create anything more powerful than GPT-4, something that has a larger training run or training set and is using more G.P.U. power and all the rest of it, more compute power, then we want these levels hit for interpretability. And it is not our problem to figure out how to hit it. It is your problem. Yeah, there’s a lot of money here — start putting that money towards solving this problem.

There’s a lot of places in the economy where what the regulators say is that you cannot release this unless you can prove to us it is safe, not that I have to prove to you that it is unsafe. If you want to release GPT-5 and GPT-7 and Claude+++ and whatever, you need to put these things in where we can verify that it is a safe thing to release.

And doing that would slow all this down. It would be hard. There are parts of it that may not be possible. I don’t know what level of interpretability is truly even possible here.

But I think that is the kind of thing where I want to say, I’m not trying to slow this down. I’m trying to improve it. I’m trying to make it better.

And by the way, that might be true even from the perspective of somebody who wants to see A.I.s everywhere. It’s only going to take one of these systems causing some kind of catastrophe that people didn’t expect for a regulatory hammer to come down so hard it might break the entire industry. If you get a couple people killed by an A.I. system for reasons we can’t even explain, do you think that’s going to be good for releasing future A.I. systems? Because I don’t.

That’s one reason we don’t actually have driverless cars all over the road yet. So that’s one thing. Another is that these systems are being shaped — they’re being constructed to solve problems — in the direction of profit. So there are many different kinds of A.I. systems you can create directed at many different purposes.

The reason that what we’re ending up seeing is a lot of chat bots, a lot of systems designed to fool human beings into feeling like they’re talking to something human, is because that appears to people to be where the money is. You can imagine the money in A.I. companions, and there are startups like Replika trying to do that. You can imagine the money in mimicking human beings when you’re writing up a Word document, or a college application essay, or creating marketing decks, or whatever it might be.

But so I don’t know that I think that’s great, actually. There are a lot of purposes you could turn these systems to that might be more exciting for the public. To me, the most impressive thing A.I. has done is still solving the protein-folding problem. That was a program created by DeepMind.

What if you had a prize system, where we had 15 or 20 scientific and medical innovations we want, problems we want to see solved by whatever means you can do it? And we think these are big enough that, if you do it, you get a billion dollars. We’ve thought about prizes in other areas, but let’s put them into things that society really cares about.

Maybe that would lead more A.I. systems to be tuned in the direction not of fooling human beings into thinking they’re human but of solving important mathematical problems or of speeding up whatever it is we might want to speed up, like drug development. So that’s another place, when I think about the goals we actually have publicly and how you can make money off of this, where I would like to see some real regulation.

I don’t think you should be able to make money, just flatly, by using an A.I. system to manipulate behavior to get people to buy things. I think that should be illegal. I don’t think you should be able to feed into it surveillance capitalism data, get it to know people better and then try to influence their behavior for profit. I don’t think you should be allowed to do that.

Now, you might want to think about what that regulation actually reads like in practice because I can think of holes in that. But whether I’m right or wrong about those things, these questions should be answered. And at the end of that answering process, I think, is not a vision of less A.I. or more A.I. but a vision of what we want from this technology as a public.

And one thing that worries me is that just the negative vision — let’s pause it. It’s terrifying. It’s scary — I don’t think that’s going to be that strong. And another thing that worries me is that Washington is going to fight the last war and try to treat this like it was social media. We wish we had had somewhat better privacy protections. We wish we had had somewhat better liability, maybe around disinformation, something like that.

But I don’t think just regulating the harms around the edges here, and I don’t think just slowing it down a little bit, is enough. I think you have to actually ask as a society, what are you trying to achieve? What do you want from this technology? If the only question here is, what does Microsoft want from the technology, or Google, that’s stupid.

That is us abdicating what we actually need to do.

So I’m not against a pause, but I am also not for pause being the message. I think that there has not been nearly enough work done on a positive public vision of A.I., how it is run, what it includes, how the technology is shaped, and to what ends we are willing to see it turned. That’s what I want to see.

roge karma

So let’s dig in more into what that positive vision could be, because I think a lot of people hear about all of these risks, some of these existential scenarios, and their response is like, well, why should we be doing this at all? But we actually got some questions from audience members about the possibilities that A.I. technology can unlock.

And so for example, Katherine E. asks, while A.I. researchers think that there’s a 10 percent chance of terrible outcomes, they think there’s an even higher chance of amazing utopian outcomes. And here she’s referencing a recent survey of leading A.I. experts that we can link to in the show notes. So she continues, you’ve discussed the possible nightmare scenarios for A.I., but do you see potential upsides? Do you think your kids’ lives might be better because of A.I., not worse?

ezra klein

Yeah, I do think there’s a lot of possibility here for good outcomes, and I do think probably the good outcomes or, at least, the weird and mixed outcomes are a lot likelier than the totally catastrophic ones. I will be honest that I ask this question of a lot of people in the space, and I don’t find the answers I get are that good.

So I think the most common answer I hear is A.I. could become an unbelievable scientific accelerant. And maybe — absolutely maybe. The reason I’ve always been a little more skeptical of that than some people in the space is that, yes, it’s clearly true there are many scientific advances you could make just by being a hyper-intelligent thing reading papers. Einstein was not running direct experiments. He was creating brilliant thought experiments that led to, over time, a tremendous revolution also in industry, and technology, and so on.

But I do think a lot of what we want in the world requires actually running experiments in the world. So you’ll hear things like, A.I. could be so great at identifying molecules for drug development, and so maybe it could. Maybe it would be much better than we are at identifying molecules to test. But then you still need to run all these phase 3 trials — and phase 1 and 2 trials, for that matter — and animal trials and everything else. And I think something we know from that area is that a lot of things we think will work out don’t work out. So I find it quite untested, this question of whether A.I. will be this huge scientific accelerant.

There are things where prediction, like protein folding, could be a really big deal, and it’s also possible that just a lot more needs to be done of running experiments in the real world to make fundamental breakthroughs in things that would change our reality. So the scientific side of this I consider plausible and exciting, and I also find it a little bit hand-wavey.

I think that the place where this is going to have really rapid effects — because let’s think about what these systems really are right now. These are large language models that are unbelievably good at impersonating humans and giving you predictive answers based on a huge corpus of human text. And the problem with them is that they know how to predict what the entire human internet might say to something, but they don’t really know if what they’re saying is true or not.

Even the word know there is a really weird word. They don’t know anything at all on some level. And so I think you have to ask, what would really work for a system like that, where it can be really brilliant but it hallucinates a lot? And I think the answers here have to do with social dimensions.

We have a lot of really lonely people in society. Loneliness is a true and profound scourge, and I think what you’re going to get from A.I., before you get things that are economically that potent or scientifically that potent, is a lot of companionship, for better or for worse.

And this is, of course, a complete mainstay of both science fiction and fantasy, right? Robot friends in science fiction: you’re running around with a C-3PO and an R2-D2. You have daemons and other kinds of familiar beasts in fantasy. When I was growing up, I was obsessed with this fantasy series called “Dragonriders of Pern.” And I was a lonely bullied kid, and Ruth the White Dragon and the relationship between Ruth and Ruth’s rider was really important to me.

And we have a lot of lonely older people. We have a lot of lonely young people. And we also just have a lot of people who would like more companionship, more people to talk to. Again, the movie “Her” is a remarkable dramatization of this.

I could imagine ways that gets dark if people begin preferring A.I. relationships to human relationships. That could be a problem, but it could also not go that way. One thing I found moving — there was a good piece in New York Magazine about Replika, which is this company making what are, at this point, quite rudimentary A.I. companions.

And there are a lot of people named in the piece saying, I prefer this companion to people in my life. But there are also a lot of people who said having this companion has given me more confidence and has given me more encouragement and incentive to go out and have experiences myself. I’m going to go out and learn how to dance. I’m doing this hobby because I have this supportive figure. And I think a lot of us know this in our own relationships. When you have supportive people in your life, it is a base from which you venture out into the world. It isn’t something where it’s like, OK, I’ve got two friends and a partner. I’m never talking to anybody ever again.

And so I think there could be a lot of value in companions, and I think the systems we are building in the short term look more like that to me. The fact that a companion might say something that isn’t true, I mean, my friends say untrue things all the time. That is a little bit different from a doctor: you’re not going to have an A.I. doctor who occasionally just hallucinates things at you. The liability on that alone would be a nightmare.

So that’s a place where I think there’s some real value. I do think creativity and just an expansion of human capability is a real value, the idea that I can’t code but I can create a video game. I can’t film, but I can make a movie. That’s cool. We might be able to see really remarkable new kinds of art in it.

And then I think you’re going to have a world for quite a while, which is just have a team of assistants at no cost. So right now, if you have a lot of money or you’re high up in a firm, maybe they’ll hire you a chief of staff, an executive assistant. Maybe you have people who you can outsource your ideas to, and they’ll come back with a presentation. And then you can give them feedback.

All of a sudden, everybody’s going to have a team. Maybe not everybody, but a lot of people will have access to, functionally, a team, people who can research things for you. I want to think about how I could be healthier in this way, but I don’t have a lot of medical literacy. Can you look into this thing for me?

That’s hard to do right now. It’s not going to be hard to do for very long. And so I think, if you just think about the way the economy works, you always have that line where the future’s already here. It’s just unevenly distributed.

Rich, powerful people already have large teams of people who help them live better. Much of what those people do is remote at this point, and when I say remote, I just mean it can be done on a computer. Maybe the person is actually in the office, but you’re telling people to go do intellectual work for you. In a world where everybody has access to a lot of A.I.s like that, that might be quite cool, and amazing things could be unlocked from that.

[MUSIC PLAYING]

roge karma

I want to talk about the other side of some of this utopianism, though, and even some of these middle-ground scenarios because I would say probably the most common kind of email we’ve gotten over the past few weeks is people really concerned about how A.I. is going to impact the labor market and, specifically, the kinds of knowledge work jobs that tend to be done by folks with college degrees.

So you mentioned things like art, and research, and copy editing as things that these systems could make a lot easier for us. But there are also a lot of people doing those jobs right now, and we’ve had lots of copy editors, writers, artists, programmers emailing in, wondering if they’ll still have jobs. We’ve had students, like Katie W, asking whether it still makes sense to get a law or master’s degree when the future of the economy is so uncertain.

And there’s really two levels to this I’ve seen. One is financial: like, am I going to have a job? Will I be economically OK? But then also, on a deeper, more existential level, I think there’s a lot of concern, and I feel this, too, about, what would it mean for me and for my life, my sense of self-worth, my purpose, my sense of meaning to have A.I. systems be able to do my job better than I can?

And so I’m wondering how you think about both dimensions of that, both how these systems could affect the labor market and how people should think about their effect on the labor market. But then also this deeper existential question is raised of what it means to be human in a world where machines can do a lot of what we define as being human better than we can.

ezra klein

Yeah, those are profound questions, and yeah, I’ve seen a lot of what I would describe as A.I. despair in the inbox. I have a lot of uncertainty here as I do in everything I’m saying, but I tend to be much less confident that A.I. is going to replace a lot of jobs in the near term than other people seem to be in part because I don’t think the question is whether a job can be replaced. There’s also a question of whether we let it be replaced.

So this will be an analogy that will make some people mad, specifically doctors, but there is a lot that doctors currently do that can be done perfectly well by nurses, and nurse practitioners, and physician assistants. But we have created regulatory structures that make that really hard. There are places where it’s incredibly hard just to become a hair cutter because of the amount of occupational licensing you need to go through.

We do not just let, in many, many, many, many cases, jobs get done by anyone. We do let some of them get outsourced, and we’ve done that in, obviously, a lot of cases. But again, think about telehealth and how many strictures are on that. Now we’re seeing a little bit more of it.

So I am skeptical that A.I. is going to diffuse through the economy in a way that leads to a lot of replacement as quickly as people think is likely, both because I don’t think the systems are going to prove to be, for a while, as good as they need to be for that. It’s actually very, very hard to catch hallucinations in these systems, and I think the liability problems of that are going to be a very big deal.

Driverless cars are a good example here, where there’s a lot they can do, but driverless cars are not going to need to be as safe as human drivers to be put onto the road en masse. They’re going to have to be far, far, far safer. We are going to be — and I think we already are — less tolerant of a driverless car getting in a crash and killing a person than we are of human beings getting in crashes and killing people.

And you could say, from a consequentialist perspective or a utilitarian perspective, maybe that’s stupid. But that is where we are. We see that already. And it’s a reason driverless cars are now seeming very far off still.

We can have cars. I mean, they’re all around San Francisco. You have these little Waymo cars with their little hats running around. But they are not going to take over the roads any time soon because they need to be not 80 percent reliable, not 90 percent reliable, not 95 percent reliable, but like 99.99999 percent reliable.

And these models, they’re not, and so that’s going to be true for a lot of things. We’re not going to let it be a doctor. We might let it assist a doctor but not if we don’t think the doctor knows how to catch the system when it’s getting something wrong. And as of now, given how these are being trained, I don’t see a path to them not getting enough wrong that we are going to be fully comfortable with them in high-stakes and, frankly, even a lot of low-stakes professions.

Now, do I worry about things like copy writing? I do. I don’t know how I think that’s going to look, and it’s also possible it’s going to make copy writers much more efficient, and cheaper, and, as such, dramatically increase the market for copy writers. The canonical point here is that we have more bank tellers than we did before A.T.M.s because A.T.M.s made it possible to expand banking quite a bit. And as such, even though you need bank tellers for fewer things, it did not lead to bank tellers being wiped out in the way people thought it would.

So these things often move through society in ways you don’t really expect. They create new markets. They create new possibilities. If they make us more productive, that creates more money. But I get it at the same time.

One of the things that both worries and interests me is that, I think, these systems are forcing — are going to force — a reckoning, and this goes to your second point, Roge. How do I say this?

There’s been a strain of commentary and pushback from people saying that, as we think about A.I., we are dehumanizing ourselves in order to adapt ourselves to our own metaphors. There’s a point here, one that Meghan O’Gieblyn makes in her truly fantastic book “God, Human, Animal, Machine,” that metaphors are bidirectional. You start applying a metaphor to something else. And soon enough, it loops around, and you’re applying it to yourself.

You have a computer, and the metaphor is like the computer is like a mind. Then you begin thinking your mind is like a computer because you get so used to talking about it that way. And so you’ll see these things — Emily Bender, the linguist, has really pushed on this. You can see a YouTube presentation of hers on A.I. and dehumanization.

And she has a lot of points she’s making in that, but one of them is that people will say — and Sam Altman, the head of OpenAI, said this, to paraphrase — maybe we’re all stochastic parrots, the point being that there’s this idea that these models are stochastic parrots. They parrot back what human beings would say with no understanding.

And so then people turn and say, maybe that’s all we’re doing, too. Do we really understand how our thinking works, how our consciousness works? These are token-generating machines. They just generate the next token in a sequence, a word, an image, whatever.

We’re token-generating machines. How did I just come up with that next word? I didn’t think about it consciously. Something generated the token.

And a lot of people who do philosophy and linguistics and other sort of related areas are tearing their hair out over this, that in order to think about A.I. as something more like an intelligence, you’ve stopped thinking about yourself as a thicker kind of intelligence. You have completely devalued the entirety of your own internal experience. You have made valueless so much of what happens in the way you move through the world.

But I would turn this a little bit around, and this has been on my mind a lot recently. I think the kernel of profound truth in the A.I. dehumanization discourse is that we do dehumanize ourselves and not just in metaphors around A.I. We dehumanize ourselves all the time. We make human beings act as machines all the time.

We tell people a job is creative because we need them to do it. We need them to find meaning in it. But in fact, it isn’t. Or we tell them there’s meaning in it, but the meaning is that we pay them.

So this, I think, is more intuitive when we think about a lot of manufacturing jobs that got automated, where somebody was working as part of the assembly line. And you could put a machine on the assembly line, and you didn’t need the person.

And that is actually true for a lot of what we call knowledge work.

A lot of it is rules based, a lot of the young lawyers creating documents and so on. We tell stories about it, but it is not the highest good of a human being to be sitting around doing that stuff. And it has taken a tremendous amount of cultural pressure from capitalism and other forces — from religion — to get people to be comfortable with that lot in life.

You have however many precious years on the spinning blue orb, and you’re going to spend it writing marketing copy. And I’m not saying there’s anything wrong with marketing copy. I’ve written tons of marketing copy in my time. But you’ve got to think about how much has gone in to making people comfortable or at least accept that lot.

We dehumanize people. And I wonder, not in the two-, or five-, or 10-year time frame, but on the 25-, 50-, 100-, 150-year time frame, if there’s not a possibility for a rehumanization here — for us to begin to value, again, things that we don’t even try to value and certainly don’t try to organize life around.

If I tell you that my work in life is I went to law school and now I write contracts for firms trying to take over other firms, well, if I make a bunch of money, you’d be like, great work. You really made it, man [LAUGHS]

If that law degree came from a good school, and you’re getting paid, and you’re getting that big bonus, and you’re working those 80 hour weeks, fantastic job. You made it. Your parents must be so proud.

If I tell you that I spend a lot of time at the park, I don’t do much in terms of the economy, but I spend a lot of time at the park. I have a wonderful community of friends. I spend a lot of time with them. It’s like, well, yeah, but when are you going to do something with your life, right, just reading these random books all the time in coffee shops.

I think that, eventually, from a certain vantage point, the values of our current society are going to look incredibly sick. And at some point, in my thinking on all this, I do wonder if A.I. won’t be part of a set of technological and cultural shocks that leads to that kind of reassessment.

Now, that doesn’t work if we immiserate anybody whose job eventually does get automated away. If to have your job as a contract lawyer, or a copy editor, or a marketer, or a journalist automated away is to become useless in the eyes of society, then, yeah, that’s not going to be a reassessment of values. That’s going to be a punishment we inflict on people so the owners of A.I. capital can make more money.

But that is a choice. It doesn’t need to go that way. Lots of people have made this point; Daron Acemoglu and Simon Johnson, the economists, have a new book coming out, “Power and Progress,” on exactly this point. It doesn’t need to go that way. That is a choice. And I think this is a quite good time for more radical politics, to think about more radical political ideas.

roge karma

What I hear you saying is that a huge question for all of us is not just the question, like the economic questions around labor markets, but the cultural questions about what we as a society choose to value and what we value people for. And totally on board with basically everything you were just saying, but I also think a lot about, for example, the now famous Keynes essay, “Economic Possibilities for Our Grandchildren,” where he was making a prediction almost 100 years ago that, around the time of our lifetimes, we would have reached a level of economic productivity that could allow us to work 15-hour weeks and that we were approaching this post-work utopia.

And we hit the productivity numbers, and we’re working not as much as we were 50, 60 years ago but still a lot. And a lot of the people who are most educated are working a lot. So I guess I’m just wondering, why do you think that didn’t happen? What do you think it would take to actually culturally shift us in that direction? And are you actually hopeful about that possibility?

ezra klein

Well, one thing I think it’s commonly believed Keynes got wrong in that essay is that he was interested in the question of material sufficiency. What would it mean if you held material wants steady but increased productivity and income by this much? Then how much would you have to work?

But it turns out we don’t hold material wants steady. You have huge amounts of above-inflation cost increases in things like health care, and housing, and education. But also people want bigger homes. They want to travel. They want nice cars.

They want to compete with each other. A lot of spending is also positional. It probably doesn’t make us happy, but there’s an old line — the question of whether a man is rich depends on how much money his brother-in-law makes —

roge karma

[LAUGHS]

ezra klein

— which I think gets at something important. So one version of this is to say, well, if you believe that A.I.s will create so much material and economic abundance that it just makes that kind of competition ridiculous, then people will compete on other grounds; we’re not going to get away from at least a somewhat competitive society.

People still want power. They still want to be partnered with and attractive to the people they want to be attractive to. Maybe everybody’s going to spend 47 hours a week in the gym or something.

But I think the bigger point here is that the essay gets some things wrong, but it gets others right. And we know, over and over again, that humanity actually does go through very profound shifts in what it values. Now, it doesn’t do it in any given 90-year time frame, but it does do it in terms of the shift from monarchies to more kinds of democracies and more kinds of political systems.

Did it in the shift from hunter gathering to monarchies and cities. Did it in the shift to agriculture. Religions create a lot of this. I think just like — I have no predictions here, but I think that the question of how religions, both old and new, interact with dramatic changes here in the world is going to be very, very, very interesting.

And I think a lot of them have a lot to say about these questions of how we value human life that is simply waiting there to be picked up. I did this episode not long ago about Shabbat and rest, and the idea that the day of rest should be the model for the way the rest of the world works, and that the Shabbat practice, in its radicalism, is a profound critique of the values of our economy as they exist right now.

I can imagine that becoming much more widespread, that becoming a much more profound practice and cultural artifact, not just artifact but challenge, in the kind of world I’m describing.

So I don’t believe in utopias, just in general, but I do believe in change. Now, it’s not going to happen — I don’t believe typically change happens so quickly that, between when I am 38, as I am now, and when I am 50 or 55, that we’ll have stopped having this overwhelming ideology of productivism, nor that I will stop applying it to myself. I have completely imbibed the values of this culture, and standing outside them to critique them in a podcast is a lot easier than not weaving them through my own soul.

But I don’t think the fact that Keynes was wrong about how much we would work and what we would want means that these kinds of shifts don’t happen. I think that a longer view should make that look pretty different to us. And you never know when you’re on the cusp of a world working quite differently than it has in the past.

[MUSIC PLAYING]

roge karma

For all we’ve been talking about how A.I. could change the nature of the economy, in our conversations, you’ve been a lot more skeptical or at least hesitant about whether on net A.I. will lead to the kind of take off in economic growth and productivity that a lot of people think it could. You mentioned earlier that you’re skeptical of whether A.I. will lead to a super take off in scientific progress, but there are lots of other ways you can imagine A.I. systems making us a lot more productive, a lot more efficient.

We’ve already discussed things like automating a lot of repetitive work. So could you just unpack why you’re a bit more skeptical about whether A.I. will supercharge economic productivity?

ezra klein

So one thing I would say is one of the possibilities I hold is that they won’t. As I keep trying to emphasize, I’m open to a lot of things being potentially true here. But yeah, let me give two reasons.

If I was trying to imagine 15 or 20 years from now, when people are like, to paraphrase an old line about the internet, how can you see the A.I. revolution everywhere but in the productivity numbers? Why do I think that would be?

So one reason is that these are systems that don’t really have an understanding of what they are telling you, that have this capacity to predict the next token in a sequence, and it’s going to turn out that there is an ineradicable amount of falsehood and hallucination and weirdness in there that just makes it too hard to integrate this into the parts of the economy that really make a lot of money. You would need such a level of expert oversight of them, somebody who knew everything the system knew or needed to know, so they could tell when the system was saying something untrue, that it just doesn’t really net out.

So that’s one.

But the one I think is maybe even more likely — think about the internet. Let’s say that we go back in time to 1990, and I tell you what the internet is going to be in 2020 — the size of it, the pervasiveness of it, the awareness of it.

You will have in your pocket, in your pocket, imagine, a computer with functionally the entire corpus of human knowledge on it. You’ll be able to search that in a second. It will talk to space, and you’ll be able to talk to and collaborate with anybody anywhere in the world instantly. You will pull this all-knowing pocket rectangle out, press two buttons, and the face of your collaborator in Japan will appear immediately.

And you can translate. So in addition, you can now work with anybody. You can read anything in any language. And I said to you, if we had that technology, what do you think would happen to the economy?

Think about the amount of knowledge that is now instantly accessible. Think about the amount of collaboration and cooperation that is now being unlocked. Think about the speed — you imagine journalists going before to the library and going to look stuff up, and now you can just Google everything.

What do you think will have happened to the pace of scientific progress? What do you think will have happened to the pace of productivity growth? And if you had given me that in 1990, I would have been six, so I probably wouldn’t have had a very good answer.

But I think if you had framed that in 1990, somebody would reasonably say, wow. That is going to hypercharge the economy. That is going to hypercharge scientific progress.

And here we are, and it did none of those things. Productivity growth has been quite disappointing in the age of the internet, worse than it was before, in the post-World War II period. There are a lot of people, and we’ve had some of them on the podcast, worried about the slowdown in scientific knowledge. The advances we are making seem less potent in many ways than the advances that were made before.

And obviously, there are a million different explanations for this, but one explanation I favor more than other people seem to is that we weren’t really wrong about what the internet would do to make us more productive. There is no doubt in my mind that I am profoundly more productive than I could have been before it. What we were wrong about is the shadow side of the internet, what it would take away from our productivity.

So now, go back to that 1990 thought experiment. And let’s say I come to you, and I say, we’re going to invent a technology. Everybody’s going to have it on them at all times, all times. And what it’s going to do is it is going to have so much content, so much entertainment and so much, fundamentally, distraction that all of humanity, averaged out, is going to be 30 to 45 percent more distracted than they are now. They’re going to be angrier. They’re going to be more annoyed. They’re going to be more tired. They’re going to be less able to hold a train of thought. The time they can spend focusing their attention on one thing is going to reduce. The amount of time for reflection and contemplation is going to narrow.

What do you think that will do to the economy or scientific progress? I think some people probably say, oh, that seems bad. And both those things, in my view, happened.

There’s been a productivity-enhancing effect of the internet. I just don’t think you can do a job that is online and not see that. And there’s been a productivity-reducing effect of the internet. The number of times I distract myself from a train of thought in an hour, in one hour, by flicking over to some garbage in my email or looking at Slack because you’ve told a joke — you don’t really say much on Slack actually, Roge.

roge karma

[LAUGHS]

ezra klein

It’s constant, and I think that has a real cost in the depth of what I’m able to produce.

roge karma

You’re welcome for not imposing those costs on you.

ezra klein

I appreciate that. And I think that is — it’s really possible that happens with A.I. And this is part of the Gary Marcus interview we did a couple of months back.

I think it is really possible that large language models, in particular, are going to prove to be a much better source of distraction than of actual productivity enhancement. Because particularly if we don’t quickly get to the kind of A.I. that can make profound scientific advances on its own, that is truly autonomous and generative in that way, what I think we’re going to get instead is an A.I. that, like social media, which has a lot of machine learning beneath it, is really, really good at finding and now creating content personalized to us to distract us, serving us up exactly what we want.

I think the bigger and more plausible set of distractions are actually social companions. Look, I sometimes go to a coffee shop, and I work there with my best friend. When I do that, I really enjoy the experience, and I’m less productive, because I enjoyed talking to my best friend at the coffee shop.

And if you have your A.I. best friend and also your A.I. girlfriend or boyfriend or nonbinary sexual partner in your pocket at all times, how distracting might that be? What will it mean to have a lot of those kinds of figures in your life or entities in your life? So I think it’s entirely possible that this is a technology that, for good or for bad, ends up being a technology of entertainment, of socializing, of content creation, of distraction, much more than a technology of productivity, particularly if it is still human beings who have to be the ones who are productive.

So far we’ve seen a couple of these studies come out. And there are things like, we gave a bunch of coders GPT-3 or GPT-4, and it turned out they were 35 percent more productive. I don’t buy those studies at all. I don’t buy them at all because, if you put people in a laboratory condition and you just tell them to use the new technology in the most productive way, yeah, it’s going to make them more productive.

But that’s not what’s going to happen necessarily when these same people live in the world, where, on the one hand, yes, you could use GPT-4 or 5 or whatever to help you code for long stretches. Or you could use it to screw around on the internet and play video games that are so immersive and personalized that we’ve never seen anything like them before or talk to this perfect, just-for-you A.I. companion who’s now always there in your ear. I’m not sure that I think people are ultimately going to be that much more productive. So I think there’s a lot of ways this could go weird but also that it might not be the economic boon we are hoping it will be.

roge karma

I will just say that Ezra is constantly distracting me on Slack and that I actually think a companion chat bot might be an improvement. So I’m not sure. I’m a little skeptical. [LAUGHS]

ezra klein

[LAUGHS] That’s reasonable.

roge karma

But I will say, in the background of a lot of these questions, including the one you just answered in, I think, a pretty deep way, is also this question of — and what you were just saying makes me think maybe there isn’t a clear answer. But there’s this question it keeps bringing up for me of like, what kind of technology do you think A.I. is?

You mentioned earlier that you don’t like the social media analogy, and you think it could lead to regulation that maybe is inadequate. And I’m wondering if you think there are analogies that are better because there are a lot of people who are making these comparisons right now, some who say like, A.I. will be the new internet, others who compare it to these general purpose technologies, like electricity or the steam engine, which became the foundation of economies.

You’ll hear it be compared to the printing press, to oil, to fire. And clearly, these analogies are important because they determine how we treat the technology and how we try to regulate it. So I’m wondering if you think any — if there is any good analogy for what this technology is and what it could do and, if so, what that analogy is.

ezra klein

It’s funny. I was actually just talking about this with somebody yesterday, Henry Farrell, who is a political scientist and has done really, really cool work on all kinds of things, A.I. among them. He has a good piece on high-tech modernism people should look up.

And we were talking about this question of analogy, and what is the analogy, and the one I was saying was, what if we fully understood the social implications of simultaneously the internet and the globalized opening of China, particularly, but you could say other big manufacturing exporters as of 1990? What if we could feel all that in its full weight, right then?

We knew what was coming. What would that have meant? What kind of preparatory work would we have wanted to do?

This feels to me like it is on that level, like internet plus China. Now, maybe that’s wrong, and in the long run, maybe that’s too small. But I think something like that — I don’t see it as a technology.

I see it in the Timothy Morton conception of a hyperobject, a thing that touches everything in weird ways such that it also changes the way you look at the things it’s touching, which goes back to our whole earlier conversation about the metaphors on the human mind or social relationships.

So the internet is helpful, to me, there because the internet had that quality of touching almost everything.

And I think China is helpful there because China really changed the way the U.S. economy worked, I think, much more frankly than the internet did. And I’m using China, again, a bit as a stand in for globalized export-based supply chains. But China was the big mover there.

So maybe like that, but I don’t think anything in particular is going to work for something that will be as weird as this is going to be. If it is true that we are creating inorganic intelligences that are going to surpass our generalizable intelligence within the lifespan of a human being born today, we’re entering something profoundly new, then.

Now, maybe it’s just weird, right? There’s a lot of sci-fi, and I think the Star Wars universe is a good, easy-at-hand version of this, where the world is just full of computers, computer companions. And they talk, and they beep at you. And they’re like friends, and they do things that are useful.

And everybody just goes along their day as normal. And maybe that’s where we’re going. There’s been a lot of imagining of that. It really does depend on whether or not we get things that are superintelligence level or things that are kind of human but programmed. And I don’t know. I think, again, uncertainty is a place to sit right now.

roge karma

So I think that’s a good place to start to come to an end and give everyone a break from all the A.I. talk. So let’s do some recommendations. I know you just got back from a music festival. I was wondering if you have any music recs for the audience.

ezra klein

I would love to do some music recommendations, what I actually enjoy.

roge karma

[LAUGHS]

ezra klein

So I saw a show at this festival, in fact, by Danielle Ponder, who is this just — I mean, to have a voice like that, I don’t know. Her album is called “Some of Us Are Brave,” and a song maybe to start with there is “So Long.”

But it shows the kind of nerd I am that I was sitting there, also thinking about A.I. while I was listening to her, thinking about all that we miss, the importance of human beings being in a room together and actually experiencing the beauty they can create. Because I can totally imagine a world where we’re generating a lot of computer art, but it is not going to have the effect of hearing about her life from her and then hearing her sing, her art.

That was really quite special. Yes, I’ve been listening to a lot of Felix Rösch, R-O-S-C-H. He’s a modernist composer, electronic, but a lot of strings. Actually a lot of music I like has that quality, electronic with a lot of strings, like lightly electronic with a lot of strings.

But his work is really, really beautiful. The song that got me into him was this song called “In Memory of a Honeybee,” which I really, really, really recommend checking out. But his song “Clouds” is great. “Driven” is great.

Just go check out any of his essential mixes.

And I found this artist recently, Mabe Fratti. And I might be saying that wrong, but you spell it M-A-B-E F-R-A-T-T-I. A cellist with electronic elements, and just some of her songs are astonishingly, just truly astonishingly, beautiful.

And then going back, I guess, to the Rick Rubin episode, he ended up in that episode recommending a Nils Frahm album. And one I ended up listening to — spending a lot of time with before that episode and after — is the one that he did with Ólafur Arnalds called “Trance Frendz,” which is an album fully improvised in one night. But I just love that album and would urge people to give it a listen, just beautifully meditative, deep, rich music that also has a very different feel to it, knowing that it’s two friends who decided to spend a night together with a bunch of their instruments and this is what they came up with. And I appreciate it for what it gave to the world.

roge karma

Well, given that description, I will definitely be having to check that album out. Last question before I let you go, is there anything that’s on your mind right now as you think about your life and the show over the next year that you’d want to share with the audience or talk about?

ezra klein

Ooh, do I want to share it with the audience?

I’m about to move from California to New York, and I think, if you listen to the show, you can get a sense that my identity as a Californian runs really deep and that there’s a lot I draw strength from here and curiosity from here and a lot in my intellectual way and personal way of being in the world that fits here. And New York has never been my place, in part, because I have spent most of the time I’ve been there in Midtown, which is certainly not, I think, the finest part of it.

So that’s going to be a big move, and I’m trying to remain open to what the place has to offer. Obviously, other people really love New York — and I’m open to recommendations about great things in New York, but there will probably be some disruption in the show as I make that move and also just some, hopefully, interesting changes in me as I try to absorb a new place with a different culture.

roge karma

I will just say, as a fellow native Californian, I think you’re making a terrible decision, but I am excited for you and for what this next chapter brings. So thanks for having me. This is a lot of fun, and I hope we get to do it again sometime with you on the East Coast.

ezra klein

Thank you.

[MUSIC PLAYING]

All right, this episode of “The Ezra Klein Show” was produced by Roge Karma, Kristin Lin, and Jeff Geld. It was hosted by Roge Karma, thank you to him, with fact-checking by Michelle Harris and Kate Sinclair, mixing by Jeff Geld, original music by Isaac Jones, and audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Kristina Samulewski.