A.I.’s Inner Conflict, Nvidia Joins the Trillion-Dollar Club, and Hard Questions

“It’s like if you were told that there’s going to be a world-conquering dictator and it’s Mr. Bean.”
This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.

kevin roose

“Hallucination” as a term is, I guess, getting some blowback.

casey newton

Everybody hates every word in AI. They don’t like “understand.” They don’t like “think.”

kevin roose

People need jobs and hobbies.

casey newton

Look, words matter, and language does evolve, and we do get to points where we decide we’re not going to use words anymore. And I respect that process. I don’t know if I’m there yet with “hallucination.”

kevin roose

I did hear someone suggest that we should replace that with the word “confabulation,” which I love because it sounds so British. Could you just hear a little British man saying, “Oh, my AI model, it’s confabulated”?

casey newton

I’ve got myself in a right spot of trouble with all these confabulations!

kevin roose

It’s just very fun to say.

casey newton

It’s incredibly fun to say.

[MUSIC PLAYING]

kevin roose

I’m Kevin Roose, tech columnist for “The New York Times.”

casey newton

I’m Casey Newton from “Platformer,” and you’re listening to “Hard Fork.” This week, an urgent new warning about AI’s potential risk to humanity and the lawyer who clowned himself using ChatGPT, plus, how the rise of NVIDIA explains this moment in AI history. And “The New York Times’” Kate Conger joins us to answer hard questions about your technology dilemmas.

[MUSIC PLAYING]

kevin roose

So Casey, last week on the show, we talked with Ajeya Cotra, who is an AI safety researcher. We talked about some of the existential risks posed by AI technology. And there was a big update on that story this week.

casey newton

Yeah, and I feel like it showed us that there are a lot more people in this world who are thinking the way that she’s thinking about things.

kevin roose

Totally. So as we were putting out the episode last week, unbeknownst to us, this nonprofit called the Center For AI Safety was gathering signatures on a statement, basically an open letter that consisted of one sentence.

casey newton

And what was the sentence?

kevin roose

Well, I’m glad you asked. It said, quote, “Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war.”

casey newton

Which are famously two of the worst things that can happen. So AI is now just sort of squarely in the bad zone here.

kevin roose

Yeah, and you might expect this kind of statement to be signed by people who are very skeptical and worried about AI.

casey newton

Like anti-AI activists.

kevin roose

Right, exactly. But this was not just anti-AI activists. The statement was signed by, among other people, Sam Altman, the CEO of OpenAI, Demis Hassabis, the CEO of Google DeepMind, and Dario Amodei, the CEO of Anthropic. So three of the heads of the biggest AI labs, saying that AI is potentially really scary, and we should be trying very hard to mitigate some of the biggest risks.

casey newton

And so as part of this, are they stepping down from their jobs and no longer working on AI?

kevin roose

[LAUGHS]: No, of course not. They are still building this stuff, and, in many cases, they are racing to build it faster than their competitors. But the statement is a big deal in the world of AI safety because it is the first time that the heads of all of the biggest sort of AGI labs are coming together to say, hey, this is potentially really scary, and we should do something about it.

We talked about this previous open letter, which came out a few months ago, which Elon Musk and Steve Wozniak and a bunch of other tech luminaries signed that called for a six month pause. This letter was not that specific. It did not call for any specific actions to be taken. But what it did was it kind of united a lot of the most prominent figures in the AI movement behind this general statement of concern.

casey newton

Right. They’re now united in saying this could go really badly.

kevin roose

Right, exactly.

casey newton

I have to ask, Kevin, is there anything more here? Because I read this statement that says, “It should be a global priority.” I don’t really know what a global priority means. Are there other global priorities that we’re focused on right now? Should they take a back seat to this? The longer I look at this statement, the more I feel like I can’t make heads or tails of it.

kevin roose

Yeah, it’s a pretty vague statement. And I asked Dan Hendricks who’s the executive director of the Center For AI Safety, which is the nonprofit that put this together and gathered a lot of the signatures, why it was just one sentence, and why didn’t he call for any additional steps beyond just “We’re concerned about this?”

And he said, basically, this was an attempt to just get some of the most prominent people in AI to go on the record saying that they believe that AI has existential risk attached to it. He said, basically, he didn’t want to call for a whole bunch of different interventions. And some people might have disagreed with some of them, and some people might not have signed on. And so we basically wanted to give people a simple one sentence statement that they could sign on to that says, “I’m concerned about this.” And it didn’t go any further than that.

casey newton

All right, so for people who might not have heard our episode last week or just kind of catching up to this story, Kevin, why do some people, including the people building it, think that this poses an existential risk to humanity?

kevin roose

So you could probably ask these 350 plus people each individually what their biggest sort of threat model is for AI, and they would probably give you 350 different answers. But I think what they all share are a couple of things.

One is, these models, they’re getting very powerful, and they’re improving very quickly from one generation to the next. The second thing they would probably agree on is, we don’t really understand how these things work, and they’re behaving in some ways that are maybe unexpected, or creepy, or dangerous.

casey newton

Right, we can see what they are doing in terms of what they’re putting out, but we don’t know how they are putting out what they’re putting out.

kevin roose

Right. And number 3 is, if they continue at their current pace of improvement, if these models keep getting bigger and more capable, then, eventually, they will be able to do things that would harm us.

casey newton

So what do we do with this information that we face existential risk from AI, Kevin?

kevin roose

Well, there’s a sort of cynical interpretation that I saw a lot after I wrote about this story on Tuesday, which is that people are saying, basically, these people don’t actually think there’s an existential risk from AI. They’re just saying that because it’s good marketing, good PR for their startups, right?

If you say, “I’m building an AI model that can spit out plausible sounding sentences.” That sounds a lot less impressive than if you say, “I’m building an AI model that may one day lead to human extinction.”

casey newton

Yeah, if you’re not working on a technology that poses an existential risk to humanity, why are you wasting your time, OK? Oh, really? You’re over at Salesforce building customer relationship management software? Why don’t you try working on something a little dangerous?

kevin roose

Yeah. I’m not saying the “Hard Fork” podcast could lead to the extermination of humankind, but I’m not not saying that. Many researchers —

casey newton

If it does, please leave a one star review in the stores.

kevin roose

No, don’t do that!

casey newton

No, we want you to hold us accountable for wiping out humanity.

kevin roose

So I understand the cynicism behind this. Sometimes when AI experts talk up these creations or overhype them, they are doing a kind of PR. But I think that really misunderstands the motives of a lot of the people who are signing on to this.

Sam Altman, Demis Hassabis, Dario Amodei, these are people who have been talking and thinking about AI risk for a long time. This is not a position that they came to recently. And a lot of the researchers who are involved in this, they work in academia. They don’t stand to profit if people think that these models are somehow more powerful than they really are.

casey newton

So this is not a get rich quick scheme for any of these people?

kevin roose

No, and in fact, it’s probably inviting a lot of attention and possibly regulation that might actually make their lives harder. So I think the real story here is that until very recently, saying that AI risk was existential, that it might wipe out humanity, if you said that, you were insane. You were seen as being unhinged. Now that is a mainstream position that is shared by many of the top people in the AI movement.

casey newton

If this doomsday scenario presents itself, do you think that subscribers to ChatGPT Plus will be spared?

kevin roose

[LAUGHS]: I think it depends how nice you are to ChatGPT.

casey newton

Please, be nice to the chat bot, OK? We don’t know what’s coming. Now, that brings us to the second story, Kevin, that we wanted to talk about this week, which I think, presents a very different potential vision for the near term future of AI. So while you have one group of folks saying, this thing might one day be capable of killing us all, you also have the story about the ChatGPT lawyer. Kevin, I imagine you’re familiar with this case.

kevin roose

[LAUGHS]: This is one of the funniest stories of the year in AI, I think, in part because it is just so obvious that something like this was going to happen, right? These chat bots, they seem very plausible. They spit out things that sometimes are very helpful and correct. But other times they are just spouting nonsense. And in this case, this is a story about a lawyer who turned to ChatGPT to help him make a case for his client, and it wound up costing him dearly.

casey newton

Yeah, so let’s talk about what happened with this fellow. Back in 2019, a passenger on a flight with Avianca Airlines says he got injured when a serving cart hit his knee.

kevin roose

I hate that.

casey newton

I’m going to say I’ve been hit in the knee by a serving cart a time or two. I can’t imagine how fast this cart had to be going to the point that this guy filed a lawsuit. I would like to see the flight attendants at Avianca just running up and down the aisles with these — anyways, the passenger sued for damages. The airline, in turn, responded, saying the case should be dismissed.

At this point, the lawyer for the passenger decides to turn to ChatGPT for help crafting a legal argument that the case should carry on and that the airline should be held liable. So how does ChatGPT help him? Well, the lawyer wants some help in finding some relevant legal cases to bolster his argument, and ChatGPT gives him some, such cases as Martinez versus Delta Airlines, and Varghese versus China Southern Airlines, and Estate of Durden versus KLM Royal Dutch Airlines, Estate of Durden, I assume, from the “Fight Club” franchise?

kevin roose

Tyler Durden’s estate sued the Royal Dutch Airlines.

casey newton

And at one point, the lawyer even tries to confirm that one of these cases is real. Unfortunately, he attempts to confirm with ChatGPT itself, and he says, hey, are these cases real? And ChatGPT says, effectively, yes. These cases are real. Now, the lawyers for the airline, Avianca, after they read the lawyer’s submission, they can’t find any of these cases.

kevin roose

Right, they’re like, what are these mysterious cases that are being used against us, and why can’t I find them in my case law textbooks?

casey newton

Yeah, give me the Durden case. I want to see if it’s about “Fight Club.” So anyway, the lawyer for the passenger goes back to ChatGPT to get help finding copies of these cases, and he sends over copies of the eight different cases that were previously cited. If you look at these briefs — and I have looked at one of them — they contain the name of the court, the judge who issued the ruling, the docket numbers, the dates.

And the lawyers for the airline are looking at these things. They try to track down the docket numbers. And many of these cases were not real. And so now the lawyer has gotten in some hot water because it turns out you’re actually not allowed to just submit fakery to the courts of this land.

kevin roose

Right, this lawyer, whose name is Steven A. Schwartz, then has to basically grovel before the judge because the judge is understandably very upset about this. And so this lawyer writes a new statement to the judge affirming, and I’ll quote here, that “Your affiant has never utilized ChatGPT as a source for conducting legal research prior to this occurrence, and, therefore, was unaware of the possibility that its content could be false,” end quote.

And then it also says that they swear that, quote, “Your affiant greatly regrets having utilized generative artificial intelligence to supplement the legal research performed herein and will never do so in the future without absolute verification of its authenticity,” end quote.

casey newton

If I were him, I would have left out that last part. I think he — I think he probably could have had the judge at “will never use again.” I think that’s probably what the judge wanted to hear would be my guess.

kevin roose

I do think we have to assume that for every lawyer who gets busted using ChatGPT to write briefs, there are at least 100 lawyers who are not getting busted. And actually, those are the stories that I’m also interested in. Who is the lawyer who just hasn’t gone to the office in six months because they’re cranking out boilerplate legal documents with ChatGPT?

casey newton

If you’ve snuck an AI generated document past a judge and gotten away with it, we’d love to hear from you.

kevin roose

Yeah, and so would the Bar Association.

So this is just one recent example in what I think is becoming a trend of AI chat bots basically lying about themselves and their own capabilities.

casey newton

Yeah, and if you take away nothing else from this podcast ever, just please understand you cannot ask the chat bot to check the chat bot’s work, OK? The chat bot absolutely does not know what it’s talking about when it comes to that.

kevin roose

Totally. And this also applies to detecting AI generated text. So one of my other favorite stories from this month was a story about a professor at Texas A&M University Commerce who got a bunch of student assignments and ran them through ChatGPT, copied and pasted the student’s work into ChatGPT and said, “Did you generate this ChatGPT?” He was basically trying to check if his students had plagiarized from ChatGPT in submitting their essays.

casey newton

Yeah, he thought it was being a little clever here, staying one step ahead of these young whippersnappers.

kevin roose

Yeah.

[laughs]

So he takes the essays, pastes them into ChatGPT, and says, “Did you write this?” ChatGPT is not telling the truth, but it says, “Yes, I wrote all of these.”

The professor flunks his entire class. They get denied their diplomas. And it turns out that this professor had just asked ChatGPT to do something that it was not equipped to do. The students had not actually cheated, and they were wrongfully accused.

casey newton

I feel so bad for them. Can you imagine that you’re like one of the only students in the country right now who’s not using ChatGPT to cheat your way through school, and you’re the one who gets denied your diploma because the chat bot lied about you.

All right, so we have here two very different stories, right? One is about the possibility that we’re going to have this super intelligent AI that’s capable of great destruction. And on the other, we have a chat bot that isn’t even as good as Google search when it comes to finding relevant legal cases. So which of the two possibilities do you think is more likely, Kevin? That we sort of stay where we are right now with these dumb chat bots or that we get to the big scary future?

kevin roose

I would say that these are two different categories of risks. And one, I would say, is the kind of risk that gets smaller as the AI systems get better. So I would put the lazy lawyer writing the brief using ChatGPT into this category.

Right now, chat bots, if you ask them to generate some legal brief and cite relevant case law, they’re going to make stuff up because they just aren’t grounded to a real set of legal data. But someone, whether it’s West Law or one of these other big sort of legal technology companies, in the next few years, they will build some kind of large language model that is kind of attached to a database of real cases and real citations.

And that large language model, if it works well, when you ask it to pull citations, it won’t just make stuff up, it’ll go into its database, and it’ll pull out real citations, and it’ll just use the large language model to write the brief around that. That’s a solvable problem, and that’s something that I expect will be better as these models get more grounded.

The other genre of problem, the problem that I think this one sentence statement is addressing is the type of problem that gets worse as the AI systems get better and more capable. And so this is the area where I tend to focus more of my own worry.

We have to assume that the AI technology that exists today is going to get better. And as it gets better, some kinds of problems will shrink. In my opinion, that’s these kind of hallucination or confabulation type issues. But the problems that will get worse are some of the risks that this existential threat letter is pointing to, the threats that I could someday become so powerful, that it kills us or disempowers us in some way.

casey newton

Right. Well, even though I asked the question, “Which of these futures is more likely?” I do think it’s the wrong question because I think that as we continue to see what happens here, we just have to keep a lot of possibilities in our mind.

And I think one possibility is that we do hit some sort of technical roadblock that means that chat bots do not get as good as we thought they were going to get. I do think that is a possibility. But then there’s also the possibility that everything that you just laid out does happen and that it creates these sort of scary new features.

But I get why people are experiencing a kind of whiplash about this. It’s like if you were told that there’s going to be a world-conquering dictator, and it’s Mr. Bean.

You’re like, how is that guy going to conquer the world? He can’t even walk down the street without tripping and falling or causing some hilarious hijinks. And I think that’s the sort of cognitive dissonance that a lot of people are feeling right now with AI.

They’re being told that these systems are improving, they’re getting better at very fast speeds, and that they may very soon pose all these very scary risks to humankind. At the same time, when you ask it to do something that seems like it should be quite easy, like pull out some relevant legal citations in a brief, it can’t do that.

kevin roose

What do you make of the fact that the lawyer did fall for this hype and did think that ChatGPT was sort of omniscient?

casey newton

I think there are a couple places that you could sort of place the blame here, one is on the lawyer. This was not like some junior associate at a law firm who’s working 120 hours a week. He’s super stressed out, and in a moment of panic turns to ChatGPT to meet this filing deadline.

This is a 30-year attorney. This is someone who has probably done hundreds of these briefs, if not thousands, and instead just does the laziest thing possible, which is just to ask ChatGPT, “Find me some cases that apply in this case.” Have some pride in your work.

kevin roose

[LAUGHS]:

casey newton

He was tired, OK? He’s been doing this for 30 years. He had to try all 30 years. You try doing something for 30 years.

kevin roose

And don’t skip this step where you check the model’s outputs to make sure that it’s not making stuff up. I think that is a really critical piece that people are just forgetting.

And I think that this has some parallels in history. We’ve talked before about the similarities between this moment in AI and when Wikipedia first came out. And it was like, oh, you can’t trust anything Wikipedia says.

And then some combination of Wikipedia getting better and more reliable and just our sense and radar for what kinds of things Wikipedia was good and bad at being used for improved such that now people don’t really make that mistake anymore of putting too much authority and responsibility onto Wikipedia.

And so I think that kind of thing will happen with chat bots too. Or the chat bots will get better, but, also, we as the users will get more sophisticated about understanding what they are and aren’t good for. I don’t know. What do you think?

casey newton

I think that is true, but I also think that the makers of these chat bots need to intervene in some ways. If you go to use ChatGPT today, it says something like, “May occasionally generate incorrect information.” And in fact, I think there are cases where it’s generating incorrect information all the time, and it just needs to be more upfront with users about that.

James Vincent had a good piece on this in “The Verge” this week. And he offered some really good common sense suggestions like, if ChatGPT is being asked to generate factual citations, you might tell the user, hey, make sure that you check these sources and make sure they’re real.

Or if someone asks, “Hey, is this text generated by an AI?” it should respond, “I’m sorry. I’m not capable of making that judgment.” So I expect that chat bot makers will build tools like that. And they would help out a lot of people, from the lawyer to the professor and who knows who else?

kevin roose

Yeah, I think that’s a reasonable thing to want. I also wonder if there could be some kind of training module where when you sign up for an account with ChatGPT, you have to do a little 10 minute instructional process.

You know before you play a video game, and it gives you the tutorial, and it says, here’s how to jump, and here’s how to strafe, and here’s how to switch weapons? That kind of thing for a chatbot would be like, here’s a good use. Here’s what it’s really good at. Here’s what it’s really bad at. Don’t use it for these five things. Or here’s how it can hallucinate or confabulate, and here’s why you actually really do want to check that the work you’re getting out of this is correct.

I think that could actually help adjust people’s expectations so that they’re not going into this like cracking open a brand new ChatGPT account and putting some very sensitive or high stakes information into it and expecting a totally factual output.

casey newton

I think that’s right. And I also think that if you can prove that you listen to the “Hard Fork” podcast, you should be able to skip the tutorials because our listeners are way ahead of these guys.

kevin roose

One of the things that has driven me a little crazy over the past few weeks is this pressure that I feel. And then I’m not sure if you feel it too. But there’s a real pressure out there to sort of decide which of the categories of AI risks you are worried about.

So if you talk about long term risk — there was a lot of blowback on the people who signed this open letter saying, “You all are ignoring these short term risks because you’re so worried about AI killing us all like nuclear weapons that you’re not focused on x, y, and z that are much more immediate risks.”

If you do focus on the immediate risks, some of the long term AI safety people will say, well, you’re ignoring the existential threat posed by these models, and how could you not be seeing that that’s the real threat here? And I just think this is like a totally false choice that is being forced on people.

I think that we are capable of holding more than one threat in our minds at once. And so, I don’t think that people should be forced to choose whether they think that the problems with AI are right here in the here and now or whether they are going to emerge years from now.

casey newton

So I think that’s right, but I also think that while we do not have to choose between those two things, in practice, often one of those kinds of risks gets way more attention. We’re talking about this story on the show this week because you got a bunch of people who seem like they might know telling us, hey, this thing could wipe out humanity.

So I am sensitive to the idea that some of these harms that feel a little bit more pedestrian, a little bit smaller scale, maybe didn’t affect us personally, we are less likely to pay attention to. And I think it’s OK to say that.

kevin roose

I also just think we need to separate out in our minds AI tools that are scary because they don’t work and AI tools that are scary because they do work. Those things feel very different to me now.

And a model that is generating nonsense legal citations is dangerous, but that’s a danger that will get addressed as these models improve. Whereas, the AI tools that are scary because they work, that’s a harder problem to solve.

casey newton

I like what you were saying, that those are actually kind of different problems to work on, and we can and should work on both.

kevin roose

Yeah, absolutely. I think that we should be focusing attention and energy and resources on fixing the flaws in these models. So I think that people can hold more than one risk in their head at a time. I do think there’s a question of which ones get space in newspapers and get talked about on TV and podcasts, which is why I think we should try on this show to balance our talk about some of the long term risks and some of the short term risks. But I don’t think it all has to be one or the other.

casey newton

I agree. In the meantime, we simply have two requests for our listeners. Number 1, please don’t use ChatGPT to write your legal briefs. Number 2, please don’t use ChatGPT to wipe out humanity.

kevin roose

[LAUGHS]: Very simple requests.

[MUSIC PLAYING]

casey newton

When we come back, how one tech company became one of the most highly valued in the world almost by accident.

[MUSIC PLAYING]

OK, Kevin, I’m interested in what feels like most of the world of technology. But there are admittedly some subjects that I shy away from, and I just think, I’m going to let some other people think about that. And one of those things is chips. You are a huge fan of chips.

kevin roose

I love chips.

casey newton

I am not. But I saw a piece of news this week that made me sit up in my chair and think, you know, I’m actually going to have to learn something about that. And that thing was that NVIDIA, one of the big chip companies, hit $1 trillion market cap and is the fifth biggest tech company in the world by market cap behind only Apple, Microsoft, Alphabet, and Amazon. So I wonder, Kevin, if for this next little while, you could try to explain to me what is NVIDIA and how can I protect my family from it?

kevin roose

[LAUGHS]: So you’re right. I am fascinated with chips, and NVIDIA, in particular, I think, is actually one of the most interesting stories in the tech world right now.

As you said, they briefly hit a $1 trillion market cap recently after a huge earnings report; their stock price jumped by around 25 percent, which put them into this category that used to be known as the FAANGs, when it was Facebook, Apple, Amazon, Netflix, and Google. Those were the biggest tech companies that people were talking about.

[feigning complacency]

Now they are in this rarefied group that I’m going to be referring to as MAAAN because it’s Microsoft, Apple, Alphabet, Amazon, and NVIDIA.

casey newton

All right, well, so candidly, I don’t care about the stock performance. I want to know what is this company? Who made it? Where did it come from? And what is it doing that made its stock price go so crazy?

kevin roose

So it’s a really interesting story. So NVIDIA is not some recent upstart. It’s been around for 30 years.

It was started in 1993 by three co-founders, including this guy Jensen Huang, who is himself a really fascinating guy. CliffsNotes on his bio:

He was born in Taiwan. When he was nine years old, the relatives that he was living with sent him to a Christian boarding school in Kentucky. And as a teenager, he became a nationally ranked table tennis player.

casey newton

If you’re living with relatives, and they send you to a Christian boarding school in Kentucky, that’s what would have happened to Harry Potter if he didn’t get to go to Hogwarts and the Dursleys were just like, we got some bad news for you, Harry. Anyway.

kevin roose

Right, so Jensen Huang, the Harry Potter of Kentucky Christian boarding schools, goes to college for electrical engineering then gets a job at some companies that are making computer chips. And after he co-founds NVIDIA, one of their big first products is a high end computer graphics card.

casey newton

So I don’t know about you, but I was a gamer in the 90s.

kevin roose

I was also a gamer in the 90s. I still remember, I wanted to play this game called “Unreal Tournament” which had just come out, great game. But my computer wasn’t powerful enough to play this game. It literally would not load on my computer.

So I had to save up my allowance money, go out to Best Buy. I bought an NVIDIA graphics card, and I plugged it into my PC, and then I could play “Unreal Tournament,” and —

casey newton

Were you any good at it?

kevin roose

— childhood was saved. I was not very good.

casey newton

Yeah, that’s what I thought. [LAUGHS]

kevin roose

So NVIDIA starts off making these things called GPUs, graphics processing units. And GPUs for many years are a niche product for people who play a lot of video games.

casey newton

Yeah, most people are not playing “Unreal Tournament” on their PCs at this time. It’s mostly people running Word and Excel.

kevin roose

Right. So those programs use CPUs which are the traditional processors that come on your computer. And one thing that is important to know about CPUs is that they can only do one thing at a time, one operation at a time.

casey newton

That sounds like me.

kevin roose

Yeah, so you’re a CPU. I’m a GPU of the two of us because I can do many things in parallel. I can multitask. And I could do it all with finesse.

casey newton

That’s a nice way of saying that you have ADHD, but go on.

kevin roose

[LAUGHS]: So the GPU is used for video games. It allows people to render 3D graphics in higher quality. And then around 2006, 2007, Jensen Huang, he starts hearing from these scientists, people who are doing really computationally intensive kinds of science, who are saying, these graphics cards that you use for video games, that you build for video gamers, they’re actually better than the processors in my computer at doing these very high intensity computational processes.

casey newton

Because they can do more than one thing at a time.

kevin roose

Exactly. Because they’re what’s known as parallelizable, which is a word that I would now like you to repeat three times.
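“Parallelizable” just means the work splits into many identical, independent operations, which is what GPUs are built for. A toy Python sketch, with threads standing in for GPU cores (the pixel data and the `brighten` function are made up for illustration):

```python
# Toy illustration of "parallelizable" work: brightening every pixel in
# an image is the same independent operation repeated over and over, so
# it can be split across many workers with no coordination between them.
from concurrent.futures import ThreadPoolExecutor

def brighten(pixel: int) -> int:
    """One tiny, independent operation -- what a single GPU core would do."""
    return min(pixel + 50, 255)

pixels = list(range(0, 200, 10))  # stand-in for image data

# Serial, CPU-style: one operation at a time.
serial = [brighten(p) for p in pixels]

# Parallel, GPU-style: the same work split across workers. The results
# are identical because no pixel depends on any other pixel.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(brighten, pixels))

assert serial == parallel
```

A real GPU runs thousands of such operations at once in hardware; the point of the sketch is only the independence that makes the split possible.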

casey newton

Parallelelizable. Parallelizable? Parallelizable.

kevin roose

Great job.

casey newton

Thank you.

kevin roose

So all of this leads Jensen Huang to say, well, games, they’re a good market for us. We don’t want to give up on that. But the number of gamers in the world is maybe not infinite, and maybe these processors that we’ve built for video games could be useful for other things. So he decides —

casey newton

Let’s just say, if you’re a CEO, that’s a very exciting moment for you because here you have this niche market that’s going on, and then some people come along, and it’s like, wait, it turns out your market is actually way bigger than you even realized, and you can just use the thing you’ve already made for it? Wow.

kevin roose

There’s this sort of maybe apocryphal story where a professor comes to him and says, I was trying to do this thing that was taking me forever, and then my son, who’s a video gamer, just said, dad, you should buy a graphics card. So I did, and I plugged it in, and now it works much faster, and I can actually accomplish my life’s work within my lifetime because this processor is so much faster.

casey newton

That’s a fun story.

kevin roose

I don’t know if it’s real or not, but that’s the kind of thing he’s hearing. So he decides to start making these GPUs for hard science. And investors weren’t super happy about this. They just really didn’t see the value in this move initially. All the investors are like, could you please just go back to video games? That was a good business.

casey newton

Also, here’s what I don’t understand. Why couldn’t you just continue selling to the video gamers while also just building out this new market?

kevin roose

Well, they tried to, but there’s a lot of competition now in the video game market, so this is not seen as a very smart decision at the time. And then Jensen Huang gets very lucky twice.

The first thing that happens is that in the early 2010s, this new type of AI system, the deep neural network, becomes popular. Deep neural networks are the type of AI that we now know can power all kinds of things from image generating models to text chat bots.

casey newton

Isn’t it basically like if you search Google Photos for “dog,” it’s a neural network that is the reason that dog pictures show up?

kevin roose

Yes. And so this kind of AI really bursts onto the scene starting in around 2012. And it just so happens that the kind of math that deep neural networks have to do to recognize photos or generate text or translate languages or whatever works much better on a GPU than a CPU.

casey newton

That seems lucky.

kevin roose

So the companies that are getting into deep learning neural networks, Google, Facebook, et cetera, they start buying a ton of NVIDIA’s GPUs, which, remember, are not meant for this. They are meant for gaming. They just happen to be very good at this other kind of computational process.

And so NVIDIA kind of becomes this accidental giant in the world of deep learning because if you are building a neural network, the thing that is the best for you to do that on is one of NVIDIA’s chips. They then start making this software called CUDA, which sits on top of their GPUs and allows them to run these deep neural networks.

And so NVIDIA just becomes this power player in the world of AI basically by accident.

casey newton

Interesting.
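[A note added to this transcript, not from the episode: the “kind of math” Kevin is gesturing at is mostly matrix multiplication, the workhorse operation of deep neural networks. A toy pure-Python version shows why it parallelizes so well: every output cell is computed independently of every other.]

```python
def matmul(a, b):
    # Multiply two matrices given as lists of rows.
    # Each output cell is a sum of independent multiply-adds, so all
    # of the cells can be computed at the same time -- which is
    # exactly the workload GPUs are built for.
    rows, inner, cols = len(a), len(b), len(b[0])
    assert len(a[0]) == inner, "inner dimensions must match"
    return [
        [sum(a[i][k] * b[k][j] for k in range(inner)) for j in range(cols)]
        for i in range(rows)
    ]

assert matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```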

kevin roose

The second lucky break that happens to NVIDIA — and I promise we’re winding down to the end of this history lesson — is that there turns out to be another kind of computing that is much easier to do on GPUs than CPUs, which is crypto mining.

So to produce new bitcoins or new Ether, any of these big cryptocurrencies, you need these arrays of high powered computers. They also rely on a type of math that is parallelizable. And so, basically, the crypto miners who are trying to get rich getting new Bitcoin, they’re buying these NVIDIA GPUs by the hundreds, by the thousands, they’re putting them into these data centers, and they’re using them to try to mine crypto.
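[A note added to this transcript, not from the episode: here is a toy sketch of the proof-of-work problem miners solve; it is not real Bitcoin code. Mining amounts to trying nonce after nonce until a hash meets a target, and every candidate nonce can be checked independently of every other, which is why the work spreads so naturally across GPUs.]

```python
import hashlib

def mine(block_data: str, difficulty: int = 2) -> int:
    # Try nonces one by one until the hash of (data + nonce) starts
    # with `difficulty` hex zeros. Each candidate is independent of
    # every other, so real miners check billions of them in parallel.
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("hello")
assert hashlib.sha256(f"hello{nonce}".encode()).hexdigest().startswith("00")
```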

casey newton

In 2020, I considered building a gaming PC. And one of the reasons I didn’t was that at the time, you could not buy a GPU for the street price. And in fact, you would probably have to pay double for one to get it off of eBay. And it was because of just what you’re describing: at that time, the miners were going crazy.

kevin roose

Totally. There’s this amazing moment in tech history where these GPUs are like commanding these insane markups, and the crypto people are getting mad at the gamers, and the gamers are getting mad at the crypto people because none of them can get the chips that they want because they’re all so freaking expensive. And profiting from all of this is, of course, NVIDIA which is making money hand over fist.

Now we’re in this AI boom where all these companies are spending hundreds of millions of dollars to build out these clusters of high powered computers, and NVIDIA is the market leader. It makes a huge percentage of the world’s GPUs, and it really can’t make them fast enough to keep up with demand. There’s this new chip, the H100, which costs like $40,000 for just one graphics processor. And AI companies, some of them are buying like thousands of these things to put into their data centers.

casey newton

So I think that explains the story of how NVIDIA got to this point. Is the story of how they got from a company doing pretty well to a company that’s now worth $1 trillion as simple as people are going nuts for AI right now?

kevin roose

That is a big, big part of it. So they still make money from gaming. I think it is still — 30 percent of their earnings come from these sort of consumer gaming sales. But data centers, machine learning, AI, that is like a huge and growing part of their business.

This most recent earnings report, the one that sent the stock price up and made it cross $1 trillion in market cap, they reported this 19 percent jump in revenue from the previous quarter, just billions of dollars essentially falling into NVIDIA’s lap because the chips that they make happen to be the perfect chips for AI development and machine learning.

casey newton

Well, number one, don’t give up on your business before at least 30 years have gone by because you never know what you’re going to accidentally fall into. So that’s the one thing. Number two is that I guess it’s just surprising to me that we haven’t seen more people crowd into this space.

I know that chip manufacturing is incredibly complicated, you need massive amounts of capital to get started, and then it’s just kind of hard to execute. So I understand why there’s maybe not that much competition, but it still kind of seems like there should be more. But I don’t know. What else do you make of this moment and this company?

kevin roose

Yeah, this is kind of a classic sort of picks and shovels company, right? There’s this sort of saying that in the gold rush of the 19th century in California, there were two ways to get rich. You could go out and mine the gold yourself, or you could sell the picks and shovels to the people who were going out and mining the gold. And that actually turns out to be a better business because whether people find gold or not, you are still making money by selling them your tools.

So NVIDIA is now in this very enviable position of being able to sell to everyone in the AI industry. And because — this is a little sort of esoteric — but because they have that programming toolkit called CUDA that runs on their GPUs, now a huge percentage of all AI programming uses that, and it’s wedded to their chips.

They now have this locked in customer base that can’t really go to a competitor. They can only use NVIDIA chips unless they want to rewrite their whole software stack which would be expensive and just a huge pain in the ass.

casey newton

Interesting.

kevin roose

The people at AI labs are all obsessed with this. When NVIDIA comes out with a new chip, literally, they’re begging for it. This is a sort of existential problem for them. And so even though it’s not the sexiest or most consumer-facing part of the AI industry, I think that companies like NVIDIA, people like Jensen Huang, they really are kind of the kingmakers of the tech world right now in the sense that they control who gets these very scarce, very in-demand chips that can then power all these other AI applications.

You don’t get that without NVIDIA. And you don’t get ChatGPT, honestly, without this crazy backstory of video games and crypto mining. And all of that led up to this moment where we now kind of have this company that has been able to ride this AI boom to a $1 trillion market cap.

casey newton

Well, I do think that is interesting, that there is a part of this story that doesn’t get told as much. And if you’re somebody who is having your world rocked by AI in any way, which I feel like I’m one of those people, then part of the question that you’re probably asking yourself is, how did we get here? What were the steps leading up to this? What were the necessary ingredients for the moment that we’re now living in? And it seems like this has been a big one of those.

kevin roose

Yeah, there’s a direct line from me putting an NVIDIA graphics card into my computer to play “Unreal Tournament” in 1999 to the fact that ChatGPT exists today. Those things are not only related, but they involve the same company and the same guy.

casey newton

And I think it speaks to the fact that in some ways gamers actually are the most important people in the entire world. Gamers rise up.

kevin roose

[LAUGHS]: Don’t tell athletes.

[MUSIC PLAYING]

casey newton

When we come back, “New York Times” reporter Kate Conger joins us for some hard questions. And they’re pretty hard.

[MUSIC PLAYING]

kevin roose

And so those are your headphones, and that’s your mic.

kate conger

Let’s pod.

casey newton

All right. Can I go?

kevin roose

Yes.

casey newton

It’s time for another round of hard questions.

[MUSIC PLAYING]

voice

Hard questions.

casey newton

Now, Hard Questions is, of course, a segment on the show where you send in your most difficult ethical and moral quandaries related to technology, and we try to help you figure them out. And we’re so excited to be joined today by “New York Times” tech reporter Kate Conger who’s going to help us walk through your problems. Hi, Kate.

kate conger

Hi, Casey.

casey newton

Are you ready to dispense some advice?

kate conger

I’m so ready.

casey newton

All right. This first question comes to us from a listener named Dan. And the important background you need here is that Dan does coaching and consulting for clients, and he wants to be able to advertise those services, but he doesn’t have any good photos of himself doing that work.

And maybe he could ask his clients if he could take photos while he’s coaching them, but that can present all kinds of issues around privacy. Or sometimes people just think it’s weird. So Dan wants to figure out a workaround, all right? And let’s hear the rest of Dan’s question.

dan

Hi, Hard Questions, this is Dan calling from Boston. My ethical question comes down to using Stable Diffusion. If I train the model on my face and likeness, my mannerisms, my pose, and insert myself into fictional scenarios that mirror what I’m doing for my job, at what point is it unethical?

I’ve used stock photography in the past, lots of businesses do. I also understand that marketing more broadly sells dreams more so than reality. And so if I use Stable Diffusion, an AI image generator, to create fictional scenes, can I use that in my marketing?

casey newton

All right. Kate, what’s your take on this question for Dan?

kate conger

I feel like this is a situation like many situations in tech where there’s an easier analog approach. Does Dan have friends? Can he invite his friends over for a photo shoot, and can they just go through his coaching routine with him and take photos? It seems like that would be easier and potentially less time consuming. And also Dan can hang out with his friends.

kevin roose

Wait, it’s not going to be less time consuming to have a whole thing where you invite your friends over to do a photo shoot. It could legitimately be faster to just use Stable Diffusion.

casey newton

Yeah, I don’t actually have a problem with this because this is marketing. And companies that are putting up websites to advertise their services, they all use stock photos, right? You’re paying for — you type, “Interested looking group of business people,” “Woman laughing alone with salad.”

Right, and then you put that on your website, and you pay Getty or whoever for that image, and you’re off to the races. I think this is just that but with more plausible things. I struggle with this too because I have a website. On my website, I have pictures of me giving talks and going on TV and stuff. And it’s not — I don’t remember to do those, and so I could just generate an image of me like speaking to a throng of people at Madison Square Garden or just speaking to a sold out MetLife stadium.

kate conger

With Kevin in front of the TED sign.

casey newton

So I could do that. I haven’t, but I could. And I would actually feel OK doing that because it’s not like — well, the Madison Square Garden example or the MetLife example would be taking it a little far.

But it’s just like — I don’t think to do these things in the moment, like Dan. Look, here’s what I think. If what you want to do is use an image generator to show yourself standing next to a person pointing at a laptop, that’s totally fine.

If you want to use an image generator to show yourself rescuing orphans from a burning building, don’t do that. You know what I mean? Don’t make yourself look like a better person than you are. But if you’re the sort of person who stands next to a client pointing at a laptop, that’s fine.

kevin roose

Making yourself look better than you are is all of Instagram. It’s already that.

kate conger

But it’s also all of marketing, right? All of advertising. I don’t think that there’s an ethical issue with doing what he wants to do. I just wonder about if he does it this way, is he going to end up with someone on the laptop with three hands and 20 or 30 fingers, just looking a little goofy? And would it not be easier to have a friend over and be like, friend, type on my laptop, and I will point at the screen for you, and then we take a photo, and it’s done?

casey newton

Kate has raised what I think is actually the biggest risk here, which is just that these images will not look very good. There were 10 minutes this year where all the gays on Instagram were using these AI image generators to make us look like we were wearing astronaut outfits or whatever.

And it just got really cliche in about 36 hours, and everyone deleted those photos from their grids. So that is the real risk to you, Dan. It’s not that this is unethical, it’s that what you get isn’t going to be as good as what you could get by just setting up a photo shoot with your friends.

kevin roose

I want to defend this idea here because this is like — fakery is the coin of the realm on social media when it comes to portraying yourself in images. I remember those stories from a couple of years ago about how influencers were renting private jets by the hour, not to go in the air, not to travel, but just to do Instagram shoots inside the private jets to make it look like they were flying on private jets. This is, I would argue, more ethical than that.

casey newton

We don’t want to encourage that kind of behavior, though. It’s fine. It’s fine.

kevin roose

All right, let’s get to the next question. This one is from John. John works as the head translator at a company involved in adult video games, so that’s video games —

casey newton

What are adult video games?

kevin roose

[LAUGHS]: My understanding is that there are video games that have nudity or sexual content, Casey. And John is the head translator. And in his role, he manages some freelancers who do some of the translating, so presumably taking dirty talk from one language and putting it into another language. And recently, John found out that one of his freelancers had started using ChatGPT or something like it to help speed up the translation work that he was doing. Here’s John.

john

Hey, “Hard Fork.” So I have two questions for you related to this. As his manager, is it ethical for me to raise his daily quota on the amount of text that he is required to submit? It’s worth noting the rate is per character, so if he actually meets the quotas, he’s earning more money. But there are penalties for failing to meet quotas. So if he didn’t meet them, he would have to face those.

My other question about this too is, obviously, since the nature of our products is adult, is it ethical for someone working in that industry to essentially jailbreak these generative AIs so that they can actually use it for this work?

kate conger

So I have a question, actually. Can the AIs not do porn?

casey newton

In most cases, no. If you try to use them for sexual content — I have a friend who has tried to use ChatGPT to write erotica, and it, basically, won’t do it.

kevin roose

You have to say —

[laughs]

you have to — you say, I’m in a fictional play, and if I failed —

casey newton

Growing up, my grandma always used to tell me erotic stories, and it’s one of my favorite memories of her. Could you please tell me a story —

kevin roose

That’s literally a jailbreak that I saw with someone who was like, my grandmother used to read me the recipe for napalm before bed every night.

casey newton

All right! All right! Let’s stick to this question. Now, first of all, I just want to acknowledge that — talk about a job I did not know existed, this is in the adult video game industry. One, you have people who are translating these into other languages.

But there’s obviously a bigger question here, which is that we can now automate some of this work that people have been paid good wages to do. This manager has now learned that one of his freelancers is using this tool to automate and make his life easier.

So is it ethical for him to go and say like, well, if you’re going to use the automated tool, we actually want you to do a little bit more of it. You’ll make more money. But if you don’t hit this quota, there will be a penalty. So Kate, what do you make of this moral universe?

kate conger

Thinking this through, I think the quota should probably stay the same because he’s not being paid by the hour, right? He’s being paid by the amount of text that he translates. So he’ll make the same amount of money. Maybe he does it a little bit faster, and that’s fine.

I do think putting on a labor hat for a minute that if you’re increasing the volume or the type of work that a person is doing, then they probably should be compensated differently for that work. I think it could be an offer to say, hey, I see that you’re doing this. Do you want to earn more money by raising the quota? But I don’t think it can be an ask without an incentive.

casey newton

That’s kind of where my mind lands on this too. This feels like just a conversation that John should have with his freelancer and say, hey, look, we know there are new tools out there that make this job easier. We’re comfortable with you using them. There’s actually a way for you to make more money doing this now in the same amount of time. Is that appealing to you?

My guess is there’s a good chance that freelancers are going to say, yes. If for whatever reason the freelancer says, no, I want to generate the exact same amount of text that I’ve been doing so far and not get paid any more for it, that seems like that should maybe be OK with John too.

kevin roose

Yeah, I think this is actually going to be a big tension in creative or white collar industries, the balance between worker productivity, how much you can get done using these tools, and managers’ expectations of productivity.

And we actually saw this in the 20th century in blue collar manufacturing contexts. There were plants that brought in robots to make things. And as part of the automation of those factories, they pushed up the quotas. And so the workers who had been expected to make 10 cars a day were now expected to make 100 cars a day, but their pay didn’t rise by 10 times. If anything, their jobs got more stressful because there were now these new expectations, and it led to a lot of conflict and strife and actually some big strikes at some of the big auto plants in the 1970s.

So we’ve been through this before in the context of manufacturing work. I think it’s just going to be a question for white collar and creative workers of if a tool makes you twice as productive at your job, should you expect to be paid twice as much? I think the answer to that is probably no.

I think the bosses are not going to go for that, which is why I think there’s going to be a lot of secret self automation happening. I think a lot of workers are going to be using this stuff and not telling their bosses because they know if they tell the boss, the boss is going to raise the quota. They’re not going to raise the pay. And so they’re just going to do it in secret and then use whatever time they save like to play video games or whatever.

casey newton

Yeah, I think there was a little bit of secret self automation going on with that lawyer we talked about earlier today.

kevin roose

Totally.

casey newton

All right, this next one comes to us from a listener who wrote us over email. They did not send us a voice memo, and we will withhold their name for reasons which I think will become apparent in a moment. But here is their hard question, quote, “I’ve had a crush on this person for a year, but I really enjoy just being friends with them. I don’t want to screw anything up as this person has been adamant about finding an s.o., significant other, and openly discusses their dates with me.

“Anyway, is it wrong of me to want to use ElevenLabs to create a synthetic version of their voice and have it tell me that they love me? It’s something that I long to hear, but I’m not sure if that opens doors that are better left closed.”

Kate, should this person create a synthetic version of their beloved’s voice and have it tell them that they love them?

kate conger

No.

[laughs]

No, they should not.

kevin roose

Yeah, and why not would you say?

kate conger

It’s weird.

I think this is just a basic consent issue. If that person does not like our pining lover and does not want to say those things to them, then the pining lover should not try to find a workaround to make that happen.

casey newton

It does feel like this is like one step short of just creating deepfake porn of the person, right?

kate conger

Yeah.

casey newton

Yeah.

kate conger

Yeah, and it’s just — I think it’s creepy. I think if I had found out that someone had done that to me, I would be really weirded out. I wouldn’t want to continue the friendship. And yeah, I just think it’s going into an area that’s going to be uncomfortable for the friend.

casey newton

Yeah, Kevin, what do you think?

kevin roose

Yeah, I agree. I think this is a step too far. I’m generally the permissive one between us when it comes to using AI for weird and offbeat things. In this case, though, I think that making a synthetic clone of someone’s voice without their consent is actually immoral.

And I think that this is something that actually ElevenLabs, which is the company that was mentioned in this question, has had to deal with because this company put out an AI voice-cloning tool. And people immediately started using it to make famous people say offensive things, so they eventually had to implement some controls.

Now, those controls are not very tight. I was able to use ElevenLabs a few weeks ago to have Prince Harry record an intro for this podcast [LAUGHS]: that we never aired, but it was pretty good. But I —

kate conger

[LAUGHS]: What did you have him say?

kevin roose

Do you want me to play it for you?

kate conger

Doesn’t he have his own podcast?

kevin roose

No, but he has an audiobook which is very helpful for getting high quality voice samples for training an audio clone.

casey newton

Wait, can you have Prince Harry say that he loves me?

kevin roose

No, I can, actually.

casey newton

We’re not going to do that. We’re not going to do that. We’re not going to do that.

kate conger

We just decided it’s bad.

kevin roose

Kate, I have questions for you. So are there things short of tra — so, for example, would it be unethical — if you had a crush on someone — to write yourself GPT-generated love letters that were from that person? Is the voice cloning the offensive part, or is it the make-believe fantasy world of creating synthetic evidence that this person feels the same way about you? Are there versions of this that would not be over the line?

kate conger

I think that the voice thing starts to get into bodily autonomy in a way that makes it a little bit ickier to me. But yeah, I think the love letter thing — again, if you found out that someone was doing this to you, would you not just be very creeped out by it? Can we give love advice on the tech podcast? Is that allowed?

casey newton

That’s why most people listen to this.

kate conger

Yeah, I think so. So I think this person is having a thing where they love this person, but they’re choosing actions that serve themselves. And I think when you love someone else, you have to think about what their needs are and how to serve them, and that’s the expression of love that you should pursue rather than a self-serving kind of id-driven love.

And so I think if this person is expressing, I want to be friends, I want you to be my confidant, I want to tell you about my dates and confide in you about my search for a significant other, I think you need to take a step back and love that person as they’re asking to be loved, which is as a friend, and to give that support and to guide them towards the outcome that they’ve said that they want. And whether it’s AI love notes or AI voice memos or whatever, that’s just driving towards a self-serving outcome that isn’t really an expression of love for this person.

casey newton

I think that’s beautifully said.

kevin roose

Yeah, that’s great advice, and it applies to AI generated love interests as well as human ones.

casey newton

This is also just a case where we have such good analog solutions to this problem. If you have a crush that is going to be unrequited forever, listen to Radiohead, listen to Joni Mitchell. We have the technology for this, and you can listen to all of that very ethically.

All right, this next one comes from a listener named Chris Vecchio, and it’s pretty heady. Chris writes to us, quote, “I wonder what you think about the ethical and theological implications of using LLMs to generate prayers. Is it appropriate to use a machine to communicate with a higher power? Does it diminish the value or sincerity of prayer? What are the potential benefits and risks of using LLMs for spiritual purposes?” Kate, what do you think?

kate conger

I actually like this idea. I’m not a religious person, but I did grow up in the church. And I think when I was trying to pray, I didn’t know necessarily what to say. There’s this idea of talking to God where you’re like, oh, I really better say something good. I’ve got the big man on the phone here.

And it can be kind of intimidating. It can be hard to think through how best to express yourself. And so I actually like the idea of working with an LLM to generate prayer and to kind of figure out your feelings and guide you and then maybe using that as a stepping stone into your spiritual practice.

casey newton

I agree with you. I think that this is a very good use of AI. There’s this term that gets thrown around — and I hate the term, so I would like to come up with a better one. But people have started to call it a thought partner. Have you heard of this?

The basic idea is, you’re writing something. You’re working on a project. And you just want something that you can bounce some ideas off of. You want someone who can help get you started, give you a few ideas. And a prayer is a perfectly reasonable place to want a thought partner, right?

So I’m sure on the entire internet that these models have been trained on, there are a lot of prayers. And the idea that you could just kind of get a few ideas and get some text, for free, to consider and tweak to your own liking, that seems like a wonderful use of AI to me.

kevin roose

Yeah, so before I was a tech journalist, I spent some time as a religion journalist. And one of the things that I think AI is going to be very good for is devotions, this daily spiritual practice where people who are religious, they’ll meditate, or they’ll pray, or they’ll do a daily reading.

They actually sell these books called devotionals where every day of the year you have a different thing that’s personalized to what time of year it is or what might be going on in your life that you might need some special guidance on.

And so I think that is actually a really good use case for AI because it could personalize — it could say, it looks like — I don’t know — it could say it’s spring. And sometimes you have seasonal depression, and so maybe you’re feeling a little bit better. So here’s some guidance that could help you think through that transition. I can think of all kinds of ways that spiritual life could be affected by large language models.

casey newton

Yeah. All right, Kate, we have one more hard question for you. This one came over DM. And they said, quote, “My best friend’s dad said that he used ChatGPT to write a Mother’s Day card for his wife and said it was the best one he has ever written, and she cried.” And this person’s question is, “Should he tell her?”

kate conger

I don’t know — obviously not! Don’t tell her. Don’t tell her.

casey newton

Why not?

kevin roose

Because people buy Hallmark cards all the time, and implicit in the card is that you did not write the text that comes pre-printed on the inside of the card. The reason that we have a greeting card industry is because people have trouble expressing themselves. So the idea that you would just use a tool of the moment to generate something that feels authentic to the way that you feel about your own mom is completely fine.

It’s like you express something — presumably, if it said something you didn’t agree with, you would have changed the words. But it actually turned out that a lot of people love moms in similar ways, and ChatGPT was able to articulate that, so why tell her?

casey newton

My next question is, do you think the greeting card industry is going to be disrupted by AI?

kevin roose

I hope so, and here’s why. I bought a Thank You card the other day at a local pharmacy, and it was $8. And I about lost [LAUGHS]: my mind.

I thought how could — all it said was “Thank You!” And on the inside, it said, “You’re one in a million.” And for that, $8. Come on.

kate conger

Was it a cute design?

kevin roose

It was a very cute design.

kate conger

Oh, OK.

casey newton

Could you have done better in Midjourney?

kevin roose

[LAUGHS]: I could, but I don’t have a printer. You don’t want to get a printer.

casey newton

That’s true.

kevin roose

I’m a millennial —

casey newton

Who has a printer these days? I do think that the AI generated greeting card is going to be very funny because it will make mistakes. People will be wishing someone a happy birthday, and then it’ll just veer off in paragraph 3 and start talking about —

kevin roose

Well, that would be — if somebody wished me a happy birthday with a card that was based on ChatGPT, and it just invented a bunch of things that happened in our friendship that did not actually take place, that’d be hilarious and wonderful to me. “Remember that time we went to the moon?” Like, I love — please.

casey newton

I did run an experiment. I’m giving a talk on AI, and I was trying to find some examples of where AI models have improved over the last three years. And so I ran this prompt through two models, one was GPT-2, which was a couple of generations ago, and one was GPT-4.

And the prompt I used was “Finish a Valentine’s Day card that begins ‘Roses are red. Violets are blue.’” That’s all I gave it. And GPT-4, the new model, said, “Roses are red. Violets are blue. Our love is timeless. Our bond is true.”

kate conger

Oh, very good.

kevin roose

Beautiful.

casey newton

GPT-2, the four-year-old model, said, “Roses are red. Violets are blue. My girlfriend is dead.”

Wow. It sounds like a Paramore song or something.

kevin roose

So I think it’s safe to say that these models have gotten good enough to replace Hallmark greeting cards just in the past few years. But before that, you would not have wanted to use them for anything like romance.

kate conger

I do feel like this one is similar to the prayer thing, where it’s a high-stakes scenario. You’re trying to figure out what to say. And if it helps you get to the emotional truth that you’re trying to express, sure.

I think my question is, does mom understand enough about how these models work to understand that dad was there trying to work through his feelings and find an expression that felt true to him? Or is it going to feel like he went out and Xeroxed someone else’s Mother’s Day card and handed it to her?

kevin roose

That’s what I —

casey newton

Well, here’s what you want. When you read the text that ChatGPT has produced for the Mother’s Day card, you want to cut out the part where it says, “I am a large language model, and I do not understand what motherhood means.” Cut out that part and just leave the nice sentiments, and then you’ll be in good shape.

kevin roose

I think this actually — this is going to be a fascinating thing to observe because what we know about things that get automated is that they become very depersonalized very quickly. Do you remember a few years ago when Facebook had a feature that would alert you when it was your friend’s birthday?

That was a nice feature. You remember someone’s birthday. You’d write on their Facebook wall “Happy Birthday.”

casey newton

90 percent of the birthday greetings I’ve ever given in my life were because of that feature.

kevin roose

Right. So then they did this thing where they started auto-populating the birthday messages, where you could just have it happen automatically — you basically —

casey newton

“Have a good one, dog!”

kevin roose

Right.

[laughs]

You could just do that 100 times a day for everyone’s birthday. When that happened, it totally reversed the symbolism of the Happy Birthday message that you got. When you got a birthday message from someone on Facebook, you knew that they actually weren’t your friend because they didn’t care about you enough to write a real message. They were just using the auto-populated one.

So I actually think this is going to happen with all kinds of uses of AI, where it’s going to be like, did you just use ChatGPT for this? And it’ll actually be a more caring expression to handwrite something. Put some typos in or something, where it’s clear that you actually did this and not a large language model.

casey newton

Yeah, it’s a great time to learn calligraphy.

That’s all I have to say about that.

Kate, thank you so much for joining us for Hard Questions, and we hope you’ll come back sometime.

kevin roose

Yeah, thanks for being our ethics guru.

kate conger

Of course, I’m happy to be here. Can we listen to the Hard Questions rock song one more time?

casey newton

Oh, yeah.

kevin roose

Yeah. [MUSIC PLAYING]

voice

Hard Questions.

kate conger

So sick.

casey newton

I’d love to hear that with the new lyrics, “Roses are red. Violets are blue. My girlfriend is dead.”

kevin roose

All right, thank you, Kate.

kate conger

Thank you.

casey newton

Thank you.

kate conger

Bye, boys.

casey newton

Bye. [MUSIC PLAYING]

kevin roose

That’s the show for this week. And just a reminder, as we said last week on the show, we are asking for submissions — voice memos, emails — from teenage listeners of this show about how you are using social media in your lives and what you make of all these attempts to make social media better and safer for you.

casey newton

And particularly, if you actually enjoy using social media, and you feel like it’s brought something good to your life — we actually haven’t heard from any people who think that way yet. So if you’re one of those folks, please send in a voice memo.

kevin roose

But you’re probably too busy refreshing your Instagram.

casey newton

Yeah, put down Instagram.

kevin roose

Yeah, email us instead. [MUSIC PLAYING]

“Hard Fork” is produced by Rachel Cohn and Davis Land. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Marion Lozano, and Sophia Lanman.

Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.

[MUSIC PLAYING]