Opinion | The Culture Creating A.I. Is Weird. Here’s Why That Matters.
[MUSIC PLAYING]
So this is the second in a little California twofer series. First with Scott Wiener. That episode came out a few days ago. And it’s all about the politics of California and the politics of the Bay Area. And this is more about the culture of it, and the culture of its weirdness, and what emerges if you’re willing to take that seriously.
Something about Northern California culture, in particular, is it is this very strange braiding of technology and engineering and capitalism and mysticism and openness — a radical kind of openness. And all that has created the technology industry. It is creating A.I. now, which is obviously going to be a major topic of this conversation with Erik Davis.
And to take California seriously and to understand what makes it special, and what makes it frustrating, and why what happens here happens here, I think you have to take the weird quite seriously. And Erik Davis is a guy who takes the weird quite seriously.
He is a historian of California counterculture. He is trained as a religious historian. He’s written books like “TechGnosis” and “High Weirdness.” And he’s tried to make weirdness into an interpretive framework and to understand the role it plays in this place that he loves and chronicles. And I found his work — it is very weird, but I find his work very, very helpful.
And trying to understand how to maintain an openness without losing a skepticism is a really important talent, a really important discipline, a really important practice. But I think this is actually pretty helpful for understanding why so much strange and so much powerful technology comes out of such a small area of the globe.
As always, my email is ezrakleinshow@nytimes.com.
[MUSIC PLAYING]
Erik Davis, welcome to the show.
It’s great to be here.
For a lot of your career, a lot of your older books, I understand at least part of what you’ve been doing as being a theorist of California culture. And then the last book, and I think threaded through a lot of your more recent work, is becoming a theorist of this idea of weird or weirdness. And those seem very connected. So what is the word weird? What is the concept weird in your understanding? And what makes California weird?
Yeah, that’s a really good one. Yeah. One of the things I was doing with my decision to make the word weird mean more, which some people have already been doing for a while, but there’s a sense that it has something for us now that it didn’t have before. And one dimension of that is simply that there’s more substance to it than we allow.
And one way of looking at that is that if you pay attention to how people use the term colloquially, what kinds of things they put in that category, you start to realize that it does a lot of work, but sort of off to the side. It’s the place you put things that are uncomfortable, awkward, strange, maybe a little gross, kind of fascinating, spooky. It covers a strange range of things. And I realized that there was a lot hidden there.
So I said, well, let’s actually look at this word. Where does it come from? How does it evolve? What are the concepts associated with it? And you go back to Shakespeare with the weird sisters. You find there’s this whole marvelous underground history of how the ideas of fate merge with ideas of the uncanny and the spooky, and then increasingly the bizarre and the pulp and the perverse and the macabre.
And in that current, there’s something about Bohemia. There’s something about the night sides of consciousness, the edges of our culture that gets articulated and expressed in a way that because it’s been kind of hidden, unlike other terms that we might pay attention to — you can contrast it very interestingly with the idea of the uncanny, for example. But because it’s been kind of hidden, it has a lot for us now. So that’s something about the general term.
I’ve been trying to think about why I find your work very helpful for being a Californian and for understanding California. And one thing I’ve come to is the idea that you can have two relationships to that bucket of the weird, which is one relationship is dismissal. To say something is weird is to put it out of sight, to brush it off of your reality. And the other is an attraction to it, an orientation towards it. To say something is weird is to say it’s alluring, it’s seductive.
And when I think about what is different in, particularly, San Francisco, though, I think more broadly quite a bit of California, is this interest in the weird, this openness to things that other people would dismiss. If you think that is right, I’m curious why you think it is right.
Oh, yeah, I do think that’s very true. And I like that the ambivalence of it is really part of the power. I mean, California is a really — it’s an unusual place. It was unusual almost from the get-go. And it has a lot of really intense polarities in it culturally, how it defined itself. I mean, even today when we talk about it, we talk about it from a coastal point of view when it actually has all of this — some of the most righteous racists and reactionaries were also nurtured in its climes.
So it’s a place of polarity, but particularly a place of what I would call mutation, meaning that when it was recognized as a site of potentially great transformation and great exploitation, it was from the get-go seen as a place to work, to process, to develop, to transform.
I mean, we think about 19th century images of California, and we might — the pop story is Yosemite. And the great natural wonders. And it’s just gorgeous. And the weather is like Greece. And it’s like this beautiful bucolic place. But it was one of the most rapidly industrialized places in the history of the United States.
So it was always seen as material to work. And at some point, it became clear that part of the material to work was us. And so the development of media, the development of Hollywood; photography developed here; aspects of television — there’s a lot of technological development that happens here. That’s also about mediation and how we understand ourselves and how culture transforms.
And in a lot of ways, we’re living globally in a construct that was filtered through this peculiar space, which has, again, a popular story about exploration and novelty and courage and innovation — and then another story that’s more desperate and strange. It’s like if you take the almost mythological idea that the West expands westward, doing its settler colonialist thing in all its glory and horror, until it slams into the Pacific, the largest body of water on the planet, and goes, well, now what?
What do we do now? Where do we go now? Do we go into aerospace or go into the heavens? Do you go into media, so you build a virtual reality? We move into the computer spaces. These are all places that end up being inflected with a certain kind of exuberance and a certain kind of anxiety and even desperation about what is the human now.
And do you go internally into consciousness and meditation and psychedelics?

But I want to pick up on something you said at the beginning of that answer because I think it’s important. I think the stereotype of California is that what is getting mutated, evolved, created here is some form of cultural or even political liberalism.
And I think what is often missed is how much of the modern right is birthed here — Ronald Reagan, Richard Nixon. If you go into the modern right: Breitbart, Ben Shapiro and The Daily Wire. I don’t think it’s entirely fair to say he’s on the right, but Joe Rogan was taping out of here for a long time. That whole thing of the intellectual dark web was based here, to the extent it was anywhere. And the Claremont Institute, home to the central theorists of the Trump era, to the extent there are any — they’re in California.
And there is this really, I think, interesting way in which the boundary-pushing cultural liberalism of California has also created its own reaction on the right. And both of those have become very important in national and international politics.
Yeah, I’m really glad you pointed that out. It’s totally true. You have liberalism. You’ve got the various forms of intense conservatism. And then there’s just the question of what do you do with libertarianism? And that’s, in a way, part of the secret key. And that’s also part of the way that the hippies become the cyber capitalists.
It has to do with aspects of libertarianism that I still think we don’t really wrestle with adequately in American history. And nobody else on the planet even understands it. Because they’re like, what, libertarian — what? Is it anarchism? Is it right-wing extremism? What do I do with this stuff? But there’s something about the way that the social logic of libertarianism overlaps both liberalism and a certain kind of conservative intensity that I think also gets us something very specific about the state.
So I grew up in Irvine, which is — when you talk about California, Irvine and Orange County, particularly in the era in which I grew up there, were much more right wing. We did not elect our first Democrat to Congress until Katie Porter, which I think was in 2018, if I’m not wrong. And that’s a very different culture. I mean, the aerospace industry, the defense industry is down there. There’s a lot of Californias.
But one thing that I think a lot of people mean when they talk about California culture is really Northern Californian: this strange braiding of what some people call consciousness culture with technology and money. And that’s a really potent combination now.
But one of the things I think is interesting about it — and it’ll bridge us to maybe some of our main topics here — is the way in which that openness to strange things has been an accelerant to the technological industry here. That openness to ideas that sound weird, people that seem weird, has been a kind of secret sauce in attracting the folks and the industries that have continued inventing some of the biggest companies and technologies in the world. So how do you understand the interaction of weirdness and Silicon Valley, in particular?
Yeah, well, I think it’s — you probably have to get into the nitty-gritty of engineering culture, nerd culture, and then just the fact that a lot of the psychedelic movement and the consciousness movement that was related to it in so many ways — much more deeply than we often kind of remember or imagine — that those also had, if you will, a technical dimension to them. That there’s a protocol logic to the weird.
It’s not about necessarily having a stance or having a concept this is how the world is. It’s more like, hey, we can play with this. We can manipulate this. We can hack this. There’s a relationship to the possibilities of reality that has an open-ended experimental quality that almost inevitably invokes bohemian traditions, ideas of anarchy, of play, of the unknown.
And all of those — you can romanticize them or you can just look at them more pragmatically, where they’re just — there’s a material. We’re not satisfied. Let’s work with the material. Let’s see where it goes. So if you trace these lines back — if you look at the history of the personal computer and what are all the elements that are going on in the 1960s, you’re going to find these zones where there’s an engineering mind-set overlapping with a, like, let’s-change-reality mind-set. And they’re both practical and visionary and also playful, if you will. And part of the weird is a kind of playfulness. You just don’t know, and it might go south or be strange, but it’s part of an experimental ethos. And over time, then, that gets — becomes more and more coherent, and more and more visible, more and more obvious that this is the thing to do.
That I’m not just going to go to Burning Man. I’m going to get my whole staff to go to Burning Man because it just opens your mind. And whether you see that as a petri dish of future products and future consumers, or you see it as a kind of edge condition of capitalism or technology or culture where inventions happen on the fly and a lot of them go south — that’s part of what the game is. There are a lot of oddities along the way if you’re going to find some kind of novelty.
So it gets selected in a way that, for someone like me who is more interested in the bohemian cultural consciousness side of things, can also look insidious — like, oh, we’ve gotten really good at learning how to capture and exploit these elements. But that’s also not an entirely accurate way to look at what’s happening.
So let’s talk about A.I., which is a place where I’ve reached for the metaphors of the weird working off of some of your work. And you’ve done a lot of thinking there now about A.I. and weirdness. So what makes A.I. weird?
Ah, that’s such a good question. I’ve really been thinking about that a lot. I think part of it, if you — you have to go, what is the weird? And one of the ways of thinking about it is that there’s something here that challenges my set assumptions about how things work. And it has an additional quality, let’s say, of some kind of uncanniness: something that is not simply confusing or alien, but has a familiar unfamiliarity to it.
And I think the most obvious place to look at it is — and the first place — well, I’ll just explain personally. So I kind of ignored a lot of the A.I. stuff. I’ve known about A.I. safety issues for a very long time. And I know people who are really into it. And just I knew a little bit about it, but it didn’t hit me really. I almost consciously avoided it because I felt that it was going to be something that was going to take over my imagination and mind, and I was going to have to pay a lot of attention to it.
And so when I finally read this book, “Pharmako-A.I.,” by K Allado-McDowell, which was co-written with GPT-3, and K has a really — they are interested in shamanism and ayahuasca and the future of humanity and all these very, very Bay Area topics, all woven together in this bizarre braid. And I’m reading the book. And then I’m reading GPT-3. And I can see the way it’s kind of a collage.
And then there’s a statement that hits me. And I slip into projecting, constructing an author, or a sense of an author, that is almost immediately — the rug’s pulled out from under it. And I’m left in this space of ambivalence, but particularly about agency. And there’s the sense of — an almost animist sense that there’s something going on here that’s more than just pattern recognition and an algorithm choosing the next best word.
And you can intellectually lay that back on and go, OK, this is just a machine. It’s just operating. It’s read the whole internet. It’s just making a really good guess. It just has that feel. And you’re like, OK, but that’s not at all what’s happening emotionally or even spiritually in that response.
And that’s just one example. I think it’s a particularly concrete one of where do we locate the agency if we’re really trying to stay in a critical mind-set? I mean, some people are just like, sure, I’m just talking to the machine. No problem. I’m just talking to a chat machine. No big deal.
Yeah, but if you’re trying to deconstruct it and at the same time recognizing its interactive dimension, well, then we’re in this animist space where I’m not so sure if that doll in the corner is actually animated or not. And that’s a very classic site of the uncanny. So suddenly there’s an uncanniness in the midst of this highly commoditized, major, major world-changing machine. That is — well, that’s pretty weird.
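A quick illustrative aside: the “choosing the next best word” loop Davis describes can be boiled down to a toy bigram model. This is a minimal sketch for illustration only; the sample text and the generate helper are invented here, and real systems like GPT-3 learn a neural network over vast corpora. But the generation loop has the same basic shape: look at what came before, pick a plausible next word, repeat.

```python
# Toy sketch (illustrative only): next-word prediction via a bigram model.
# Real models learn a neural network over vast corpora; the loop is the same:
# condition on what came before, pick a plausible next word, repeat.
import random
from collections import defaultdict

sample_text = "the weird is the place we put things that are strange and the weird is alluring"

# Count which word follows which: a crude stand-in for "reading the whole internet."
follows = defaultdict(list)
words = sample_text.split()
for prev, nxt in zip(words, words[1:]):
    follows[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Repeatedly pick a likely next word: 'a really good guess.'"""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the weird is the place we put things that"
```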
It’s why I like a quote from you, which is that weird things are, quote, “anomalous, they deviate from the norms of informed expectation and challenge established explanations, sometimes quite radically.” And that felt very true to me here on two levels.
One is one that you’re getting at here, which is, when you talk about the norms of informed expectation: when we interact with anything that has the facility with language and the ability to work in context that these chat bots have, we assume agency. Our informed expectation is that there is something we would call a mind on the other side of that.
And you can go way too far with that and assume sentience and consciousness, I think. You can go not nearly far enough and just say, oh, this is an autocomplete and you’re an idiot for getting fooled by it. But it’s why I think trying to exist in a space of this is challenging, this is strange is helpful.
But then the other — when you talk about how weird things challenge established explanations — we don’t have good explanations of what’s going on in these systems. And so this world where more and more might get turned over to them is a world where we might lose — and I think this is actually one of the possible coming traumas that people are not quite paying good enough attention to — we might lose a lot of the legibility of our own society. And you can say in certain areas of science we already have, right? We don’t really understand quantum physics, that kind of thing.
But just — our kids will have friends who they understand to be friends, operating on their phones, but we won’t know why those friends, those inorganic, whatever they are, intelligences, operate the way they do. That loss of being able to explain the world around us at any level of granularity is more profound than I think people are giving it credit for. And it deserves more reflection than I think people are giving it.
No, absolutely. I mean, I couldn’t agree more. I remember the article I read 12 years ago where that shift became clear to me. Oh, now we’re getting to the place where you can’t explain the outcome because of the complexity, because of the alienness of the operation, because of the density of the data. I mean, I almost felt it like a kind of nausea because it’s really significant.
And most of us, we’re not scientists. We are all used to living in a world where we don’t understand how our phone works. We trust that the guy who makes the phone knows how the phone works, so I don’t worry about it. Or I trust that the scientist that I’m reading about knows something.
And that kind of trust has obviously shifted more than we might have imagined. But we still operate in that zone so that such a radical shift in scientific production, technological production wouldn’t really hit us personally.
I may not know how the phone works, but Bob knows how the phone works.
You know, and then when that Bob doesn’t know, and when Bob’s like, I don’t know, we’re just going to ride this thing, one of the things — and this is where we get back to the weird — is that that is such a significant shift away from a deep, modern archetype of knowledge and power.
Because one of the things that I’ve been tracking is how when people try to talk about or articulate all of these very complicated, unnerving and urgent issues that we’re facing now, when and where they grab for myth, when they look for words like summoning or the golem or these sort of — those things are not insignificant. They might just be, oh, well, we’re just trying to illustrate or have a common cultural signifier for these processes.
And I’m like, well, yes and no. I mean, in a way, my whole work, my whole attitude towards technology has always been about finding those mythic dimensions and then taking them seriously, but not literally, but to see the way that they operate and what stories they tell.
And so I do think that we are in a situation where that zone of the weird, the sorcerer’s apprentice moment, where there’s a shift in the power and the thing that we have created moves outside of our direct control or even understanding. That has really profound implications, and it signifies, to my mind, the beginning of the end of a certain kind of arc of human production and experience.
We can think about it in terms of Enlightenment values. We can think about it in terms of the emergence of modern science. There’s some way in which we’re closing something, and the consequences of that, imaginatively, politically, they’re going to be very, very significant. And part of that has to do with this loss of Bob knows what’s going on. [LAUGHS]
[MUSIC PLAYING]
I don’t think everybody listening is going to love this area because I’ve over time cultivated an audience that likes things to be concrete. But I am always struck by the dissonance between the technical illegibility and the mythical legibility of these systems.
And in particular, it is the most mythed and storied-up area I have ever seen or covered. I mean, we have however many decades of sci-fi. We have Ultron, and HAL, and Skynet, and the Matrix, and Asimov’s laws of robotics. And then going backwards, we have fantasy, and summoning, and then people talk about the golems, and the sorcerer’s apprentice.
And I’ve recommended it before, but there’s Meghan O’Gieblyn’s great book, “God, Human, Animal, Machine.” It’s all about a lot of the Christian mythology operating in a sub rosa way here. The singularity is a very mythic concept, very eschatological, very similar to things you’ll see in rapture narratives and so on.
And there’s this way in which we don’t understand these systems that well. And then we perhaps understand them all too well. I mean, you could argue we’re getting trapped in stories. That maybe it doesn’t net out that way at all. Maybe this stuff tops out at a fairly low level. It’s a pretty good chat bot. And we’re not able to get to these superintelligences. And we’ve led ourselves astray with however many years of imagining what we could create.
But it’s something I’ve appreciated about your work because I notice that just traveling through this world how mythed up it is, how much people are operating with stories running in the back of their minds, both consciously and unconsciously. And those stories are creating a lot of interpretive framework because they’re standing in for things we actually don’t yet know how to interpret.
Yeah, or might not be able to. There’s always a place where we’re dealing with these changing human models and cognitions. And now we know that as the technologies — not just in A.I. — get more and more powerful, we’re like, how do we wrap our heads around this?
Well, we’ve got all this science fiction lying around — that makes sense. So there is this problem about self-fulfilling prophecies, about getting caught by narratives that then shape your view so much that you’re not able to see other developments. So we absolutely have to be aware of these things.
And a lot of my work has been kind of like a two-step process, where, on the one hand, I’m even more open for the mythological potentials, the speculative possibilities, the wild dreaming things than your standard culture critic. And at the same time it’s like, yes, and we must deconstruct and see what that story is telling us because we are in a place of self-fulfilling prophecies.
Well, and that’s particularly true for this technology. I largely agree with people who say that no technology is truly neutral. But compare a semiconductor or a band saw with an A.I. trained on human language, where all these myths and all these stories are in the training set. When we ask it to basically act like an A.I., what it understands — again, these verbs are tricky here — what it is able to reflect back at us, because of the way it has pattern-matched across our language, is the stories we have written about how A.I.s interact and act in relationship to human beings.
There is this funny sense, particularly with large language models, where the more we have story told about them, the more they are trained on the stories, the more we have created a thing that is our own imagining of the thing that we have created.
It’s why I’ve always had a slightly different take on the Kevin Roose Bing Sydney conversation that went very viral at The New York Times. I mean, it was very clear to me that whatever was powering Microsoft’s chat bot had read enough, had been trained on enough data about rogue A.I.s slipping the reins, that it knew how to answer the questions he was provoking it toward. And that’s, again, just very weird.
Yeah.
Right? This thing reflecting ourselves back at us and our stories about it. It’s a very non-neutral technology.
Absolutely. No, that’s a wonderful example. I mean, one of the things that my work is motivated by is that there’s better or worse myths that you can bring to trying to understand things that have some kind of mythological dimension. And in this case, I would say that the A.I. is — it doesn’t have agency in the way that we keep assuming that it does.
So when people try to find its real motivation, it’s such an easy model to get into. And like you say, it’s just simulating a character that is responding to what it perceives as your question. So what you have then is this immense series of simulations, of characters based on stories. It’s a Proteus. It’s not an evil genius. What it’s doing is responding and reflecting and circulating. And indeed, one of the — I think the greatest things about the technology, from a humanist point of view, is that it forces us to think about all this stuff really seriously, really strongly. What does it mean to have our concepts about reality fed back to us in this way? How do we trust stories? Are we made of stories? All of these kinds of chin-stroking questions become a lot more pertinent right now.
I guess one part of it I’d want to open up a little bit — and you can tell me if this is what you’re saying in a way or it’s actually a totally different point — but I think of one of the central — I can’t decide if I want to say blind spots or divides in the way California, in the way Silicon Valley thinks about technology as being about whether or not we create technology and then it is ours to control, or we create technology and then it acts back upon us.
I mean, you can say, look, it’s just trained on our language, and so it is under our control. It’s simply parroting us back at us. But the idea that it will stop there, the idea that that is somehow a way of putting it in a box, and then you can be like, OK, it’s safely in the box — that’s a pretty profound mistake.
Twitter changes people. Facebook changes people. The internet changes people. I mean, I’ve become a big Marshall McLuhanite in my middle age. And this idea that these mediums and technologies always change the people using them, in some cases more so than anybody realizes, is, I always feel, the single biggest misunderstanding among technologists, who you’d think would know better.
Yeah, I’m surprised at that as well. I mean, I was a Marshall McLuhanite from the get-go and always very interested in precisely the ways that there were unexpected affordances to media technologies, new technologies that shifted not just what we did or how we even imagined culture, but who we are, how our brains are constructed, how our senses of the world are constructed.
And to have respect for the unknown outcome — that’s something that’s always terrified me about contemporary technological development: the lack of that respect, whether you want to think about it as hubris or a certain narrow-minded belief that we can control the meanings and effects of these technologies, and the willingness to just throw really powerful things into society in general, to just treat the whole thing as a petri dish, a competitive petri dish, too. And in some ways, it’s just inherent in the way in which we ended up expressing capitalism, in terms of how the competition operates.
But it has often surprised me how technologists themselves don’t necessarily acknowledge it. The people I’ve always known who were most interested in that had that kind of McLuhanesque view, which, in a way, is kind of an animist view.
There’s something in the object that has its own — it’s got its own story to tell. It’s going to make a move. And we’re in a relationship. There’s an interactive relationship with these things that have effects that we cannot predict. And we have to work it out over time and work it out in a way together.
But there’s so little time for the degree of transformation that we’re making with these new technologies that the blinkered view that we can control something as powerful as large language models released into society at large almost seems like one of those delusions that people have that enable them to continue to function in their jobs. But it’s very hard for me to appreciate that.
So to be a McLuhanite for a minute and to take his famous saying, “The medium is the message”: if the medium is the message, if the medium encodes certain ways of being and thinking that change the people who use it, what do you think the message of the A.I. chat bot medium is? Which, it’s worth noting, is a medium being built on top of a technology. Chat bots are just one of many, many applications. And the fact that that’s the one taking off is going to shape the technology differently than it might otherwise have been shaped. But yeah, what’s the message of the medium?
Wow, that is an extraordinary, extraordinary question. I must admit I’m spinning a bit here because there’s so much going on.
Let me try one on you, which is that something that I think is very present in the way people are thinking about A.I. is the idea that the output is what matters, and that the work of knowledge, of creation is this kind of — you run a search on the information in your head, and then you — or maybe now, to make your life easier and quicker, the A.I. spits out the output. It condenses it down and more or less predicts what you need.
And there’s much more, I think — if you pay attention to yourself as a human being — mystery in that process. So one thing I am skeptical of is that A.I. is actually going to make people better at as many things as they think it will. For instance, the work of writing — and Ted Chiang, the sci-fi writer, has made this point — the work of writing a bad first draft is not just a waste of time on your way to a good fourth draft. It is often an intellectual space in which you realize you shouldn’t be writing that draft at all, in which you realize actually you should be doing a totally different piece, in which you realize something that you had never thought of, something that isn’t within the training set for that draft, is actually relevant here. And it’s that kind of mysterious, intuitive sense that leads to the creation of great work, and often also just decent work.
The number of times that I’ve been driving around in the car and come up with the column idea that turns out to be exactly what I needed but very different than what I had when I got in the car is many. And that idea that we can just outsource that, I think it’s a way of thinking of ourselves as computers as opposed to the more slightly mysterious creatures we are.
But by applying that analogy then onto the computer and suggesting that people are going to be so much better off if they have the A.I. write their draft for them, I mean, that, to me, is a way we could then change in that direction. If people actually begin doing that and stop doing that work themselves, some things will be gained. It will be quicker to summarize a bunch of data into an output. But it’s also making yourself more like an A.I. and less like a human being.
Yeah, it’s remarkable the way in which there’s an invitation to let go of a certain space of the unknown, the mysterious, the novel, the unpredictable in our own minds. And perhaps one scenario is that it becomes clear that these things are insufficient. And so we become even more aware and we honor that aspect of ourselves even more. But there’s also the possibility that it wasn’t necessary all along.
And it is easy to imagine a situation where we become used to offloading more and more and more decisions, and thereby accepting that we, too, are predictable machines. So I think maybe part of the message is that it has to do with prediction and pattern, and with what is in us that is not predictable, that is not pattern. And then, can we isolate that? Can we put our finger on it? But isn’t the very act of putting the finger on it part of the loop? Where do we put it?
And where do we use that as a way of saying, enough, or, like, no, I’m not going to do this, no, I’m not going to turn — even now I’m very aware of when — what is it — like 80 percent of things people watch on Netflix are based on the recommendation engine. It’s like, oh, that’s a lot, you know. I grew up — my whole world was cultural recommendations, opinions, how people constituted themselves through their taste, through turning people on, all these forms of sociality.
This whole world is already kind of passing as we just feed ourselves, and we give up that kind of human negotiation of decision, of taste, of options. So we can already see that happening. And again, my hope is that it just makes it more obvious, let’s say in things like writing or poetry, where we have a revenge of the humanities, in the sense that those things the humanities are pointing to, that’s precisely what eludes the repetition, at least hopefully. [LAUGHS]
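An aside on the “recommendation engine” Davis mentions: the basic logic behind such systems is collaborative filtering, recommending what similar viewers already watched. Here is a minimal sketch under invented data (the viewer names, the titles, and the recommend helper are all hypothetical); Netflix’s actual system is far more elaborate, but the idea is the same.

```python
# Minimal collaborative-filtering sketch (hypothetical data, not any real system).
import math

# 1 means the viewer watched the title, 0 means they did not.
ratings = {
    "alice": {"drama_a": 1, "comedy_b": 1, "thriller_c": 0},
    "bob":   {"drama_a": 1, "comedy_b": 0, "thriller_c": 1},
    "carol": {"drama_a": 1, "comedy_b": 1, "thriller_c": 1},
}

def similarity(u: dict, v: dict) -> float:
    """Cosine similarity between two viewing vectors."""
    dot = sum(u[k] * v[k] for k in u)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def recommend(viewer: str) -> str:
    """Suggest the unwatched title most favored by similar viewers."""
    scores = {}
    for title, seen in ratings[viewer].items():
        if seen:
            continue
        scores[title] = sum(
            similarity(ratings[viewer], ratings[other]) * ratings[other][title]
            for other in ratings if other != viewer
        )
    return max(scores, key=scores.get)

print(recommend("alice"))  # -> "thriller_c": what her nearest neighbors watched
```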
You quote a 2017 blog post by Sam Altman, who’s the C.E.O. of OpenAI, and I want to read a bit of it. It begins, quote, “A popular topic in Silicon Valley is talking about what year humans and machines will merge, or, if not, what year humans will get surpassed by rapidly improving A.I. or genetically enhanced species. Most guesses seem to be between 2025 and 2075.”
And then he goes on to say, quote, “Although the merge has already begun, it’s going to get a lot weirder.” There’s that word. “We’ll be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.”
So one distinctive thing about this culture — you hear it in that Sam Altman post — is it’s a bunch of people building something that they think has a non-trivial chance of wiping out or otherwise displacing humanity. And you might say that’s a weird thing to build. I probably would not build something that I thought had somewhere between a 10 percent and 30 percent chance of upending the species. So you’ve done a lot of writing here. One of your early books, “TechGnosis,” was about the Gnostic mind-set. And I think that’s one way of understanding what is happening here. So can you talk briefly about the Gnostic quest for knowledge? What that was, and how it might be a helpful interpretive framework here?
Yeah, absolutely. I mean, again, it’s important to emphasize that I’m using this as a model for thinking, a pattern, an archetype, if you will, rather than something specific about second century curious Christians. But the idea — boiled down to its mythic core, the idea of Gnosis is that there’s an order of knowledge that is transcendent to the world that we live in. And that often the world that we live in is seen as a mistake or a trap even, perhaps even constructed by a lower or evil deity.
So the idea shifts — if you imagine the Christian story is sin and redemption, the Gnostic story is ignorance and awakening, or ignorance and knowledge — a kind of higher knowledge. We’ve got to get out of here, and the only way out is up.
And one of the points that I made in “TechGnosis” is not unlike O’Gieblyn’s book where there are these religious structures and metaphors that recur with such intensity that we have to look at them as such. We can’t just say, oh, it’s just — oh, it kind of resembles this thing. Who cares? That was then. This is now.
And the Gnostic flavor is a kind of denigration of matter, of the conventional reality, of our bodies, and a willingness to put all your cards in some higher order. So the exuberant embrace of the idea of uploading ourselves into computers, which starts becoming a cultural touchstone of singularitarian ideas in the 1990s or ‘80s, even though you can trace the idea back farther — that’s a good sign of that kind of attitude and that willingness to disidentify with our material conditions in a kind of transcendent mode.
And so what you can imagine happening is that rather than it being some kind of spiritual transcendence, it gets mutated, if you will, into a technological possibility on the forward timeline. So rather than being something that I can transcend now through various esoteric practices, instead it’s something inhering in the technological development that is going to produce a moment in the future of something like transcendence.
I think it’s really important to acknowledge the similarities, because at this stage in the game we need to intensify and deepen our sense of what the human is by embracing the whole course of what we’ve come through, whereas the desire you hear in a lot of these voices is: all that is just junk, it’s all B.S. We’re at this one inflection point of evolution, and we either jump on it or we don’t.
And I’m like, I don’t know, maybe we have unleashed an apocalyptic situation. But at the very least, we’ve got to take it all on board. It’s the whole story. All of the human experience is demanded at these kinds of points. And again, it might not work out. It might be that in 10 years we look back at this and go, oh, boy, we were huffing the glue. I don’t think so, but it’s possible.
But still, that doesn’t undermine what I’m saying, because it’s about, in a way, embodying the density and resonances of human beings and all of our relations at a point when the fundamental question of the human is raised in our face, culture-wide, in a big way.
Let me get to the huffing-glue question. I think there’s one version where these systems simply top out, the technology ends, fine. I think there’s another one, though, which you’re getting at in one of your recent pieces on the potential banality of all this, which is that you’re dealing with models trained on what we’ve already said and thought and done, and that are, in a protean, genre-imitating way, mimicking it back at us.
And so, very far from making the future unbelievably different from the past, what it will do is make the future more like the past. It will be a boundary on human creativity and change and transformation. You write, “Far from swerving away from a norm, these systems make the future by conservatively iterating the past. Even the apparent creativity of large language models relies on the novel shuffling of a gargantuan deck of cards that already exists.”
So I think that’s another way of thinking about what might fail here. That instead of being an opening to something totally different, completely unpredictable, it’s actually a narrowing to the completely predictable, literally built on prediction engines.
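One more illustrative aside, on what “the novel shuffling of a gargantuan deck” means mechanically: sampling with a temperature parameter reweights a fixed set of learned candidates. Raise the temperature and the output looks more surprising, but nothing outside the existing deck can ever come up. The deck distribution below is invented for the sketch; it stands in for a model’s learned next-word probabilities.

```python
# Toy sketch: temperature sampling reshuffles a fixed "deck" of candidates.
import math
import random

# A hypothetical learned distribution over next words: the deck.
deck = {"past": 0.5, "pattern": 0.3, "repetition": 0.15, "novelty": 0.05}

def sample(dist: dict, temperature: float) -> str:
    """Higher temperature flattens the odds, but the candidates never change."""
    weights = [math.exp(math.log(p) / temperature) for p in dist.values()]
    return random.choices(list(dist.keys()), weights=weights)[0]

print([sample(deck, 0.5) for _ in range(5)])  # mostly "past": conservative iteration
print([sample(deck, 2.0) for _ in range(5)])  # shuffled harder, but the same deck
```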
Yeah, absolutely. And is there something else going on in our own experience? It’s so clear that we are not just predicting, that we’re not just looping, that there is a space of novelty and of potential creativity, of wrestling with possibilities that is so intrinsic to how we operate that it’s very difficult to imagine collapsing that and leading towards something productive and something interesting.
And what does that look like on a culture-wide basis as we get used to enjoying cultural products that are produced by large language models and the way in which they recirculate, because you can cynically say, well, that’s already kind of happening. Look at popular music. What is popular music? Is it incredible acts of generative novelty? No, it’s more like mixes and matches of things, and a little bit of action thrown in, and a little bit of shifting here.
So it’s possible that, reshuffling the deck over and over again, the deck is big enough that there’s still going to be enough novelty to entertain us, let’s say. But it’s hard to square that with any more expanded view of what cultural products or cultural works do for us: great literature, great movies, whatever. It’s hard to see those as simply an iterative process; there’s some other dimension to them. And are we actually getting to a place where we start to recognize that less and less, where it’s sufficient to simply be entertained by the reshuffled deck? Or is it just going to be clear that there is this kind of difference that we’re losing?
Well, this gets to a third layer of weirdness that I think you get at in this piece, one that I found really beautiful, which I would call the turning around of the question. This idea that maybe it’s not A.I. that’s weird at all. Maybe A.I. is banal, or, if not predictable, a thin training on human data.
That what’s weird is human beings. And that the real thing that’s going to get highlighted here, ultimately, is not the weirdness of the chat bot, but the weirdness of the person on the other end. And that as A.I. colonizes some of the thin ways we’ve come to value ourselves, I think particularly through productivity, that it’s going to open space for more appreciation of the strangeness of human beings.
Yeah, I hope that that’s going to happen. And I think that a lot of the work to make that case is already happening. That the stakes are high enough that everybody’s playing their big game, which means that if you are on team human, as Douglas Rushkoff puts it, you’ve got to play the full hand.
Now’s the time to make the case. And also, to recognize inside yourself who you are, what you are. Where does efficiency stop as a value in your life and something else take off? Can we articulate those values, whether they’re interpersonal, whether they have to do with nature, or they have to do with how we relate with our own selves, with our higher potential, with death?
All of these elements that are clearly kind of weird from a machine point of view — look how these guys are behaving — those things will become more visible. And I think that part of even the discourse around it is starting to raise these questions in really interesting ways, which is itself indicating something.
[MUSIC PLAYING]
There was a piece by Jaron Lanier, who’s a — I’ll call him a techno-humanist philosopher, but he was one of the coiners of the term “virtual reality,” and a brilliant guy, and I love his work. He just wrote this piece in The New Yorker, and one of the things he says right at the beginning is that he wishes we would stop calling it artificial intelligence. What it is, is not intelligence. It’s this social layer of human knowledge working through a technology. And that’s paraphrasing his argument somewhat.
But this idea — and I’ve heard it from a lot of people — that I wish we wouldn’t call this intelligence: I think I might write a piece about this. But I think it is getting at something deeper, though a little backwards. I think these things are clearly, whatever they are, intelligent. I mean, they are working with information in a problem-solving way. It might not be conscious. It might not be sentient.
And so I’ve been thinking: Why are we so scared of giving up the term intelligent? Why are we so afraid that something else might get called intelligent? And I think it has to do with how much we have made that the dominant way we value humanity, particularly in a secular dimension. I mean, why is it OK that we treat cows and chickens and the natural world and other creatures the way we do? Well, we’re smarter, I guess? We’re smarter. That’s got to be it, right? Unless you have some kind of version of the soul, it has to be that we’re intelligent.
And so then if you give that up, if you believe these things are intelligent, and maybe they’re going to be more intelligent on some dimensions than we are, then you’ve lost something really profound. But that’s only if intelligence is how you value humanity, if you think the worth of a human being is their intelligence, which on some level we clearly don’t. I mean, we think children are wonderful not just because they might become smart one day, but because they’re wonderful. The way they experience the world is delightful.
There is something about how much we have dehumanized ourselves that I think is getting laid very bare in A.I. discourse. If we have such a thin ranking of our own virtues and values that these programs can destabilize it so easily, given how limited they are and probably will be for some time, I think it’s getting at something that is a little bit more discomforting, which is that we have valued human beings very poorly.
And it would take a lot culturally and maybe call a lot that we have done into question to value ourselves and other creatures in the world differently. But if we don’t, then we actually have no defense against at least the psychic trauma of this thing we’re creating, which is aimed right at our own definition of intelligence.
Yeah, my reaction to that is to immediately think about animals. Because it’s not coincidental perhaps. It’s sort of very interesting that just at the point where we’re wrestling with this question of machine intelligence and whether we can call it intelligence or not, we are just getting more and more proof that our definitions of human difference don’t stand up to the realities that animals live in, that they are.
And the different reactions that that brings up — and I’m thinking in terms of animals — is that exciting? Are we happy to welcome a much wider sense of cognitive potential and to willingly step down from the throne? Is it threatening because of all of the moral issues that it raises, particularly in terms of how we treat animals and the horrific extinction rate that we face on the planet?
So in a way, that’s already playing with this issue. And in a world where A.I. was driven by a different value set than five corporations battling it out almost like archons in Gnostic myths, completely unconnected to individual human value in some way, then I can imagine a relationship with machines where we play with these edges of, well, how intelligent are you?
Maybe it is a kind of agent. Maybe there is a reason to honor its decision making possibility. Maybe it even has rights. How do we start thinking about rights? The rights of robots, et cetera, et cetera. All of that kind of stuff makes sense because we actually are at the limits of the human in a weird way. It’s like we’re just immersed in this set of transformations — climate change, the shift of the human definition, intense hypermediation, which is digitally intensified. And we’re at this limit of how do we define ourselves.
And in a way, I think, if you have the time and the willingness, it forces a kind of existential reckoning. And, again, my hope is that we will see more and more engagement with this problem and hopefully some kind of re-evaluation of what it is that we do do, which has more to do with children, with play, with wonder, with exuberant celebration and with existential reckoning with the conditions that we’re in. To really — there’s not much we can do about a lot of the things that we face right now.
We want to be able to change that perhaps. But how do we reckon with our own kind of limitations? How do we honor ourselves in relationship to these? These kinds of questions I think are being forced in a way. So if there’s enough time and if there’s enough space, there’s a potential to revalue. But a lot of my fears come in with just the sheer speed and the onslaught and the fact that everyone is profoundly anxious.
Well, there’s a potential to revalue, but there’s also the — I think there is a lot of barely submerged guilt and shame and a truly vicious judgment humans are making on ourselves in a lot of this conversation. I mean, I think some of the fear is that if you created something smarter than we are, it would treat us the way we treated everything else.
How remarkable is the intelligence of an octopus? And how often do we eat it in pasta sauce or sushi? To say nothing of a cow, to say nothing of — and we’ve been better and worse about other human beings. I mean, we have done terrible things to people we think are less smart or capable than we are.
And even now — you don’t have to go I think far back — but the kind of life we will leave someone to in modern capitalist society, and other societies, if they are not analytically sharp and not hard-working and not enmeshed in some other kind of human network that will save them. We’re pretty brutal and have been for a very long time. And so I always think that that’s one of the hard things to face up to in this conversation. That I think if we weren’t that way, we might not worry that anything we would create would be that way too. But we know what we’ve done.
Yeah.
And we wouldn’t want to be on the other side of it. And we’re still doing it. And I don’t think that’s all that’s going on here, but I don’t think it’s an irrelevancy. I think the shadow of the life we lead — I think we know the cost.
Yeah. No, that would make sense. The chickens coming home to roost, to use an animal metaphor. Yeah, I think it’s very easy to be at a point where you look at the whole course of what we’ve done and feel finished in a way. [LAUGHS] My mom said once — she goes, yeah, human beings, we had our run. [LAUGHS]
And it was this weird kind of defeatism, not like Sam’s there, but something about the inability to reckon with the consequences of everything that we’ve done. And part of the frozenness I think we feel sometimes in terms of our own agency is just that we’re so aware of the consequences, and even aware of the limits of what we know in our own communities, and that inevitably affects how we imagine other forms of intelligence.
I want to end on a quote of yours, the one you end that piece on, which is that “As machines colonize the human, in other words, a more fundamental mystery may leak through: the weirdness that sentient beings are and have always been luminous cracks in an order of things no longer ordered.” Tell me about that.
[LAUGHS] Well, earlier we were talking about this question of intelligence. Can we call them intelligent or not? And I don’t have trouble calling a machine intelligent, because I have a model of the human being, of human consciousness, that’s multilayered. So intelligence I can even imagine as kind of a rational process. And I can imagine how a machine would do a rational process.
But I also believe or have experienced or have faith that there are other dimensions that don’t work along those lines and that we’re kind of polycreatures in a way. And that manifests in writing about difficult things. I mean, this stuff I find very difficult to write about and difficult to speak about because there’s so much on the table, so many different dimensions.
So one of the great things that writing provides — and in a way that I don’t think the LLMs are going to get to any time soon — is a way to answer its own question, or gesture towards the space of an answer, without filling it up with reasons. And so that is the crack. It’s like a crack in the machine. It’s like a glitch in the matrix. But it’s not just a technological product. It’s more like the Leonard Cohen line that everybody quotes: there’s a crack in everything; that’s how the light gets in.
Well, yeah, actually there’s a limit to all of our systems. All of the systems fail. There’s noise on the line. But that noise isn’t just a technical effect or an obstruction or entropy. It’s actually an opening to something else. And that’s just a kind of constitutional gesture I have towards the open, towards the beyond, towards what’s outside of the known.
But I think in this particular case, it’s really, really important to underscore it, and recall it, and gesture towards it again. Because I think that we have a chance for it to become more apparent now in whatever way that that manifests.
I think that brings up a big question of your work to me, which is an important framework and a dangerous one, which is how do you keep yourself open to the weird, open to strangeness, without tumbling off the cliff into what you might call the woo?
Oh.
How do you — I think, to me, the classic version of where this goes bad is quantum physics: it is weird and we don’t understand it. And therefore you get a lot of what some people call quantum woo, which is this: well, because quantum physics is weird, the ultimate nature of reality is kind of whatever I wanted it to be already. This sort of detachment.
When you know that what we can empirically prove and know is not the whole of reality, I think it’d be easy to lose any sense of skepticism and just get tugged around, as I think the dark sides of California do, by weirdness untethered in a way that does not move anything forward. So how do you balance that?
Yeah. I don’t know if my answer is very interesting because it’s really just the way that I’ve been constituted. I just — my father and stepfather were both engineers. And a lot of my friends are engineers. I’ve always just respected and understood science in a certain way, even though I’m kind of outside of it. I didn’t study that much of it directly in school. But I’ve spent my life reading it and understanding rational ways of understanding the world.
And I was always then curious about how I — and this goes back to California — growing up in Southern California, the late ‘70s and early 1980s, at the tail end of the counterculture surrounded by the spent fuel rockets of that whole experience. I was introduced at a young age to experiences — unusual experiences, altered states, different practices, different ideas, that came to me not as fantasy novels or as delusions, but as interesting ways that reality can manifest itself.
And so I’ve always kept both of those tugs going on in my mind. And I’m very interested to — I think any kind of alternate view you have, any sort of woo call you hear, it’s a legitimate call. Let’s see. Let’s go. What is it like to inhabit that? What’s it like to experience that?
But please, at some point in the game, you have to take that construct and kind of dip it in an acid bath of skepticism and see what remains. It might not be much, but that’s OK, too, because you’re just part of the iteration. So it’s like you’re just moving forward in this kind of openness. And in a way, it’s just provided — it’s just always the way that I’ve been functioning in these realms.
And I think it’s a good one because I think it’s really important to be open to the possibilities of experience beyond your knowledge and the possibilities that things are operating very differently than you can imagine. And at the same time, to respect the understanding that we have as a species, as knowledge holders, as members of a scientific society, as well as people who have inherited a kind of skeptical operation that creates a space of freedom. I mean, that’s the thing we forget. Skepticism can seem like it’s just deconstructing or saying no — like, no, no, that’s bullshit. No, no, no, that’s not it. No, no, no, that’s not it. When actually it’s a gesture of freedom, of emancipation from delusion, from limited thinking. And so to keep that play of freedom going in both sides, both in the exploration and then also in the conception of what’s actually going on.
I think that is a good place to end. Always our final question — what are three books you’d recommend to the audience?
Yeah, absolutely. I’m just going to repeat your suggestion, because I just finished Meghan O’Gieblyn’s book, “God, Human, Animal, Machine,” and I just loved it. It just felt like a new friend. And there are so many good things to say about it, but I will focus on one: her reading of Calvinism in American history, and particularly in relationship to technology, was absolutely brilliant. And to me, that’s the secret.
People always talk about America in terms of Puritans, and then they get involved in the moral dimension of Puritanism, and the City on the Hill, and all that kind of stuff. It’s actually Calvinism, and predestination, and the preterite and the saved. That’s really the kind of weird Christian programming that I think we have to reckon with as Americans. Thomas Pynchon saw this. And she just ran with it really well.
But even more than that was the way in which she wrote as a model for how to think about all these big issues. It’s just this kind of thing — I mean, that’s an example of what we were talking about. That A.I. and these new concerns bring up all of these questions about philosophy and religion, and who we are and our own experience. And the way she wove those things together, invited us in, didn’t beat us over the head, but then was very, very smart, very accurate. I thought it was a great model for the kinds of conversations that we need to be having, that we are having, and an affirmation of that.
My second book is the new book by Mike Jay, who’s kind of our best drug historian. It’s called “Psychonauts: Drugs and the Making of the Modern Mind.” Jay’s been writing for years and years and years. And in a way, this book is kind of a medley, where he’s woven a lot of the earlier work he did looking at the history of drug taking in modernity into this remarkable story.
Because that’s one of the features of what’s going on now that we didn’t talk about that I spent a lot of time thinking and writing about, which is the psychedelic renaissance, so to speak, and the radical transformation of the possibility of drug taking as part of our modern condition. I mean, it’s really remarkable.
And he just does a marvelous job of showing, from artists to scientists to seekers, how our relationship to psychoactive drugs — that we can take a material that then produces shifts in consciousness; what do we do with these experiences; how do we manage it; how do we represent ourselves; how do we write ourselves through it — is really, really at the core of modernity, in a way that we don’t often acknowledge, because we keep pushing it to the side as drug abuse or crazy people or whatever we do.
And then my third is — I’ve heard guests pull this move before, so I’m going to offer a podcast instead of a book. And that would be, appropriately, “Weird Studies.” “Weird Studies” is the work of two very smart, very playful, very wonderful spirits, J.F. Martel and Phil Ford. And they’ve just done a remarkable job. It’s about 150 episodes of looking mostly at the literature and the cultural artifacts and the art associated with the weird, broadly understood.
But their rapport, their range of, again, critical understandings, philosophical influences, but also an openness of heart and mind and spirit to possibility. They also model a way of moving through this territory, which, in a way, is kind of what interests me. It’s not just, here’s this territory. The job is, how do you model how you move through this territory?
And they do a remarkable job of reminding us of how deep and dense and rich these currents are in modern culture, but also how much meaning and insight can be had from thinking about them seriously.
Erik Davis, thank you very much.
Thank you. [MUSIC PLAYING]
This episode of “The Ezra Klein Show” is produced by Annie Galvin. Our show is also made by Emefa Agawu, Rogé Karma, Jeff Geld, and Kristin Lin. Fact-checking by Michelle Harris. Mixing by Efim Shapiro. Original music by Isaac Jones. Audience strategy by Shannon Busta. The executive producer of New York Times Opinion Audio is Annie-Rose Strasser. And special thanks to Sonia Herrero and Kristina Samulewski.
[MUSIC PLAYING]