Buddhism and Technology with Dr. David Kittay

Dr. David Kittay, Professor of Religion, Technology, and Philosophy at Columbia University


In Part 2 of our interview, Professor David Kittay spoke with us about whether we are living in a simulation, whether we should treat AIs with compassion, and whether the singularity might be another term for enlightenment.

If you haven’t heard the first part of the interview, where Dr. Kittay talks about technologically assisted enlightenment and the nature of time, you can skip back an episode and enjoy it first.

Emptiness and the possibility of life as a simulation

Scott Snibbe: Another thing that you talk about, a very contemporary topic, is simulation; the kind of simulations we see in video games or virtual worlds. And I think one of the questions you ask your students concerns the teaching on emptiness, which says that we should see reality as like a reflection, an echo, an illusion, a mirage. You ask whether we should add “like a simulation” to that understanding of reality. Tell me a little bit about how you and your students have thought about this.

Dr. David Kittay: Yeah. This has become a big thing in fiction now, in science fiction. Is this, right now, a simulation? In other words, are we the chemistry experiment of some alien kid in junior high school? Because now, of course, we’re starting to run simulations and they’re getting more and more complex. And so, of course, you have to ask yourself, Well, if we really got much better at this, then could we create something like this as a simulation?

The philosopher Nick Bostrom at Oxford has made what he calls the simulation argument, where he says that one of these three statements has to be true. One, in the multiverse no civilization has advanced far enough to run simulations as good as this, as good as what we see and experience. Or two, somebody got that advanced but had no interest in doing it. And if neither of those two is true, then, three, this is a simulation.

If you think about the vastness of the multiverses, yeah, probably most civilizations self-destruct before they get that good. There might be a bunch of them that got to that point, and if they were so advanced they could run simulations like this, then who knows what they’d be thinking. Maybe they’d have no interest in running simulations. So it makes sense; I think Bostrom puts it at about one in three. Elon Musk says he’s just about sure this is a simulation.

Now, about reality like an illusion, like a magic trick, like a dream… we find this in the writings of Nagarjuna. And we find this fairly consistently. You know we find the same thing in the Bible. Genesis, chapter one, verse 26, Let us make man in our own image. What? Let us make man in our own image? And of course, there are some preachers who are really running with this and saying that, Wait a minute, simulation theory is the good news. 

Then in the Muslim tradition we have two very closely related concepts. We have one of the 99 names of God, al-Haqq, which means “the truth.” But then we have the concept of al-Khaliq, which means “the one who created,” which could also have the sense of the one who simulated, the same thing. So we have an echo of this in the religious tradition.

Now, what’s cool about it just from a pedagogical point of view is: Okay, Scott, if you were that advanced, right? You’re that alien kid. What kind of a universe would you simulate? Would you simulate a universe where there was pain or evil or mosquitoes? We have the same question about the singularity popularized by Ray Kurzweil: when we reach such a state of advancement, we become almost like God. We can live forever, we can manifest in whatever form or forms we want. What kind of a world would you create?

And so you get to the question which in philosophy and religious studies we call theodicy. If there is an all-powerful God, how can there be evil in the world? We all think about that. Why do we have to die? But usually when we think about these things we think about them through the interference of our upbringing and religious traditions and all that. But when you just think about the singularity or about simulations then you get right there through the back door. So it’s a very interesting way of thinking about all this.

Do AIs deserve our compassion?

Scott Snibbe: That brings up the awful proposition raised by Westworld: that the world some people might create, if they could, is one where morality didn’t matter and they could do awful things without consequence. What about that question of whether morality, or even enlightenment, still matters in a simulation?

Dr. David Kittay: All we can do is fight for morality and fight for the right thing, whether this is a simulation or not a simulation.

There’s always the temptation to let go of that and just be for yourself. That’s another mammalian thing to do. We all can be selfish like that. And again, here’s where logic comes in. All we can do is do our best to help others.

Scott Snibbe: The Buddhist view is that by harming another person, even imagining harming another person, you’re the one who really suffers the most ultimately.

Dr. David Kittay: I’m such a skeptic that my whole dissertation was, on the one hand, a translation of one of the explanatory tantras of the Guhyasamaja, a wonderful text called the Vajra Rosary. But on the other hand, what I did with it was take a deeply skeptical approach to it. I said, Okay, you have the believers who say, Oh yeah, this is all about becoming a Buddha. But you have most of the religious scholars who say, Oh, this is about power and control. It’s about keeping down women. It’s about keeping the lamas employed.

And so I started to think about that. I started to think, What’s the best way to approach this? Because I was skeptical about both points of view, of course. So I started asking myself, maybe we could do something really stupid: ask, how much of it is about social control? How much of it is about spirituality and becoming a Buddha?

So I created this little algorithm, and part of the algorithm had a self-corrective. Basically, if you were doing it, Scott, the rule would be: if you think this is mainly spiritual, then we’re going to deduct 20%, because you’re probably biased in everything you’re doing. Not as an algorithm that says definitively what it is, but just as a way of starting to think about it, and to be able to engage in dialogue.

Without the 20% corrective, this wonderful text, the Vajra Rosary, would have been more about social control, et cetera, than about real spirituality. But when I corrected for my own bias, I just had a narrow victory for “this is really spiritual and really great.” So my skepticism really comes into play in everything I do.
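For readers who like to see the idea spelled out, here is a minimal sketch of that kind of self-corrective scoring, purely as an illustration. The function name, the scores, and the categories are hypothetical assumptions, not the actual method used in the dissertation; the only element taken from the conversation is the 20% deduction for the scorer’s own bias.

```python
# Hypothetical sketch of a "20% corrective" for the scorer's own bias.
# Scores and categories are illustrative, not from the dissertation itself.

def corrected_verdict(spiritual_score, control_score,
                      reader_leans_spiritual, corrective=0.20):
    """Compare two raw readings of a text, discounting the reader's bias.

    spiritual_score / control_score: how strongly the text reads as being
    about spirituality vs. social control, in any comparable units.
    reader_leans_spiritual: True if the scorer already believes the text is
    mainly spiritual, so their spiritual score is reduced by the corrective.
    """
    if reader_leans_spiritual:
        spiritual_score *= (1 - corrective)  # deduct 20% for the scorer's bias
    if spiritual_score > control_score:
        return "mainly spiritual"
    if control_score > spiritual_score:
        return "mainly social control"
    return "too close to call"

# Example: a raw reading that favors spirituality survives the corrective
# only narrowly, echoing the "narrow victory" described above.
print(corrected_verdict(spiritual_score=60, control_score=45,
                        reader_leans_spiritual=True))
```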

And I gotta say that this skepticism really is right in the middle of Buddhist practice. I remember when I first talked to the great Lama Locho Rinpoche about doing a retreat and imagining myself as some kind of deity. He said the important thing is, remember everything you say is empty.

Scott Snibbe: Yeah

Dr. David Kittay: In other words, yes, you’re making yourself into whatever deity: Yamantaka, Guhyasamaja, Chakrasamvara. But it’s empty.

And this is where we get back to what you were talking about Scott, that reality is like an illusion, like a magic trick, like all of those things. I like thinking about it that way because it’s so playful. It means that things aren’t set in stone. A magician’s trick or an illusion or an echo, you can play with it. And so again, I think it gives us more of a sense of freedom.

Scott Snibbe: Yeah, absolutely. I like that “how much” framework you use to avoid dualities in ourselves. It would help so much with a lot of the debates we’re having today. So another thing: it’s amazing asking you all these questions, because a lot of them are actually the questions the Buddha refused to answer, you know, so I guess we’ll ask them anyway.

Dr. David Kittay: Yeah, they say fools rush in! 

Could an artificial intelligence attain enlightenment?

Scott Snibbe: So you talk about artificial intelligence, and the great question of whether computers can become conscious, whether they can have emotions, morality, a sense of self. And you’ve taken this even a step further and asked whether computers can attain enlightenment. I know your method is one more of questioning than answering, but can you talk about this question a little bit? Should we treat artificial intelligences as people? Can consciousness come to inhabit a machine? Can AIs attain enlightenment? What have you and your students explored around this question?

Dr. David Kittay: Yeah these questions are really difficult and they’re way beyond me, but that doesn’t stop me for some reason. Thomas Doctor just wrote a great article fairly recently about this question of robots and artificial intelligence. And one of the things he points out is, look how we’ve treated animals, and our species chauvinism. And that’s very un-Buddhist in a way, that’s very egotistical. Why should we limit our consideration, our compassion to creatures like us?

It’s so selfish. How could you be wrong in being compassionate to a robot? What would be the downside of that? You know, you might be silly, like a little kid with a doll. If we could be compassionate toward all creatures, toward all intelligences whether they’re conscious or not, then that would be a better way for us to be compassionate. And no matter what your degree of skepticism, I think you have to agree that the world probably gets better with compassion. Why limit compassion?

Scott Snibbe: You know, we’re both parents, right? We get concerned when we see our kids torturing their doll. So why don’t we get upset when we see our kids playing a video game and killing simple artificial intelligences?

Dr. David Kittay: You know, it’s a really interesting question, and I think we have to be humble about it too. Because I know people in my generation get all upset about video games. But when I was a kid, most of my games were playing army, and I was fighting against Nazis, fighting against Japanese, fighting against Chinese, and I was killing them in my imagination. Maybe not so different? Maybe video games are a little bit better. And we’re just at the beginning with technology. It’s a new toy for humanity, so we shouldn’t jump to conclusions. We should be careful about it and mindful of it, but we’re just at the beginning.

Scott Snibbe: Something one of my teachers asked me is, why do we enjoy going to a movie to watch people die and suffer? And he said, Because it’s not real. And maybe it’s actually a very positive thing, because that reminder of the unreality of media or fiction can help us start to question the reality around us, hopefully without letting go of morality.

Dr. David Kittay: That’s really true. There’s a big difference between thinking something and acting on it. Part of being human is that we have all kinds of thoughts, some of them dark thoughts, and we usually suppress them. And as long as we’re not acting on them that’s okay. We’re not born into perfection. Part of integrating this is learning who you really are, and then having some level of acceptance about it.

Scott Snibbe: Yeah, so if you could play Call of Duty without anger or attachment, then it’s not at all harmful, maybe even beneficial. Which I think a lot of people actually do. I think when people are playing that game and enjoying it, it’s a way of connecting with their friends.

Dr. David Kittay: Yeah and maybe also it expiates some of the violent feelings within us. 

Scott Snibbe: Yeah, we need some outlet for those things. They say even highly realized beings still have these thoughts, but they just don’t get attached to them. They just let them pass by.

The singularity and enlightenment

Scott Snibbe: So another one of these great topics you talk about is the singularity. And you brought up a point I had never thought of myself, asking whether the singularity is another word for enlightenment, and if not, how it is different from enlightenment. Can you talk a little bit about that idea before we address whether the singularity is possible or not?

Dr. David Kittay: Yeah, the singularity is the idea that, as technology gets better and better… basically it says that humanity has progressed in a linear way, but now that we have artificial intelligence and computers starting to be able to redesign themselves, the intelligence curve of a combined human-machine intelligence goes straight up. And Kurzweil describes this as us becoming more God-like: we will be able to live for as long as we want, in whatever form we want, in whatever world we want. And the science fiction writers have been on this for a while.

Will artificially intelligent technology enable enlightenment in the same way Buddhism understands it?

So the question is: could this be enlightenment? Well, it sounds very much like enlightenment, doesn’t it? Most of us think about it in terms of the impact on ourselves. But at least in the Mahayana Buddhist concept of enlightenment, it’s for others. So even thinking about this singularity, whether it could be possible, whether I could be a part of it, is like a Rorschach test of what we’re thinking.

So the question would be, if there were somehow some infinitely intelligent carbon-silicon hybrid, would that be enlightened? Now, enlightenment as Buddha talked about it had to do with beings. And in the Buddhist tradition it is people who get enlightened. And people are and have always been partly logical, partly computer brains in a sense, and very much mammal.

So the question would be then: is the model of enlightenment for a mammal the same as the model of enlightenment for a being that is maybe less mammal? Now, Kurzweil says that the intelligence an advanced being would have as we approach the singularity would include emotional intelligence. Oh, this is a big thing among the developers of AI now. How can we instill a sense of ethics in AI? Will AIs be influenced because they train on things created by humans, with all of our good points and bad points? Will they adopt the same things we have adopted? It’s a big open question.

To ask whether the singularity is enlightenment, you’re talking about two terms that are almost incomprehensible, right? What is enlightenment? Who can say what enlightenment is? There was a monk, and I’m forgetting his name, but he wrote something a few years ago where he said that everyone experiences Nirvana. Nirvana is just when we let go of those things that imprison us, when we experience some relief from them. Nirvana is not something that’s unattainable, up there, pie in the sky. So I think the answer is: I don’t know. And that’s okay.

Scott Snibbe: I like what you suggest, that there is possibly a moral aspect to the singularity, that perhaps there is a naturally emerging moral element of wisdom that comes about through the increasing power of artificial intelligence. Is that something you are suggesting?

Dr. David Kittay: It gets to the deepest questions of why we’re here, of what this is. Is this simply a random event where now the different forces are playing out, biological and computational, technological? Or, really, is this a simulation? Were we created by an advanced being, whether you call that advanced being God or something else? We really get to all these questions. And then I go back to what my dad always said. My dad was in some ways very pessimistic but he was married to an optimist. And my dad always said, “The optimists always win.”

Scott Snibbe: So far it seems like it.

Dr. David Kittay: So being that there’s so much we don’t know, shouldn’t we be optimistic and at the same time have our eyes open? Isn’t that really where the skeptic might come out?

Right? I’m skeptical about there being any particular solution to all of this, about there being a theory of everything, although I understand why we want it. But then, what seems to really work when I test it with my skepticism? Optimism. Optimism is important.

And I think some amount of faith is always important too, because I see how faith sometimes gets people through very difficult circumstances. And so I think that, for modern people, technology and science, when we look at them as being possibly consistent with the ideals of ethics and, for example, Buddhist practice, can give us a basis for faith. And that could be very good for us, and could help us get through this in the best way possible.

An introduction to the four mindfulnesses meditation

Scott Snibbe: Well, that might be a nice opportunity to admit the limits of intellectual discourse. You’ve also agreed to lead us in a meditation, which takes us to that intuitive way of approaching some of these questions beyond words and through inner knowledge. Could you talk a little bit about the meditation that you’re going to guide us through for our listeners? They’ll hear it the week after this episode.

Dr. David Kittay: Sure. I’ve been doing these meditations weekly with students and former students and others who are interested since the pandemic struck. And what I’ve been modeling them on is exactly what Buddha said about meditation, where he talked about meditation on the breath and the four mindfulnesses. 

And it’s so interesting that Buddha actually said this is the way to do it, this is the only way to do it. First, we meditate on mindfulness of body, then mindfulness of feelings, then mindfulness of mind, then mindfulness of mental objects. And, of course, each of these has about four parts, more or less. You could just spend a few years on mindfulness of the breath; that’d be okay.

But the meditation we’ll go through really takes us along a good part of the Buddha’s path, all the way to Nirvana. I’m not saying you’ll get to Nirvana. But it’s so interesting, and it’s prescribed by Buddha. Thich Nhat Hanh, the great Buddhist teacher, has also come up with his own version that is closely based on this, but with some nice ways to think particularly about feelings. Our feelings are so important. So that’s what we’ll do.

Scott Snibbe: Well, thanks for joining me in this incredible interview and conversation. I’m really excited to do this meditation right now, and for our listeners to try it next week. So, I’m really grateful for your time.

Dr. David Kittay: Scott, it’s been a great pleasure. You’re terrific. What a great thing the skeptic’s approach is. Thank you so much.

Credits

Hosted by Scott Snibbe
Produced by Tara Anderson
Audio mastering by Christian Parry and Chris Boulton
Theme music by Bradley Parsons of Train Sound Studio
