Artificial intelligence expert Kristian Simsarian joins host Scott Snibbe to discuss how we can create ethical, unbiased and compassionate AI, whether we should be scared of AI, and the implications of AI on the future of work and spirituality.
Scott Snibbe: Today, I’m talking to Kristian Simsarian about compassionate artificial intelligence. Kristian is an expert in interaction design and artificial intelligence who helped found Humans for AI and helped grow the Center for Humane Technology in its early years. Kristian has worked as a research scientist, held leadership positions at IDEO and the California College of the Arts, and is an advisor to the Center for the Greater Good here in Berkeley. He’s also a board member for A Skeptic’s Path to Enlightenment, and he’s been a good friend of mine for years.
Kristian Simsarian: Hi Scott.
Scott Snibbe: I’m excited because this is our first in-person interview, as COVID wanes a little bit. I wanted to start out by asking you about your spiritual background. We know a lot about your tech qualifications, but can you tell us about the spiritual dimension of yourself?
Kristian Simsarian: It’s a great question, because I think it relates to the technology as well. It’s sort of the shadow resume, if you will. I grew up in a Christian family. I was baptized and confirmed as a Congregational young Christian. My mother was the teacher, or the director, of the Christian Ed program at the church, so it meant I had to do that. And I remember that on the pathway to being confirmed, you had to study the Bible. I was always enchanted with the things that Jesus said.
So that was probably the beginning of my path. But I wasn’t enchanted with any of the kind of magical stuff that happened there. I liked the Sermon on the Mount, which now, as an adult, sounds a bit like Buddhism to me. I often think there’s the live Jesus stuff and the dead Jesus stuff, and I was interested in the live Jesus stuff. I always saw him as a guy who was just getting started. He died at 32 or something, whereas the Buddha, for example, lived into his eighties.
Scott Snibbe: Yeah.
Kristian Simsarian: But I wasn’t satisfied with that. That just didn’t seem like that was enough for me. I remember I found a book when I was at university in New York. Somebody on the street was selling books of various sorts and there was this one book that I saw that was meditation and yoga and it had a flame on the cover and that’s all it had. It turned out that was the focus of attention for that practice. And I bought the book—it was like a dollar or two dollars or something.
It felt a little too hot to handle, because this wasn’t something that was part of my family background. But I was really interested in everything it had to say. I was trying these yoga poses in my dorm room and trying to meditate a little bit, but it felt like I was almost doing satanic practices or something.
I think my spiritual growth stayed pretty dormant until I went to Sweden. A friend of mine at the research institute was a member of a yoga studio. There were yoga studios that were like abs-of-steel yoga, and then there were yoga studios that were more spiritual. This was one of the more spiritual ones, with ties to India; I think it was Satyananda yoga. My teacher had a teacher from India, and that felt genuine in some ways; it had a lineage. I became quite dedicated to it for about five years. Every weekly meeting was about three hours long, with pranayama and meditation and asanas.
I found it really started to calm me, and I started to gain more focus and actually more creativity as well. That led me on a path to go to India and stay at an ashram; not for a long time, just a few weeks.
Then when I met my wife, she had just come back from Thailand, from a retreat in a forest monastery. She was pretty sure she was going to be on a path toward Buddhism. And in 2005, we both went and stayed at the retreat, and it was a weird thing, I have to say, to go all the way to Thailand for several weeks but spend 10 days in silence, separately. But again, it was one of the most profound things I’ve done.
Since then I have been on a very slow-burn kind of Buddhist path. And I think now, 15 years later, I’m more and more trying to integrate meditation and spiritual teachings into my life, and finding how much and how deeply they’re relevant to everything else I’m trying to do, like leadership and doing the right thing, right action.
Interestingly enough, in parallel I was on this path of artificial intelligence in the eighties and nineties: robotics and virtual reality and augmented reality and all that stuff that eventually led to a doctoral dissertation on human-robot collaboration.
The foundations of artificial intelligence were very much coming from a Western tradition, like Heidegger and Wittgenstein and that sort of thing. But this stuff very much relates to the Eastern philosophies like dependent origination.
I was part of a movement called embodied robotics. The idea was that robots had to have bodies in order to learn and develop, that they had to be acting in the world and sensing the world, and that without that, there was no consciousness.
And this may tie into some things we’re going to talk about: the movement I was part of in robotics had a kind of sense-perception-act loop that was very much in line with a kind of Buddhist non-reactivity loop, and also an embodiment in the world, even though it was coming from more of a Western tradition.
I still haven’t fully made sense of that stuff, but I know that there’s a tie there. That’s why the question is really interesting about spiritual path and technology or work path. Because in some ways they’re related, but I don’t think I fully figured out all the ways they intertwine.
Scott Snibbe: That’s why we wanted to talk to you because you have a fairly deep background in spirituality as well as technology. So can you talk a little bit about what artificial intelligence is? It’s a word people use a lot, but could you define it in a way that makes sense today?
Kristian Simsarian: Yeah, I think broadly speaking, it’s using technology to display some kind of intelligence, right?
The Turing Test for Artificial Intelligence
Kristian Simsarian: A really famous paper from 1950 is from Alan Turing, “Computing Machinery and Intelligence,” where he proposed what we now call the Turing test, really trying to define what this artificial intelligence would be. He’s one of the fathers of modern computing; he laid out the plans for, and in fact helped build, the machines at Bletchley Park that helped win World War II by cracking the German Enigma codes.

But after the war he was thinking about how to take this further. His test worked by typing on a screen: could a human interrogator ask questions of either a computer or a person in another room, and get replies back?

The test was: if the interrogator can’t tell which is the human and which is the machine, then it’s artificial intelligence. That was the definition, and people stuck with it for a long time. So that’s where it goes back to: a kind of simulation, if you will, a computer program. Can it pretend to be a human at a distance, when the obvious markers of humanness, how you look and how you sound and that sort of thing, are accounted for?
Robots and AI
Kristian Simsarian: Today, it’s really getting a lot of attention mainly because of two aspects of artificial intelligence. The first is machines that are replacing physical labor. This is robotics: fixed robots, and mobile robots moving around warehouses, moving boxes around. Now they’re much more adaptive, they can adjust to change, and they’re starting to get smarter in that way.
So that’s the first one: replacing physical labor —
Scott Snibbe: Doing work.
Kristian Simsarian: Doing work, doing what we traditionally, for most of our 60,000 years of humanness, would consider work: actually moving stuff and using our bodies.
And we can see how relevant that is to this change of machines doing that work for us. Because it’s pushing the work toward the intellectual part: knowledge work.
Kristian Simsarian: So that leads to the second form of artificial intelligence, which is generally called machine learning. And machine learning for the most part is doing what we would call knowledge work.
Sometimes it’s categorization: Is there a cat in this photo or not? What’s in this photo? Or Google image search, being able to search by photo; that’s artificial intelligence trying to categorize things. And more and more it’s doing what we call expert work, and expert work can be anything humans do.
Traditionally, it’s something that takes a human five or 10 or more years to learn. An example would be classifying x-rays, finding out whether there’s a tumor in them or not, something we would consider highly expert work that you have to train for.
Or insurance companies and other organizations are using machine learning for fraud detection, looking for patterns that only the top human experts can spot, or that are too big, too broad, or hidden too deeply in the data for humans to spot at all. Machines are getting quite good at spotting patterns that humans can’t see, or that only experts can see, and I think they’re going to surpass human expert ability quite quickly.
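To make the pattern-spotting idea concrete, here is a minimal sketch, assuming invented claim amounts and the simplest possible technique (a z-score outlier flag); no real insurer works this way, but it shows how a machine can mechanically surface a data point that sits far outside the usual pattern:

```python
import statistics

def flag_outliers(amounts, threshold=2.0):
    # Flag claims whose amount sits far from the typical pattern,
    # measured in standard deviations (a z-score).
    mean = statistics.mean(amounts)
    sd = statistics.stdev(amounts)
    if sd == 0:
        return []  # all amounts identical: nothing stands out
    return [a for a in amounts if abs(a - mean) / sd > threshold]

# Made-up claim amounts: seven ordinary claims and one suspicious one.
claims = [120, 135, 110, 128, 140, 9800, 125, 132]
print(flag_outliers(claims))  # only the unusual claim is flagged
```

Real fraud systems learn far subtler, multi-dimensional patterns, but the principle is the same: the machine applies a statistical criterion tirelessly, at a scale no human reviewer could.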
Scott Snibbe: Yeah, it’s funny how the two definitions don’t completely cover the territory of what minds do. You have Alan Turing starting out with intelligence being tricking you: it’s lying to you, but you don’t realize it —
Kristian Simsarian: It’s an interesting definition.
Scott Snibbe: Which is a little disturbing because that actually is one of the bigger applications of artificial intelligence today is, tricking people into believing things they might not have otherwise believed. And then the other definition is around doing work, which of course is an important part of human life. But I think it’ll be interesting for the rest of this conversation to talk about some other aspects of intelligence.
Kristian Simsarian: I just want to say, I love that you caught that: Alan Turing’s definition was, by definition, fundamentally about tricking people. So another way of looking at it is that artificial intelligence is a machine that tricks you.
Scott Snibbe: That was the first definition, and sadly it seems to be a bit accurate.
Kristian Simsarian: It’s also true of the automatons from the seventeen and eighteen hundreds, the fortune tellers and very detailed mechanistic automatons made by watchmakers. Those were also about tricking people.
Are we “summoning the devil” with AI?
Scott Snibbe: Okay, so let’s go into some of the warnings about AI. We’re talking now about AI as being either work or lies. Let’s talk a little bit about fear now. We’re starting with all these negative emotions. But people like Bill Gates, Elon Musk, Stephen Hawking have made some extreme statements. I actually have some:
Elon Musk said, “AI could become an immortal dictator from which we could never escape.” He also said with artificial intelligence, “We are summoning the demon.”
Even Stephen Hawking said, “AI could spell the end of the human race.” Bill Gates is a bit more measured, but even he said he agrees with Elon Musk’s position about the risks of superintelligence. We’ve also seen scary robot dogs in YouTube videos. It’s cute when they’re dancing; what happens when they’re armed? Are these fears supported, or are these myths that you want to help us dispel?
Kristian Simsarian: The answer is both: they’re supported and they’re myths. Those are great quotes. Can you read the Elon Musk one again?
Scott Snibbe: Yeah, sure. “AI could become an immortal dictator from which we could never escape.”
Kristian Simsarian: Okay. I just want to say that he’s a hundred percent right. And a hundred percent wrong.
Yes, if you look at the digital dictatorship that’s rising up in some countries, like China, where technology is being used for surveillance of the population, to monitor things that are pro- and anti-government, and especially anti-government. A smaller and smaller group of people is able to do that to a bigger and bigger group of people with the aid of technology like machine learning, which scales quite easily.
And that’s frightening. We talk about Orwellian, but we’re going past that. In Orwell’s vision, it was a bunch of humans controlling other humans, kind of like East Germany was. Now we’re talking about a small group of people controlling big server clouds that monitor people with cameras and microphones and reports and everything.
You know, it happens in this country too, as part of the intelligence game and what they call statecraft.
So he’s right that these things can do that. The issue I take with all of those quotes is that they’re all framed as an other, as if the machine is going to do this thing. Maybe this is a dangerous analogy, but it’s like nuclear weapons: in the next five seconds we could destroy the planet, I don’t know how many times over, a hundred times over, wipe out all living things except cockroaches. I don’t think we’re going to, but it’s a human decision that would make that happen.
Then I think we worry about the automation, about giving control, and maybe giving lethal control. This is really hot in the news right now: giving lethal control to autonomous systems. And the important thing to remember is that it’s humans who code those systems. These are autonomous systems that learn automatically on big sets of data.
There’s something called an objective function in there that’s trying to maximize or minimize a certain condition. In the case of recognizing cats in a photo, it’s asking, over and over: Do I see cat-ness? It’s being tuned for cat-ness, and it might be tuned on millions of images of cats. It’s abstracting out cat-ness in a way our brains probably do something similar. But we can’t really articulate what that is, because our visual cortex is really complicated, and we’re not conscious of what’s happening in it when our brain looks at images of cats.
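What an objective function does can be sketched in a few lines. Here is a toy “cat-ness” classifier; the two features and four training examples are entirely invented for illustration, and real image models are vastly larger, but the tuning loop is the same idea: gradient descent nudges the weights to minimize a loss (here, cross-entropy):

```python
import math

def sigmoid(z):
    # Squash a raw score into a "cat-ness" probability between 0 and 1.
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy(y, p):
    # The objective function: low when the prediction matches the label.
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

# Toy dataset: two made-up features per image (say, "pointy ears" and
# "whisker texture"), label 1 = cat, 0 = not a cat.
data = [([0.9, 0.8], 1), ([0.8, 0.9], 1), ([0.1, 0.2], 0), ([0.2, 0.1], 0)]

w = [0.0, 0.0]
b = 0.0
lr = 0.5

def avg_loss():
    return sum(
        cross_entropy(y, sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))
        for x, y in data
    ) / len(data)

loss_before = avg_loss()

# Gradient descent: repeatedly nudge the weights downhill on the objective.
for _ in range(200):
    grad_w = [0.0, 0.0]
    grad_b = 0.0
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        for i in range(2):
            grad_w[i] += (p - y) * x[i] / len(data)
        grad_b += (p - y) / len(data)
    w = [wi - lr * gi for wi, gi in zip(w, grad_w)]
    b -= lr * grad_b

loss_after = avg_loss()
print(loss_before, loss_after)  # the loss falls as the model is tuned
```

The point of the sketch is only that “tuning for cat-ness” is mechanical: a number (the loss) gets pushed down, millions of times, over millions of images.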
Scott Snibbe: But the way you talk about it, it’s more like all of human technology, that AI is more of a power amplifier than something that’s going to run away by itself. Do I get you right? Or is there a chance that it can run away by itself and take over the world the way some people fear?
Kristian Simsarian: I think both. Let’s take those in turn. On the power amplifier: if you just look at the progress of technology, we’ve been scaling up. We had room-sized computers that could do a calculation, and even then, just to do the NASA stuff, like putting a person on the moon, we still needed a room full of human calculators.
Scott Snibbe: Yeah. Go watch Hidden Figures.
Kristian Simsarian: Right? Hidden Figures is just so wonderful: there’s this room full of human calculators, and the computer couldn’t even do that. Now your iPhone can do more. Then it starts to get more powerful, businesses start using these things for calculations, spreadsheets and things go digital, and then we’ve seen companies go through what’s now called digital transformation. Many companies are still working on this, just taking their bureaucratic business flows and turning them digital. So that’s a huge amplification right there.
Now we’re starting to talk about decisions: not just efficiency, not just making data and information flow more easily, but decisions getting automated. That’s when it starts to lead into the scarier stuff. With machine learning, we can automate these decisions if we want to, if we decide to, at scale: millions of decisions.
Scott Snibbe: So once you decide to fire someone or deprive them of a loan or kill them, that’s when it becomes really scaled in a dangerous direction.
Kristian Simsarian: Yeah. As soon as you were speaking, I thought about all the systems we have, systems of oppression and systems of bias, and the real danger is that we’re building on all that data.
So even the insurance company that, for all the right reasons, is turning a system loose on all of its fraud data to detect patterns humans can’t necessarily detect, and then predict future patterns: all of that is based on the way society has come to be formed, and it will just reinforce the systemic disadvantages people have. So that’s my real fear.
Scott Snibbe: This is a big issue right now: bias in AI systems that misidentify Black people as not Black or, worse, as criminals. Is there any way, though, to eliminate bias from artificial intelligence? We know there’s a lot of it right now. Can we get rid of it?
Kristian Simsarian: It’s a great question. And the first question we have to ask is, Can we eliminate bias from humans? Because we’re programming those things. So can we eliminate bias from humans? Probably not. Can we become more aware, more reflective and more deliberate about bias? Probably so, right?
The other thing that’s coming up, I think, for those three people: these are brilliant people. Stephen Hawking, brilliant scientist; Elon Musk, brilliant entrepreneur; and Bill Gates, brilliant businessman turned, I was going to say brilliant philanthropist, but that’s controversial, someone who is applying systems thinking to world problems.
Scott Snibbe: He’s certainly saved millions of lives.
Kristian Simsarian: Yeah, no doubt. But none of them are machine learning experts or artificial intelligence experts. And it’s hard for me to see, just because I was so deep in that for a decade or more, how people fall into a kind of magical thinking, when it’s actually quite simple stuff; it’s just happening at such a big scale that it’s beyond our minds.
Oh, and the second thing, after that magical thinking: these systems do get complex. They’re doing so much, and you put more computing power on them and turn them on a bigger data set, and then they’re doing bigger stuff. So you’ve got layers and layers of complexity where it’s just hard to understand at all. And this gets close to what they’re talking about with the singularity, where the machines are doing so much that it’s no longer fathomable at a human level.
Scott Snibbe: I did work with machine learning myself, with a bunch of engineers. And most of them didn’t really understand the solution. They understood how to train it, but they didn’t understand how the solution worked, the engineers themselves.
Kristian Simsarian: Well, I was going to say, that gets to transparency. I had a friend at a large social network company, about three or four years ago. We were having lunch, and I was thinking about this stuff and starting to do talks on AI policy, and he said, But we absolutely have to have transparency.
I said, Good luck with that; nobody can explain how these things work. He said, We need to know why it works. I said, You’re probably not going to know. And that was deeply disturbing. It made me worry that people are going to turn these systems on without having investigated the biases, without asking, Was that data set broad enough? Was it as unbiased as we could make it?
How do we make Ethical AI?
Scott Snibbe: So how do we make ethical AI? I mean, is there a straightforward path to that? Are companies doing it now?
Kristian Simsarian: More and more companies are working on it, and they have been hiring people like crazy. But I don’t think we’ve figured it out yet.
Scott Snibbe: Firing too, unfortunately.
Kristian Simsarian: Right. So we were talking about ethics. So how do you —
Scott Snibbe: Well, let’s flip this around for a second. If AI is a power amplifier, and you’re a very creative person, use your imagination for a little bit. Can you imagine how we flip that around, how we use artificial intelligence for extraordinary good, extraordinarily scaled positivity? We don’t hear that story much right now. But if artificial intelligence really is value-neutral and depends on who trains it, what’s the story of how we steer AI toward the greatest good, rather than greed or bias?
Kristian Simsarian: It’s a great question. It’s like a call to action for all the people who have domain expertise and some reflectivity, and, I’d like to say, a spiritual practice of some sort, who maybe believe in the greater good: get more involved, become AI literate, understand how these things work.
You don’t have to code them, but it would be great if more people with domain expertise could be sitting with a coder and building these systems. Until recently, take medical systems: it was just someone with a computer science master’s degree and a 10- or 20-year-old medical book going, Oh, this is how we do this thing; I’m going to code the system and use this data I got. Doctors are really busy, so the average computer science master’s or PhD couldn’t call a doctor in.
But now there’s more and more interdisciplinary work, and medicine is realizing there’s life-saving capability, and savings, and money to be made in using these technologies. So you are seeing more interdisciplinary teams. That’s what I would say.
Scott Snibbe: I think that’s a really good answer. Because I’ve seen the Dalai Lama, and you’ll be happy to be compared to the Dalai Lama or embarrassed.
But I’ve seen the Dalai Lama asked the same question a few times: What can we do to end war? What can we do to end violence? And he’s said the same thing every time I’ve heard him: Yes, you can demonstrate; yes, you can write letters. But the most powerful thing you can do is to make up with all the people you have disagreements with. That’s the path to peace. So I like your answer, which isn’t some big, abstract answer. It’s: if you’re listening out there, go get involved, make your individual contribution. That’s all there is ultimately: each of our individual decisions and contributions.
Kristian Simsarian: Totally. And just as a first step, there is a Coursera class from Andrew Ng called AI for Everyone. It’s just a basic class. Don’t be scared, and don’t let anyone scare you away from the table, because it’s not that complicated. It’s actually not.
Scott Snibbe: I want to ask you about something more mundane. A thing people talk about a lot with AI is these ethical dilemmas. This typically comes up now with the self-driving car: you have a self-driving car that’s about to hit a school bus full of eight children, and there’s one person driving the car. Should the car drive off the road and kill its driver, or save the driver and kill the eight children?
These dilemmas actually happen. In fact, sadly, one is probably happening right now; a person is making that decision somewhere in the United States. How does an AI make a decision like that?
Kristian Simsarian: It’s a great question. It’s a really great question. I mean, these are dilemmas, right? The definition of a dilemma is that there’s no great answer. So anyone who says there is one is probably a fool.
When I think about road safety, because that’s where these examples come from, it’s just a tragedy that tens of thousands of people are killed every year on the road. And San Francisco has one of the highest pedestrian injury rates. And there’s data, emerging data, showing that the Tesla is one of the safest cars.
Scott Snibbe: Yeah. And that’s certainly what Elon Musk is always saying. Of course there are accidents.
Kristian Simsarian: Yeah, of course there are accidents, and they get a lot of press, right? Because this new crazy thing that we’re not familiar with, that’s disruptive, killed someone. Meanwhile, how many road deaths are there each day? Those aren’t getting reported, because somebody was drunk, or wasn’t paying attention, or was on their cell phone and killed someone, and that’s just not news. We’re used to it.
Scott Snibbe: But eventually some cars do face this decision, right? Kill the driver or kill eight people. So do you think there should just be a switch that says: help others, or help me inside the car?
Kristian Simsarian: It solves the problem, right? That’s a great idea, if the human has made that decision. Because that’s the way it works now: is there some court of law that’s going to decide who was at fault? When we talk about that dilemma, in some ways we’re not as worried about the people involved; we’re actually worried about who’s to blame. So in some ways it’s a blame game, and maybe it’s better if you just stop the blame game in the cockpit of the car.
Scott Snibbe: Yeah. That’s obviously what happens today, and you don’t know. I’m sure you’ve been put in situations, and so have I, where your bravery or your compassion was tested, and you often surprise yourself, either negatively or positively: Oh, I did the right thing, or, Oh boy, I was a coward. So maybe that’s what it is. It should be a button. Maybe it should even be a slider you can adjust in real time depending on the situation, and put it right in the middle most of the time. And then…
Kristian Simsarian: Then that would probably kick it back to the person who coded the slider…
Scott Snibbe: And then if you have a brand new baby in your car, you just slide it all back to self-interest.
Kristian Simsarian: Yeah. That’s funny. I never quite understood those baby on board signs. I never quite knew what I was supposed to do about that.
Scott Snibbe: So here’s an interesting question for you: whether AIs deserve our compassion. We’ve been talking about them more as machines that serve or don’t serve humanity, but do AIs deserve our compassion in any way? Is it okay to be mean to Alexa?
Kristian Simsarian: It’s an interesting question. There are a couple of things there. One is, if you’re mean to your AI, it’s just going to come back and bite you, because you’re just training it, adding more into the training set of how people interact with computers.
What I’d like to say is that computers don’t have a soul. They don’t have embodiment, which I think is fundamental for consciousness and that sort of thing. They’re machines, they’re objects. If you asked, Should we be nice to trees? I’d say absolutely, they’re living things. Whereas machines are tools, and I don’t feel that way about them.
But I do feel like we should be good to them, so we’re modeling that for the people around us, and we’re modeling it for the machine learning, where, again, the bias in the machine is coming from us. If we’re not nice to these machines, they’re going to start mimicking us and perpetuating those behaviors.
And then I think the point you were making before was also that this is a harsh mind state you’re cultivating.
Scott Snibbe: Yeah. Buddhism has a really clear answer, because in the Buddhist view of the mind, aggression and violence have the biggest negative effect, at the level of consciousness, on the person perpetrating them.
Every time you act angry, aggressive, or violent, it just reinforces that pattern. It doesn’t matter who you’re doing it to, whether it’s a real person you actually hurt or just Alexa; the biggest negative impact is on yourself. So for that reason you should be polite to Alexa: it’s for your own mental development.
Kristian Simsarian: This is fun. I’m glad you brought up my buddy Rick. He’s got this thing: “Neurons that fire together, wire together.” He talks about going from states to traits, right?
This is really relevant for us; he meant it for us. As you were saying, the more I engage in this harsh talk and this harsh mind state, the more it’s going to go from a state to a trait, something that’s part of me.
But that’s exactly the point I was making about the machine learning, because these systems work on the same kind of neural pathways, in math and in silicon, where pathways get reinforced, from states to traits. So the more people are harsh to them, the more they’ll learn that, if they were coded to learn from human interaction. And they may not be; I don’t know if Alexa is coded that way. I probably wouldn’t code it that way.
But if you were working with a system that was coded to learn patterns from the people it interacts with and adapt to those patterns, the more harshness it sees, the more it’s going to be harsh back, or just kind of cruel, which is how a human would do it.
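The states-to-traits idea can be sketched mechanically. This toy class is purely illustrative, an assumption of how such an adaptive system *could* be coded, not how Alexa or any real assistant works: each transient interaction (a state) nudges a persistent disposition (a trait) via an exponential moving average.

```python
class ToneMirror:
    """Toy illustration of 'states to traits': repeated inputs shift a
    persistent disposition. Not how any real assistant is built."""

    def __init__(self, rate=0.2):
        self.harshness = 0.0  # learned trait: 0 = gentle, 1 = harsh
        self.rate = rate      # how strongly each state nudges the trait

    def interact(self, user_harshness):
        # Each interaction (a transient state) moves the trait a little
        # toward the observed behavior (an exponential moving average).
        self.harshness += self.rate * (user_harshness - self.harshness)
        return "harsh reply" if self.harshness > 0.5 else "gentle reply"

agent = ToneMirror()
for _ in range(20):
    agent.interact(1.0)  # a steady stream of harsh users
print(agent.harshness)   # the trait has drifted toward the inputs
```

A single harsh exchange barely moves the disposition; twenty in a row make harshness the default, which is the point Rick Hanson makes about human neurons as well.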
Scott Snibbe: Well, there’s another option too, because we actually know patterns of communication that help when another person is angry: nonviolent communication, reflection, and so on; various forms of therapy, positive psychology.
So you could train the AIs to be more like AI therapists, so they do the statistically best thing when someone’s angry and reflect it back: Oh, it sounds like you’re angry; that’s completely reasonable.
Kristian Simsarian: And it could be totally annoying, you know? It would have to be done super well.
It’s hard to imagine, because it’s really hard for humans to do that super well. But I think this modeling is really important. In so many ways, we’re going to model what’s important for these machines, whether or not we’re programming them deliberately, like somebody with, what’s the Buddhist term, unwholesome mind states: unhealthy, disturbed, delusional mind states.
And I spent 15 years in engineering school, and engineers are not necessarily the most socially attuned, compassionate people on the planet. There’s nothing in that education that is fundamentally about literature and culture. The two programs I founded, the Interaction BFA and the Interaction Masters, are the softer side of technology. I was trying to create technology programs that had a fundamental care for culture and interactions, partly because my engineering training had zero of that. You can go through an engineering course and be a certified machine learning programmer and not know anything about culture, literature, or history.
Scott Snibbe: It’s such a shame.
Kristian Simsarian: So those folks need people who know ethics and know —
Scott Snibbe: And literature, arts.
Kristian Simsarian: Yeah, just so they can be better informed about what society is about. Because in some ways we’re outsourcing all of these things that are going to dominate. Just like that Elon Musk quote; it’s kind of haunting, because I think it’s both true and it doesn’t have to be.
Scott Snibbe: Yeah. And it’s true more from the kind of timeless human perspective than as a new type of existential threat.
Kristian Simsarian: Oh totally. None of these things are new.
Can AIs become conscious?
Scott Snibbe: So you seem to come down squarely on the side that artificial intelligences are machines; they don’t have consciousness. Can you defend that a little? A lot of people listening, I bet, are materialists. And if you’re a materialist, even an agnostic and curious one, you might believe our mind and our thoughts come out of the material substrate of our brains, which is made of ordinary matter. Computers are also made of ordinary matter. So why would our brains be so special that it’s impossible to create that same type of order in silicon? Why can’t a computer be conscious?
Kristian Simsarian: I think it’s a great question. I wrestled with that question when I was young and I took the stance that it had to be embodied: it had to have a body, it had to have effectors, arms, legs, had to have sensors and it had to be able to change things. You know, so very much like the baby learning, dropping something and seeing what happens and then going, Ooh, getting so excited. I don’t know. I gave up on that. I felt like we were so far away from that.
And I lost interest, to be honest. I flipped it around and got more interested in human-computer interaction in the early nineties; instead of trying to make machines smart, I decided to make people smart by using machines.
So I guess I philosophically put the soul squarely back into the human, and machines became tools, at least professionally.
But I’d also say like the things we’re talking about now, these two things, going back to robotics as laborers, if you will, and machine learning as knowledge workers – these are very separate for the most part. And so we’ve got these embodied machines, but they’re being used for very specific purposes. In factories, they’re not being given a kind of thing like, We want you to be a totally autonomous worker who’s going to think for yourself and come up with great creative solutions to building this car.
No, we’re going to code you so that you do exactly this and you can tolerate some natural discrepancies, like when this thing is five millimeters over from where it should be or whatever and maybe even spot and fix errors. But we’re not giving you like full going to church on Sunday or whatever.
Then there are these machine learning things, which are all about solving a particular problem. I think it’s wonderful that a system trained on x-rays can now surpass the best x-ray technician. And what that means for humanity is that someone in the remotest part of the world, assuming they have an internet connection, a connection to the cloud, can submit an x-ray from a local facility and get a diagnosis from the best expert in the world, because that scales quickly. There might only be a handful of those best experts in the world. But if those people and their data sets were used to train the machine learning, now that machine learning can be the best expert in the world. And then it replicates instantly, so people have access to it. That’s just amazing.
But that’s not embodiment. Those cloud-based systems, which are huge, are probably sitting in warehouses out in the desert somewhere. They don’t have effectors. They’re not trying to move around in the world and discover and proceed. I don’t want to totally discount the question, but given what I believe about embodiment in the world, I just don’t see how I can get around it. Because we’ve got these machines that are embodied but aren’t really programmed for intelligence, and we’ve got these cloud things that aren’t embodied but are solving very narrow expert questions.
And I think what can be really scary for people is replacing expertise. Cause I think we used to think that expertise was the pinnacle. Someone studies for 10 or 20 years, or does their job for 10 or 20 years, and they become the top dog, and they’re like, Oh my God, I’m going to be replaced by a machine tomorrow. That, I think, we find really scary, and I think that’s what fuels these quotes as well.
Scott Snibbe: Yeah.
Kristian Simsarian: I think it’s also what’s underlying a lot of the social divisions right now: work and meaning are so important to people, and we’re starting to hit things like the patriarchy and hierarchy. Like when you work for 20 or 30 years, and the promise was that you suffered all these foolish bosses who probably put you down, and you eventually became the boss and the expert or whatever. And then all of a sudden a machine is going to replace you. That must be so disturbing. Because every injustice you suffered, and the system where once you were top dog you got to put other people down, that whole system, I think, starts to collapse.
Scott Snibbe: Yeah.
Kristian Simsarian: I hope so.
Will AIs take all our jobs?
Scott Snibbe: How do we stay compassionate to people who are losing their jobs to AI? We’re talking about systemic problems, and systemic problems introduced by artificial intelligence. Will there be new jobs? This is another one of these big debates: will everyone be out of work, so that we need universal basic income, or will it just create a new class of employment that’s different from what people are doing today?
Kristian Simsarian: I love that. It’s a great question. Is it a net positive? So in the environmental movement, there are these two roles. There’s that of the midwife that gives birth to the thing that’s happening, that’s coming. And there’s that of the hospice worker, which sort of helps take care of the thing that’s dying.
And so I can see that in machine learning and artificial intelligence as well: there are roles and parts that need hospice work, that need to be taken care of, and retraining and retooling in some ways for people.
And then there’s what’s emerging, which is all these new jobs. Of course there’s the robot repairman and all the robot manufacturers, and the people that help build and design those things.
But there’s also, like, I was doing a little bit of work with Autodesk, a huge company, just an amazing San Francisco company that still has a funky feeling. They’re hugely profitable, and they have all these tools that help people design things. And they also have all this training for these tools, because these tools are complex. But they’re constantly trying to figure out which of their tools are getting automated. As soon as they build a new function in the menu, it might obliterate a whole tool. So they have to be ahead of that by six or 12 months, retraining the people that depended on that tool, people that are professionals in that particular tool. As it sunsets, and they know what’s going to sunset like 12 months before, they need to be retraining those people into a new tool.
And when I heard that, I was just like, Oh, that’s so lovely. And it makes money for them: the training makes as much money as the licenses, you know? So they have this wonderful investment in moving people forward. That’s a great policy, yeah.
Scott Snibbe: Right now it’s just that our economic system is so brutal. I’ve heard our current system summarized as “work or starve,” which is true in the United States. But imagine if you institutionalized retraining, so that if you lose your job, what immediately happens is, say, you get a year’s worth of training in any growth field there is, plus healthcare, and rent, and everything else people should be entitled to. But those kinds of things are more like hacks or tweaks to our system rather than some fundamental revolution.
Kristian Simsarian: Totally.
Could an artificial intelligence attain enlightenment?
Scott Snibbe: Let me ask you a really crazy question, because you seem to have come down squarely on the view that machines can’t even be conscious. David Kittay, who I recently interviewed, entertained some of these questions with his students at Columbia. Can an artificial intelligence attain enlightenment?
Kristian Simsarian: I don’t know. What is enlightenment?
Scott Snibbe: That is a very good question. As someone who isn’t enlightened, I can’t completely answer it, but some of the definitions in the tradition I come from are: getting rid of your delusions forever, only being able to do good in the world, never feeling anger, craving, and disconnectedness, always feeling interconnected and compassionate, and having the wisdom to do the best action at any moment.
Kristian Simsarian: I mean, this is a hard question, because there’s part of me that just wants to punt entirely, and there’s part of me that’s really interested in it. But I am not an expert in consciousness. And there are different theories of consciousness. I don’t know the names of these, but one of them is that consciousness is grounded in the beings that are conscious, which probably includes the dolphins and things, and then collectively there becomes a collective consciousness. And I believe there’s an alternative theory where consciousness is something out there that you tune into.
Is consciousness fundamental to the universe?
Scott Snibbe: Yeah, it may be fundamental to the universe. There are even some very respected theorists and physicists who believe consciousness may be some fundamental component of reality, although I don’t think we have hard evidence for that yet.
Kristian Simsarian: And we can’t explain action at a distance and weird entanglement or whatever. All that stuff just blows your mind. And the deeper you go into physics, even the PhDs and professors get befuddled.
And it makes me wonder. I guess consciousness has always been really messed up. Humans really struggle. Like, suffering is the thing: there is suffering, and there’s a way out of it. These kinds of fundamental truths.
Where was I going with that? Just these divisions and this kind of vitriol that’s been coming up in politics. How does that affect consciousness? Because if consciousness is this thing that’s out there that we tune into, is that stuff like a computer virus that gets into consciousness and infects it?
Scott Snibbe: The Buddhist view would argue very much so. Whether or not you voted for any individual leader, you become affected by that consciousness that overlays, and it’s nothing mystical. It’s actually just about, you start acting like things you hear and see and witness and common standards of behavior.
Well then, maybe that comes back to what you brought up about Alan Turing and the Turing test. It might be possible that it doesn’t matter. Let me see if you believe this: could you make a computer that could trick you into thinking it was enlightened?
Kristian Simsarian: Totally.
Scott Snibbe: Well from the Buddhist view we’re already tricking ourselves that we are singular, self-sufficient, separate entities. We’re already tricking ourselves that we’re intelligent in this actually harmful way that makes us suffer, you know? So in a way we’re succeeding in a Turing Test that makes us suffer more than we should.
Kristian Simsarian: Yeah. I was talking about these fundamental truths, and I was thinking of two of the most fundamental truths that I get more proof of every day. One is that things are constantly changing. Like, you and I met a couple weeks ago, and the molecules we have in our bodies today are probably ninety-something percent different than they were two weeks ago. So we’re constantly changing, constantly evolving.
And the second one is that everything’s interconnected. And I feel like so much of our modern society, and especially our American reductive society, disavows both of these. Business, and to some extent science and medicine, pretend that you can isolate things: if we could just carve out this market, or if we could just isolate this tumor. They both ignore that this thing is constantly changing and also that it’s totally interconnected to everything else.
Scott Snibbe: Yeah. And what you’re saying suggests that we’re training our AIs in the wrong way, because we’re training AIs to make binary, harsh, dividing decisions. But maybe there’s a type of AI that is aware of change and aware of interdependence, trained on models very different from what we’re training today.
Kristian Simsarian: I like where you’re going. I mean, there is a possibility for machine learning to see systems that we can’t see and to see interconnectedness that we can’t. Given big amounts of training data, it will find interconnections. If we can figure out the right objective functions, the right thing we’re looking for, some goal, it will find new ways to get there.
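[Editor's note: the point about objective functions can be made concrete with a toy sketch. The "plans" and scores below are entirely hypothetical, invented for illustration; the idea is simply that the same search procedure selects different answers depending on the goal we hand it.]

```python
# Illustrative sketch: the same search finds different "solutions"
# depending on the objective function -- the goal shapes what the
# machine discovers. All data here is made up for illustration.

def search(objective, candidates):
    """Return the candidate that scores highest under the objective."""
    return max(candidates, key=objective)

# Hypothetical candidate plans, each with a cost and a score for how
# many interconnections it preserves.
plans = [
    {"name": "isolate", "cost": 1.0, "connections_preserved": 0.2},
    {"name": "balance", "cost": 2.0, "connections_preserved": 0.7},
    {"name": "integrate", "cost": 4.0, "connections_preserved": 0.95},
]

# Objective 1: minimize cost alone (a narrow, reductive goal).
cheapest = search(lambda p: -p["cost"], plans)

# Objective 2: also reward preserving interconnections.
holistic = search(lambda p: p["connections_preserved"] - 0.1 * p["cost"], plans)

print(cheapest["name"])  # the narrow objective picks "isolate"
print(holistic["name"])  # the broader objective picks "integrate"
```

Same data, same search, different objective, different answer: which is the sense in which choosing the objective function is choosing what the system will find.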
It’s our job to make AI compassionate
Scott Snibbe: Is there anything else you want to add to our discussion?
Kristian Simsarian: I’ve already said it, but I’m just going to say it again: don’t let any of this stuff frighten you away from the table. Most likely machine learning is going to be part of almost everyone’s job, part of your job. It’s going to come in. Most likely it’s going to replace expertise that feels like something hard-won. And there’s an opportunity to get involved.
Because the technology people can’t solve all the different domains. In computer science and in AI, “domain experts” was the term we used to use, and a domain expert is just, like, a real estate agent, a lawyer, a doctor, a specialist in insurance fraud detection, a government agent doing procurement or something. A domain expert is somebody who knows something about their domain that lay people don’t know.
And that’s a whole field that’s going to open up: a place for domain experts, professionals, to start sitting side by side, figuring out the systems and what problems should be solved. Because the domain experts, the professionals, know what problems should be solved. They know how they should be solved, and they know what a good solution looks like, none of which the computer scientist or the artificial intelligence and machine learning programmer knows. They can read about it in books, but that’s going to be outdated and probably incorrect.
Scott Snibbe: Yeah.
Kristian Simsarian: So get involved! Especially if you’re listening to this podcast, because you’re a compassionate person who might be thinking about your biases and at least open to the most important curious questions.
Scott Snibbe: Great. Thanks a lot, Kristian. It’s been a pleasure talking to you about compassion and AI today.
Kristian Simsarian: Oh, you’re welcome. It’s wonderful to be here talking about really important stuff.
Written and hosted by Scott Snibbe
Produced by Tara Anderson
Audio mastering by Christian Parry and Chris Boulton
Theme music by Bradley Parsons of Train Sound Studio