The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity w/ Byron Reese
March 13, 2020

This Anthro Life

Gigaom CEO, publisher, and author of "The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity" Byron Reese stops by virtually to chat with Adam and guest host Astrid Countee, to help us make sense of just what artificial intelligence is, what its promises and limits are, and what this means for the possibilities of conscious computing and smart robots. Byron breaks down the philosophies behind our ways of thinking about AI in a way that gives us new social tools for approaching the deep technological revolution we are undergoing in a more human, and even optimistic, manner.

 

Website: https://byronreese.com/

Twitter: @byronreese

Facebook: @byronreese

LinkedIn: https://www.linkedin.com/in/byronreese


Transcript

Byron Reese on AI and Robotics

Byron Reese: [00:02:51] Well, when I came out of university in the 90s, computers were kind of a thing. I moved to Silicon Valley and I got interested in the internet right away. And I got interested in technology with a capital T, like what it is and our relationship with it as a species.

And it's just, you know, we learned to multiply what we were able to do, and it's the whole reason we're here today. There was a time, they think 75,000 years ago, when we were down to 800 mating pairs of humans, and somehow we made it here. We made it here because we learned this trick of being able to multiply what we were able to do.

And a lot of times it started out where we built technology to replicate what our muscles and bodies did, and then all of a sudden there's this technology that purports to replicate what the human brain does. That just fascinated me philosophically. Because, you know, we're the top dogs on this planet not because we're the fastest or the strongest; we are because we're the smartest.

And so if there was some technology that could make us smarter, or that might be smarter than us, that seemed really important to me. And so I just, you know, followed that lead. And the interesting thing about intelligence, artificial or otherwise, is that there's no definition for it.

And that really intrigued me. And then when you look into it, there's no definition for life or death either. These are things that we somehow can't define, and yet we know they exist. And that kind of took me on, you know: Are we smart because we're conscious? Are we smart because we experience the universe?

We don't simply measure it. Like, I feel warmth; I don't just measure temperature. Is that what makes us smart? Could computers ever be conscious, and so forth? So that's how I came about it. Although I've been in technology professionally forever, I came to AI as an interest, not for any particular financial or professional reasons, but personal ones.

Adam: [00:05:08] Very cool. Yeah, I love that introduction, that there's this notion of a philosophical interest, because both Astrid and I are trained as anthropologists, and Astrid, who can speak for herself too, has worked in tech for a number of years as well, and I'm finding my way into it.

And I come from kind of the design angle, but it's so fascinating too, because I really appreciate this idea that there are these fundamental questions, of what it means to be alive, or what death means, right, or what it means to be human, that also don't have solid answers. Things like AI or conscious computing and tech further dissolve where those boundaries might even be in the first place. I really appreciated that framing in your book, The Fourth Age. One of the things that really struck me too is that this was kind of an introduction to AI and the different forms of AI, like narrow AI or AGI, artificial general intelligence, which we could kind of break down.

But what was interesting to me too is that you frame this book as these four ages, right? This is not, you know, the next stage of AI; it's more like humanity has changed over time across these four ages. How did that framing come into your head? You know, you kind of mentioned a bit about that.

You were thinking of these interesting questions of what it means to be human before, but what set you on the path of saying, all right, let's think about AI and computing and robotics as what we now call the fourth age? How did that timeline pop into your head?

Byron Reese: [00:06:31] If you take the point that 75,000 years ago we were down to 800 mating pairs, and somehow we've made it here, you ask, okay, how did we do that? Was it gradual? Did we just improve a little more and a little more and gradually make it here? Or is history a series of step functions, where something really big and profound happens and then we grow, grow, grow, grow, grow until something big and profound happens again? And so I asked myself that. Now, you can debate what the big and profound things are and how many there are; it's not dogma in any sense. I mean, to me, clearly the first one was language. The historian Will Durant said it's what made us human.

Interestingly, if you're an anthropologist, the notion of language is kind of tricky, because animals may have it. But what they don't have are the kinds of abstract languages we have. Or maybe what they don't have is storytelling. Maybe that's really what it is.

So we can tell stories, and it's a way to preserve information. It's a way to pass knowledge beyond the people who witnessed whatever it was. So whatever it is, I just thought, okay, language has to be this big pivotal moment. And then from there we went on 90,000 years, probably with language, but wandering around.

And then we got agriculture. And I actually don't think agriculture is the thing; agriculture gave us the city, because we settled down. And I don't actually think the city is the thing either. What the city gave us is the division of labor. And that means, of course, that you specialize and you specialize and I specialize, and together we can all be better off.

And so what that meant is you all of a sudden had prosperity and you had excess. You didn't have to have everybody spend all their time just surviving. You could all specialize, and you could create wealth in the modern sense. That's what I think of as the second one. And then there's what I think of as the third age.

And again, it's so arguable, but I think it's really interesting that two technologies came into the world at the same moment, coincidentally, by the way, and they forever changed us once again: writing and the wheel. And when you had those two technologies, you had everything you needed for nation states, because you could promulgate laws and, you know, move troops around and collect taxes and all the rest.

And so I like to think that those two things gave us the modern nation state, the world we live in. I mean, all of it. And so I thought of those as the three big watershed moments in the relationship between humans and technology. And then I wondered about the technologies that we are stumbling into, namely artificial intelligence and robotics: that notion that you can automate the human brain and automate the human body. If you can build mechanical versions of those two things, is that a watershed moment for the species?

And I think it is. I mean, you know, I'm not alone in thinking that, but it probably isn't universal either. But I believe you can think of our relationship with technology as outsourcing different things. With fire, we were able to outsource digestion: we predigested food by cooking it. With writing, you outsourced human memory: you could write something down and you didn't have to remember it. Well, if all of a sudden we can outsource human thought and human action with AI and robots, now that's very interesting. And that's what I wrote The Fourth Age about.

Adam: [00:10:41] That's well said, and it's really great to think about, too, because when you think about the arc of human history, or the multiple arcs, I think the question of outsourcing is actually spot on. It's a really great way to think about what we're seeing happen through these technological changes. And again, think of language as very much a social technology: we are able to outsource ideas, right, so they can be passed on. And I think that's one of the other compelling points about storytelling, that it's one of the pieces humanity uses to differentiate itself from other animals. But of course now I'm thinking, well, humans are animals, which I know is one of the questions in the book: what is it that makes us what we are? Is a human a machine, an animal, or something else? What makes ourselves? And so, actually, Astrid, I'd love to pitch a question to you to think about this too.

You know, as we think about these four ages and outsourcing things like thinking or cognition, how do you feel about that, coming to this from having worked in data for a while? How does this idea strike you, that outsourcing human cognition is one of the major leaps toward something like AI or computational thinking?

Astrid Countee: [00:12:03] Oh, I think it's huge. I mean, I think what Byron's saying is spot on, because I think we underestimate how much progress we've been able to make because of the fact that we could store things outside of ourselves, which was brought up in the discussion about the different ages. But the idea that we could outsource more than just our memory, that we could outsource the things that have meaning inside of that memory, which is a lot of what artificial intelligence aims to do, that's unprecedented, in a way where it's hard to predict how it's going to change things. Because when you look back at things that were pivotal, you can see the thread of how we got to where we currently are, but it's really hard in the moment to see how a thing will change as you move forward. Like, the idea of writing being able to create entire societies with laws seems like a huge leap from just being able to write something on a clay tablet and, you know, fire it and pass it around a group of four or five people, which is a lot of how it worked in the beginning. But with the idea that you could take something that's not only data, but data with contextual meaning, and then put it in something else and let it propagate on its own: it's really hard to predict what that could mean for where humans go in the future, and even what it means to be a human in the future.

Byron Reese: [00:13:30] Well said. I agree. I mean, you only have to look back at the internet: this was a technology, but all we did was connect computers and let them talk with a common protocol. That's it. There's no smarts to the internet. We just said, let's connect all the computers so they can talk to each other. And then what did that do to the world? What did that do, with all its unforeseen implications? And that is nothing compared to the question of what happens if you could make everybody smarter.

What if you could have, as you were just saying, a kind of human-level cognition, at light speed, in a million instantiations all at once? What does that do? It's humbling to contemplate.

Astrid Countee: [00:14:18] I think we have a little bit of a microcosm with social media because you can see how ideas spread and you can see how you can have cooperation among lots of different people.

And in that case, I'm thinking about how there have been some instances where surgeons will operate and they'll use social media, and then you can see their thought process and have other people at other medical schools contribute to that thought process. And it's more than a 10x effect. It's totally different. It's really hard to give a name to what that can do.

And then you see other ways in which social media can take a small idea, and it can grow and change and morph in a really small amount of time, which is very different from anything else we've had in history. And we are all still struggling with how to throw our arms around that, and what we do with it, and how we can be responsible with it.

And it's a really big challenge, but I think it's kind of like a nice little small version of what is happening with the progression of artificial intelligence. 

Adam: [00:15:30] That's a really interesting idea. Yeah, I hadn't thought about social media like that, but you're right: it's something that has been programmed by people, and the input in this case is social data, but the outcome is not quite predictable in terms of how the different pieces mix together.

The social media platform itself is really just the algorithm that the data goes through. But we can't predict the result, and that's the thing that's right on. So I want to think about this idea, also, of what AI is. I know some of our listeners have heard of it.

And in general, when people think about the idea of outsourcing cognition, I love what you said, Astrid: it's not just data, but data plus contextual meaning. And when we think about AI, for a lot of us our introduction to it generally comes through pop culture.

Or, you know, a very vague idea of Skynet and Terminator, or, you know, wanting my Grubhub recommendations to be a little more accurate and to stop giving me all these pizza options when I don't want pizza, or something. But yeah, Byron, if you could break it down a little: what is AI? How do we approach it?

Because you write in the book about these two ideas, narrow AI and artificial general intelligence, and what these two are: sort of frameworks, or maybe even stairsteps in the complexity of AI. Can you break down a little bit what AI is, and also what it is not?

Byron Reese: [00:16:57] Sure. And again, I'll start by saying there isn't a consensus definition, but I can say that it's an unfortunate term in that we use it to describe two very different things. It's like the word "pool." You know, that's both a game you play on a billiard table and it's a swimming pool, right?

And they have nothing at all to do with each other. And so if you're using them indiscriminately with someone who doesn't have any familiarity with either of them, it gets confusing, and AI is like that. So one thing people mean when they say AI is general intelligence: the AI you see, as you were saying, in science fiction. That's C-3PO, that's Commander Data. That's an AI that can do what a human can do. And when you hear people say they're afraid of it, that AI is an existential threat, Elon Musk, the late Stephen Hawking, Bill Gates, that's the technology they're afraid of. Nobody's afraid of the other meaning of AI, except to the extent that it may take jobs away; nobody's afraid it's going to go all Skynet on us.

So that's one thing, general intelligence, and the thing to know about it, which everyone agrees on, by the way, is that nobody knows how to build it. Nobody knows how to build general intelligence. And interestingly, shockingly few people are working on it. I mean, the number of institutions working on it is probably twenty, just to make up a number.

The other thing we mean by AI is narrow AI. That's the Grubhub stuff you were talking about. That's a computer program that can do one thing, one very specific thing and nothing more, and it's a simple technology. It basically says: let's take a bunch of data about the past and study it. Let's look for patterns in it and use them to make projections into the future.

That sure takes the wow factor out of it, but that's all it is. It's a simple idea, and we're better at it now because we have faster computers and more data and better toolkits to do it with, and a few other little things, but that's what it is. And there is probably no link between those two completely different technologies.
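
(To make that description concrete, here is a minimal sketch of the pattern Byron describes: study past data, find a pattern, project it forward. The scenario and numbers are invented for illustration; this is not code from the episode.)

```python
# Narrow AI in miniature: learn a pattern from past data and project it
# into the future. The delivery-order numbers here are made up.
from sklearn.linear_model import LinearRegression

hours = [[11], [12], [13], [17], [18], [19]]   # past: hour of day
orders = [40, 95, 70, 60, 120, 110]            # past: orders received

model = LinearRegression()
model.fit(hours, orders)          # "look for patterns in the past"

print(model.predict([[20]]))      # "make a projection into the future"
```

That single trick, fit a model to historical examples and extrapolate, is essentially all narrow AI is doing, whether the data is delivery orders, photos, or truck routes.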

So when you hear somebody say AI is going to take the jobs, or AI is an existential threat, or AI is here, or experts say AI is 500 years away, it all sounds very contradictory because we're talking about two very different things. 

Back to general intelligence, the one I said nobody knows how to build, but some people are afraid of. You might ask: well, why is it that we don't know how to build it, but everyone's so confident that we can build it? I mean, I have a podcast on AI, and I ask every one of my guests, over a hundred of them: do you believe we're going to build a general intelligence? And 95% of them literally say, of course. And isn't that interesting, that virtually all believe it, but nobody knows how to do it? And the reason is that they're acting under a single assumption: that people are machines. And if you're a machine, then your brain is a machine, and your mind, whatever that is, and consciousness are mechanistic. And if you're a machine, someday we'll build a mechanical you, and then two years later it will be twice as good, and two years after that, twice as good again.

And that's what they're afraid of. Interestingly, when I speak to general audiences and ask for a show of hands, who in the audience believes people are machines, very few hands go up. There's a big disconnect there. And so anyway, those are the two different things we mean. There's the one we don't know how to do, which may or may not be possible, but some people are afraid of. Other people think, you know, you'll make it on Monday, and on Tuesday you'll ask it to cure every disease, on Wednesday you'll ask it how to make unlimited clean energy, on Thursday you'll ask it something else, and it'll just answer those things.

There's that, and then there's this other kind of mundane bread-and-butter technology that 99% of all the effort goes into. That's narrow AI. I mean, that's where all the venture money goes; that's what all the companies are made to apply: how can I use this technology to better route trucks through cities, those kinds of problems? That's what we know how to do.

Adam: [00:21:52] That's actually really fascinating, the idea that 95% of the guests on your podcast, Voices in AI, tend to believe it's possible to build this, though they don't know how, while you said only about 15% of audience members, when you're talking to more general audiences, seem to think we're machines. Do you have any read on why that is? I mean, partially, obviously, I think it's the way they're approaching it: they think humans are fundamentally machines; they're taking a more scientific, mechanistic perspective on how nature works, I suppose. Can we say, then, that general audiences do not take that approach to what humans are?

Byron Reese: [00:22:34] Yes, I think so. When I ask these guests, do you believe people are machines, I don't just get an "I suppose." I get an almost scornful "Of course. What else would we be? Anything else requires magical thinking. Anything else is unscientific. Anything else is superstitious." I mean, they all have kind of the same construct, which is: look, if you take every neuron in your brain and replicate it in a computer, and set all the states correctly, wouldn't that be you?

All of a sudden there'd be you in the computer, and to them that seems like a very straightforward proposition. General audiences see it differently. And what's interesting is that one of the five people on my show who don't believe we can build general intelligence was Esther Dyson, and she said, well, no, because machines don't have souls.

And I think that's kind of what's at the core of it. When you ask most people, are you simply a bag of chemicals and electrical impulses, or are you something more, most people think they are something more, right or wrong. Most people believe that. Sometimes they attribute it to a soul, sometimes to a life force, sometimes to consciousness, sometimes to some high degree of strong emergence. There are all these different things you can attribute it to, but in the end, they're not reductionists: they don't believe that people are simply the sum of the individual cells in them.

Clearly there's something interesting going on, right? You're made up of billions of cells, none of which know you exist. And they go about their lives, you know, marrying, having kids, getting in trouble, all that kind of stuff, and they don't know anything about you. And yet there you are, and you're not simply the sum of them.

But in a real sense, you are as well. And I think it's that little core: what's going on? What makes you, you? Why does your brain have your sense of humor? Why does your brain have a sense of humor, but your heart doesn't? Where do ideas come from? All of that kind of stuff.

To most of my guests, that's all mechanism. And you know, most of my guests are used to breaking down complex things into simple parts, and if you put all those simple parts together, you get the complex thing. There's something called the Human Brain Project in Europe, which is explicitly trying to build, with billions of dollars in funding, something like a general intelligence, modeling it after human brains and minds.

So there is this notion that we are simply machines. And I don't mean that in any pejorative sense; if we are a machine, we are about the best one there is. But on that view, there's nothing about us that can't be explained with the laws of physics. That's, to me, the thing that bifurcates those two groups so much.

Astrid Countee: [00:26:08] Byron, I have a question about your guests who think that humans are machines: what do they think about twin studies, where you're looking at genetically identical twins and trying to understand their differences?

Byron Reese: [00:26:25] Well, I haven't asked them that. That's a great question. But, I mean, I would assume... well, it's really interesting.

There's a book by Pedro Domingos called The Master Algorithm, and the question is, can you build an algorithm you can just plug into the internet and it'll figure everything out? And he thinks you can. And he even argues that it could be very simple, because your DNA is only about 600 MB of data, and the amount that's different between you and a chimp is maybe 6 MB, something like that. And so somewhere in that difference, the reasoning goes, is the difference between us, and it's a big difference, right? And so maybe there's just something very simple in us, some switch. But it may not all be code, right? It may be that one of those twins is on the right side of the mom and the other one is on the left side, and that's a whole different life experience; for nine months they have all these very different things happening. So I don't know. I can't speculate on this or put words in their mouths. I will say that when I was writing The Fourth Age and I wrote that some people believe they are machines, my editor, who of course is a book editor, wrote in the margin of my book, "Come on, does anybody really believe that?" And so the idea is as alien to the people who don't hold it as the opposite is to the people who do. There's a huge gulf there. There's no middle ground: you either can make a human in a lab or you can't. There's no kind of, well, let's split the difference and say we are "machinie."

Adam: [00:28:39] Or humanoid, right? 

Astrid Countee: [00:28:40] Or there could be another construct that both machines and humans actually are mimicking, but we don't know what that construct is.

Like, if there was something above them. So we're thinking that from "machine" you can derive, you know, other things that we think of as machines, but also other animals, including humans. But there could be something that actually sits above both that we're not recognizing, something that machines and humans are both inheriting from.

Byron Reese: [00:29:11] Wow, yeah. And you know what's interesting, continuing with the idea that you're a trillion cells that live their lives, marrying and having problems and all of that, and they're not aware of you: the Gaia hypothesis puts forth the idea that all life on Earth forms in exactly the same way, that we're all supercells and there is a higher organism. And this isn't spiritualism; this is just simple emergence, that there is a higher awareness and we are its cells, living our lives oblivious to it. I got very interested in that because, when I was writing the book, in the end you want to ask the question: how would you know if computers were conscious? How would you know if a computer was alive? How would you know if a computer could feel pain? And so I started thinking, how would I know if a tree were conscious, or could feel? I share over half my DNA with a tree. I share none of it with a computer. So I'm super closely related to that tree. And so I got to looking around and trying to find out: how would I know if the internet were conscious? If it achieved an emergence, or was alive?

How would you know? You know, there are people who posit that the sun is conscious. The activity in the brain looks a lot like the activity of the sun in terms of what's going on. And it's interesting that children all over the world, when they draw a picture of the outdoors and they draw the sun, what do they always put on the sun?

Adam: [00:30:55] Smiley face. Right?

Byron Reese: [00:30:56] A smiley face, right. And so I really wanted to know: how would you know if those things were alive? Because someday a computer is going to say, "I'm conscious. I feel pain. I'm alive." Do you believe it? Or did it just figure out, from a programming standpoint, that it could reach a more desirable outcome by asserting it's alive, so you'll leave it on? And what will we do when it says that?

I'm of the belief, and you know, we're a little far out at this point, but as long as we're here we might as well look around: in this country, the United States, we did open-heart surgery on babies without anesthesia until the '90s, under the theory that they couldn't feel pain. Veterinarians until the mid-nineties were taught that animals couldn't feel pain, and they operated on animals accordingly. And you can kind of understand the logic. You know, if you poke an amoeba with a pin, it recoils and moves away, but you don't say that it feels pain. And so the theory was that animals were just doing things that looked like pain, but they didn't actually feel it. And you know, I'm not terribly proud of that, and I think maybe the lesson from it is: if there's something that looks like it feels pain, you give it the benefit of the doubt, even if you can't be sure. In the spirit of empathy, I think if something says "that hurts," like it or not, you're going to have to believe it. Because you sure don't want to create mechanical conscious beings and essentially enslave them. What will we have learned at that point?

Astrid Countee: [00:32:57] I feel like that's what people's real fear is.

Byron Reese: [00:33:03] What's that? 

Astrid Countee: [00:33:04] That we will create artificial intelligence that will just be another colonizer, conqueror entity and this time it'll just colonize and conquer all of humankind.

Byron Reese: [00:33:17] Of course, that is all predicated on the notion that we are machines and that we can build a machine with those capabilities. I am unconvinced of it, for three reasons. First, you have a brain that we don't understand how it works, and that's being generous. We don't know how our thoughts are stored or our memories encoded. And people say, well, there are a hundred billion neurons, that's why. And it's like, no: there's something called the nematode worm that has 302 neurons in its quote-unquote brain, and the OpenWorm project has been trying to model those 302 neurons for 20 years, to build something that behaves like a nematode. They're not even sure that it's possible. So we can't even figure out how a 302-neuron brain works. Even the basics of how it works.

Then, on top of that, you've got the mind. Your mind, again, is all this stuff your brain does that it seems like it shouldn't be able to do, you know? Like emotion. Does your toe have an emotion? No, but your brain does. You have this mind, and we don't know how that comes about.

And then finally we have consciousness. Again, we experience the world: we feel warmth, whereas a computer measures temperature. And not only do we not know how it is that we are conscious; it's called the last great scientific question that we don't know how to ask scientifically. We don't know what the answer would be; we don't know what the answer would even look like.

So, I don't have enough faith. We have these brains that we don't know how they work; they give rise to a mind that we don't know how it works; and that mind exhibits this property, consciousness, that we don't even have the capacity to understand. And yet we're going to build it? We're going to build it, and in three to five years we'll have it? I'm unconvinced. I would be one of the 5% of my guests who don't believe we're machines.

I don't know that we're not, but I have no reason to believe that we are. We sure don't seem like a machine. 

Adam: [00:35:50] That's fair. Yeah. Well, this is actually what I was thinking about when I was reading the book, too: with the idea of AGI, artificial general intelligence, I was actually thinking the A would make more sense as anthropomorphized intelligence.

And partially, I think, like you were saying, Astrid, one of the reasons we're afraid of the colonizer being is that we anthropomorphize what it would do. An AI could decide, in a very neutral sense, that it must get rid of threats to its existence, and that includes humans. But I love this breakdown of the brain, the mind, and consciousness, and that we don't really understand any of those three levels. So one way we understand this is by anthropomorphizing machines. But you have this flip side, too, where we "machinize" humans. I guess I'm actually also in that camp.

I don't think we're just machines. I don't believe we're just machines. But the funny thing is, I also understand that you do have to think about humans, at least the brain and the mind, if maybe not consciousness, as mechanistic devices so that they can somehow be replicated: so you can count your neurons, measure the connections, and then somehow encode those into electronic data signals.

I don't know. It's interesting, because, yeah...

Byron Reese: [00:37:12] It is a very interesting thing. There's very little that we've learned from human reasoning that we've applied to AI.

And I stand behind that. We sometimes name things, like neural nets, as "inspired by" how the brain works, and we call it artificial intelligence, but there really isn't any mapping. There is no insight generated from how humans learn that has been particularly useful in machine learning. Think about this: if you want to train an AI to tell the difference between cats and dogs, you give it a million photos of cats and a million photos of dogs. It slices them up into little two-by-two-pixel images, little three-by-three-pixel images, little four-by-four-pixel images. It instantiates them into all of these images, and some are labeled cat and some are labeled dog, and it goes and picks through your photo and tries to find those patterns, and it comes up with a score and says "this is a cat." Whereas you take a little kid, you show them a drawing of a cat done with a Crayola, and they can spot a cat.

Then here's the cool thing. Let's say they know about cats, and then one day they're out and they see one of those Manx cats, those cats without a tail. They'll say, "Look, there's a kitty without a tail," and nobody even told them there was such a thing as a cat without a tail. Every data point they ever had, cats had tails. Evidently that child retains enough "catness" that they say, oh, that's a cat.
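
(A toy version of the pipeline Byron describes may help here. Real systems use millions of images and deep networks; this deliberately tiny sketch, with invented 2x2 "photos," only shows the shape of the idea: the machine compares pixel patterns to labeled examples, which is exactly what the child is not doing.)

```python
# Toy "cat vs. dog" classifier: photos become pixel numbers, and a new
# photo is scored against labeled examples. All data here is invented.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([
    [0.9, 0.8, 0.9, 0.7],   # flattened 2x2 "cat" images (made up)
    [0.8, 0.9, 0.8, 0.9],
    [0.9, 0.9, 0.8, 0.8],
    [0.1, 0.2, 0.1, 0.3],   # flattened 2x2 "dog" images (made up)
    [0.2, 0.1, 0.2, 0.1],
    [0.1, 0.1, 0.2, 0.2],
])
y = ["cat", "cat", "cat", "dog", "dog", "dog"]

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)

# The model can only measure similarity to what it has already seen; a
# tailless Manx, a Crayola drawing, or a half-hidden alien is outside
# its world unless something like it was in the training data.
print(clf.predict([[0.85, 0.9, 0.9, 0.8]]))   # -> ['cat']
```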

But you can train a person with a sample size of one. I could draw an alien with a Crayola, and then I could give you photos and say, find that alien. You could say, it's hiding behind the tree here, or it's underwater here, or, oh, I can only see its foot here. A computer can't do anything like that. And that is the great mystery of what we do; there's some kind of transfer learning going on. But it's even more than that, because imagine two fish, two trout of the same weight, and one is swimming in a river right now and the other one is in a jar of formaldehyde.

If I were going to rapid-fire questions at you, I would say: are the two fish the same or different in the following ways? Are they the same color? You would say no. Do they smell the same? No. Are they the same weight? Yes. Are they the same temperature? No. And you could just do that without even thinking. You've never seen that setup before.

You've probably never handled a fish in formaldehyde, and you may never have handled a trout. And yet you know which attributes persist and which ones don't. And we have this view of the whole world that's like that, and we seamlessly move that stuff from place to place, and it's a huge mystery, and computers can do nothing like that. Nothing like it.

So it's almost marketing, this idea that somehow we're building computers, we're building AI, the same way the mind works.

Adam: [00:40:50] Yeah. I think saying it's marketing is a really great way to think about that, too. Because one of the more common examples your average person might come across, if they type "AI" into Google, is that AI beat a chess master, or AI learned how to play a video game, a very simple Atari game: after losing twice, it then wins every single time. And from the idea of it learning how to do a set of tasks, especially in a video game, which is already a program, we then extrapolate: oh, if it can learn how to beat a chess master in 20 minutes, what else can't it do? But the idea of human learning as a great mystery, I think, is really compelling, and you're totally right. You can teach a child with a Crayola cat, and they can see cats everywhere, right, and understand whether something is or is not a cat.

And if there are two animals lying on top of each other, they can point and say, that's a cat, but that's not a cat; oh, it's a dog. That's a dog. Okay, right. There's this contextual knowledge that AI computers cannot match. And that's something interesting, too, even thinking about...

I think you wrote at one point about the sprinkler system, right, as one of the good examples: it can detect a change in moisture and therefore turn on the sprinkler. And that is AI, but that's it. It detects one thing, and then it can do one thing: turn water on or off. And it's interesting how easily in our heads, and maybe this is actually the point, right, we have a sample size of one: a sprinkler can turn on and off because a sensor changes. But then, because of our human learning, we say, oh, well, with my sample size of one, maybe this thing is AI, maybe that thing is AI.

And then we extrapolate from there about what we think AI can do, not at all helped, or maybe hindered, by The Matrix and Terminator and Star Wars and other movies, right?

Byron Reese: [00:42:46] Movies are... look, I like to see Will Smith battle the robots in I, Robot; it gets my $11. I see Ex Machina and I'm like, oh my gosh, that's cool. I watch Westworld and I'm like, oh yeah, man. But all the robots are played by people in most movies. That aside, what happens when you see enough of those movies is something called reasoning from fictional evidence. I do it. You reason from fictional evidence: you start to say, yeah, that looks familiar, I've seen that before. It's like, yeah, you saw it in a movie last month. It seems to look like evidence when it isn't.

You mentioned games. They're a really interesting case, and the reason AIs do so well at them comes back to what we were just talking about: you don't need transfer learning. Games have a finite number of rules; it may be just three or four. And they have a finite-size board, and they have clear winners and losers. You get points, you lose points, you move ahead, you fall behind. And that is the kind of thing where you can just put two AIs together, run a battle, and they can learn how to play and all that, but there is actually little applicability to the real world. I'm fascinated by the whole AlphaGo thing. I followed that, and I think there are a lot of really interesting things to come out of it, a lot of learning to come out of it. You know, move 37 in game two.

It had to be a creative move; a human would never have played it. I love all that stuff. But the point is, I think they used $25 million worth of hardware and a whole team of people to make that thing work, and that's just to solve a game. It's not an easy game, relatively speaking, but the rules are simple.

And so trying to take that and say, can we build an AI that could write Hamilton or Harry Potter, or create graffiti like Banksy? That's just a whole different thing. Then, in succession: I think the Turing test is often maligned, but I think it's really interesting. If you can chat with a computer and not know that you're talking to a computer, not be able to tell, I think that's very interesting. Up until recently, I never found one that could answer my first question: which is bigger, a nickel or the sun? Because, what's a nickel? Metal. Oh, it's also a coin. It's round. But the sun is also round. Okay, those are two round things; they're probably the same size. That's where we're at. And, you know, another interesting thing with the technology as we know it now: I have two home assistant devices on my desk.

I can't say their names, or they'll start talking. One's made by Amazon, one by Google. And I did an article that featured 10 questions they gave different answers to. These are questions like: how many hours are in a year? Who designed the American flag? Ten questions, and they gave me different answers.

Can you imagine why? Why would they have different answers on the number of hours in a year?

Adam: [00:46:52] And was one of them wrong and one of them right?

Byron Reese: [00:46:56] Well, that's the challenge: they were both right, and they were both wrong. One of them gave me the number of hours in 365 days. One of them gave me the number of hours in 365.24 days. So it's a calendar year versus a solar year. Likewise with the American flag: one said Betsy Ross, and one said Robert Heft. And if you don't know who he is, I didn't either: he's the guy who figured out the 50-star configuration. So the questions are inherently ambiguous. Do you mean the original flag or the current flag? The calendar year or the solar year? The AI can't ask, because it doesn't understand the input.
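
(For the curious, the arithmetic behind the two "correct" answers:)

```python
# Hours in a year depends on which "year" you mean.
print(365 * 24)       # calendar year: 8760 hours
print(365.24 * 24)    # solar (tropical) year: ~8765.8 hours
```

Both assistants answered a reasonable version of the question; neither could ask which version was meant.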

Astrid Countee: [00:47:40] So it doesn't even have the context to kind of understand what you really mean when you ask the question. 

Byron Reese: [00:47:46] Correct, correct. But when it answers so convincingly, it looks like it does. And we don't have a separate language to track what it can and can't do; we have to use human language. We have to say the computer "sees" too many wrong password entries and "figures out" that there's a security breach, so it "tries" to shut down. The computer doesn't see anything. It doesn't know anything. It doesn't have any agency of its own. But it's too burdensome to say, "when the sensor detects such-and-such, the program interprets it as such-and-such." It's easier to just say, oh, the robot saw you. It doesn't see me.

Astrid Countee: [00:48:47] That is hard to avoid. I keep thinking about...

Byron Reese: [00:48:57] It sounds like I'm down on the technology, but hey, I wrote books about it. I'm not down on it at all. Anyway, Astrid, I interrupted you.

Astrid Countee: [00:49:00] No, I was just going to say, I keep thinking about, when you were talking about machines and humans: are humans machines?

It seems like we, and by "we" I mean the general public, confuse ourselves about what artificial intelligence really is, because we keep changing the definition of what it's actually trying to be. So, to your point, computers can't see; that's a human thing. But if humans are machines, then either machines see like we do, or they don't see, or we are not really machines but keep trying to be like machines.

And so that's why, if an AI can beat someone at a game, we start to get worried, because we think: well, if a human could work that quickly, if they could be that accurate, what else could a human do? But we're not putting it in the context of: well, if other machines do that, then what else could that machine do?

Because we still separate them. So it seems like we pick and choose when we're going to ascribe it a certain status, based on what we think it's supposed to be doing, but we're mixing and matching what are human traits that aren't supposed to be mechanized versus what a machine trait really is.

And we just keep mixing it up. 

Byron Reese: [00:50:24] Right, fair enough. But I would say, on the other side of that coin, everybody can agree we're conscious, right? Humans are conscious, and most people would say machines don't yet have that. It's unlikely your iPhone experiences the world and has good days and bad days. So at least at this moment in time, my webcam seeing something and me seeing something really are different experiences. You all know the story of the Chinese room, the Chinese room problem?

Adam: [00:51:11] No, tell us about it.

Byron Reese: [00:51:12] Let me tell this one. This comes from a professor out of Berkeley, and it's a thought experiment. It goes like this: there's a room, a special room, that's full of hundreds of thousands of very special books, which we'll come back to in a second.

And in the room is a person, we'll call them the librarian, who doesn't speak Chinese. Very important: they don't know anything about Chinese. Outside the room are a bunch of native Chinese speakers, and they take index cards, write questions in Chinese on these cards, and slide them under the door. The librarian picks up a card and looks at the first symbol.

They go and find the book that has that symbol on the spine, pull that book down, and look up the second symbol, and that directs the librarian to a third book, which directs them to a fourth. And when they finally get to the last symbol, the book says, "copy this down." So they get an index card and copy down the symbols, which they don't understand, and slide it back under the door. The Chinese speaker picks it up and looks at it, and it's just brilliant. It's perfect Chinese; it's witty, it's poetic. And so the question is: does the librarian understand Chinese? And you can see both sides of it. One side would say: you corresponded, you conversed in Chinese; you just can't say the librarian doesn't understand. But other people would say: no, the librarian doesn't even know if they're talking about color or coffee.

They're just copying symbols down. And you can see the analogy, which is that that's all a computer does. It runs a program; it commits things to different locations in memory, pulls different things out, and outputs something it may or may not understand. And note that the room passes the Turing test.

So you have to ask: if you were ever in front of a computer having a conversation with a chatbot, and you couldn't even tell it was a chatbot, and somebody said, does the chatbot understand, what would you say? Would you say, well, no, the chatbot doesn't understand anything, it can only mimic? Or would you say, look, mimicking that well is the same as understanding? You know.
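
(The thought experiment translates almost directly into code. A minimal sketch, with a couple of invented rule-book entries: the program maps input symbols to output symbols by pure lookup, and nowhere in it is there anything you could point to as understanding.)

```python
# A Chinese room in miniature: symbol in, symbol out, by lookup alone.
# The entries are invented placeholders; a convincing room would need
# vastly more rules, but no added rule would add understanding.
RULE_BOOKS = {
    "你好吗？": "我很好，谢谢！",        # "How are you?" -> "I'm fine, thanks!"
    "天气怎么样？": "今天阳光明媚。",    # "How's the weather?" -> "Sunny today."
}

def librarian(card: str) -> str:
    """Follow the books: look the symbols up and copy the answer down."""
    return RULE_BOOKS.get(card, "请再说一遍。")   # "Please say that again."

print(librarian("你好吗？"))
```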

Adam: [00:54:04] Yeah.

Byron Reese: [00:54:05] Anyway, yes. I'll let that stand. 

Adam: [00:54:13] I liked that too, actually, because it makes me want to say, in this case, the chatbot can recognize what I'm saying, but I feel like there's some difference between recognition and understanding. That sounds, you know, semantically like splitting hairs. But that's what's interesting about the Chinese room example: on one level, the argument gets very small, in terms of whether this thing understands or not. But because the argument gets so small, it's easy to see both sides. We can say, of course it understands, or of course it does not, and yet somehow somewhere in the middle feels okay too, right?

It's like, I don't really think it understands, but it sure seems like it does. And maybe that's operative, I guess, for a lot of people. They're just like, okay, well, it seems like it's doing something.

Astrid Countee: [00:55:09] Well, it seems like you're asking, though, maybe, the question: is transferring data successfully enough?

Adam: [00:55:16] That's interesting.

Astrid Countee: [00:55:19] And in some situations, I think that probably is more than enough. And then, I guess, in other situations, maybe it isn't enough. Like, if you tell somebody "I love you," and they compute what that means and they understand what you're saying, is that enough? Or do you want them to have some other response, one that has something to do with a deeper level of meaning?

Byron Reese: [00:55:51] Yeah, I mean, you can see the challenge with it. I think you're both hitting on it: language is difficult to grapple with at each of these distinctions. We didn't build the English language, and all the words we use to describe each other, with the intention of describing these machines. We just didn't have anything else.

So, Astrid, to your point, it all gets kind of muddied here. We go back and forth: so he thinks he's people, but does he really think he's people? Or does he really, actually think he's people? No, he doesn't even know what that means. Or is that just your view?

Astrid Countee: [00:56:33] It doesn't matter if he knows what it means.

Does it matter if my understanding and the dog's understanding are the same?

Byron Reese: [00:56:41] Yeah, yeah. And I never want these conversations to be like late-night college coffee houses. They're all meaningful, because they shape what we will and will not build. I really think that, again, if we aren't machines, then no machine will ever be able to do what we do, no machine will be conscious, no machine will surpass us in ability, and humanity will go off in one direction. And if we are machines, then we will inevitably build general intelligence, and it will inevitably be better. Some people take that so far as to believe in something called superintelligence. If general intelligence is hypothetical, then superintelligence is even more so. But the logic is: most people's IQ is between 80 and 120, and you could probably imagine talking to a person who had an IQ of 200; you could carry on a conversation with them and it would be fine. But suppose the AGI's IQ were 2 million. What is that? And so that's the real fear: that we become unrecognizable to it. Somebody said, and this isn't my quote: it doesn't love you, it doesn't hate you, you're just made of atoms it could use for something else.

Adam: [00:58:41] That's us, right. Maybe as a wrap-up question, then: you end the book on an optimistic note, right? That there are a lot of good things that can come out of, you know, not AGI but AI, as computers just get smarter, and perhaps AGI too. I guess I have kind of a twofold question.

One is: what are some of the things you're most hopeful humanity will be able to accomplish with advances in tech over the next century or two, or whatever the time frame is? And then also, maybe alongside that: is there anything we haven't covered so far that you feel is important for a general audience to know about AI or smart computing that they probably don't? Which would be most things, I guess.

Byron Reese: [00:59:26] Well, we can talk about jobs, and the fact that what technology does is increase human productivity, and that is always a good thing. If you don't think it's a good thing, then maybe you should lobby your congressperson to pass a law that we all have to work with one arm tied behind our backs, because that would lower productivity. And you would create a bunch of jobs, by the way, but they would hardly pay anything.

You'd create jobs because you'd need twice as many people; they wouldn't pay because everyone's productivity would go down. So, conversely, what if you could increase everyone's productivity, or at least make people smarter? It makes you more effective, smarter; you're as smart as that device you're holding. That's always a good thing. If you don't think so, you might as well say, boy, wouldn't it be great if we all went to bed and woke up with ten fewer IQ points tomorrow; that would be a better world. You know, knowledge is power. You will know the truth, and the truth will make you free. It's empowering. What happens is you get this false construct where people say: well, look, I get that these technologies are really good at making new jobs up at the top, very high-skill jobs, like in genetics.

But what they do is destroy all the jobs at the bottom, like order-taker at a fast-food place. And then you always see this question: do you really think that order-taker has the skills necessary to be a geneticist? And at first you say, well, I guess they don't. Oh, they're really out of luck, aren't they?

Their job's gone, and they don't have the skills for the new job. Except that isn't how it ever works. What happens is that a college biology professor becomes a geneticist, a high school biology teacher gets the college job, then the high school hires the substitute teacher full time, and so on, all the way down the line.

The question isn't, can the people whose jobs are destroyed by technology do the new ones? The question is: can everybody do a job a little bit harder than the job they have today? And I think the answer is yes. In this country, the United States, unemployment has never been over 10%, other than during the Depression. And yet I think we've destroyed about half the jobs every 50 years.

So how is it that we've always had full employment, and yet we're destroying half the jobs every 50 years, and we have rising wages? It's because technology destroys jobs at the bottom, advances jobs at the top, and everybody shifts up a notch.

So there's no reason to fear what these technologies are going to do. They increase your productivity, they make you smarter, and that is always good. With regard to the future, I am a techno-optimist; I wear that badge proudly. As I said at the beginning of this podcast, when there were only 800 mating pairs of us, our future was bleak.

But look what we learned: this trick of multiplying what we're able to do. And the history of the human species is a history of scarcity. There's never been enough of the good stuff: never enough food for everybody, never enough medicine, never enough education, never enough leisure. And so some people got it and some people didn't.

And that's the story. But what technology does is overcome scarcity. I don't work harder than my great-grandparents worked, but I have a much more lavish life than they ever did, because an hour of my labor just does so much more. We went from needing 90% of our people to make our food to 3%, and that's the trajectory we're on.

You know, and I'll say this as I'm wrapping up here: your body runs on about a hundred watts of power, like a bright lightbulb. And if you were dropped on a desert island, you would feel like, wow, I only have a hundred watts to work with here. But if you're in the United States, you've got your hundred watts plus about 10,000 watts of electricity.

If you divide the energy consumption of the US by its roughly 330 million people, you've got about 10,000 watts working for you right now. So you have a hundred times your own ability, just because of all this stuff working for you. And that's what our future is going to be like: technology is going to enable us to do more and more.
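
(A back-of-envelope check of that figure, using rough era-typical numbers assumed here for illustration: US primary energy use of about 100 quadrillion BTU per year and a population of about 330 million.)

```python
# Rough per-person power available in the US. The input figures are
# approximate assumptions, not numbers from the episode.
BTU_PER_YEAR = 100e15            # ~US annual primary energy use, in BTU
JOULES_PER_BTU = 1055
SECONDS_PER_YEAR = 365.24 * 24 * 3600
POPULATION = 330e6

watts_per_person = BTU_PER_YEAR * JOULES_PER_BTU / SECONDS_PER_YEAR / POPULATION
print(round(watts_per_person))   # ~10,000 watts per person
```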

It creates more abundance. We're going to be the generation that ushers in this utopia, not because we're better than anybody else or smarter; we just happened to be born at this moment when our technology is growing fast enough. And I believe that future, absent a stray comet taking us out, is not only likely but inevitable.

Adam: [01:04:31] Right on. And I think it carries through the entire argument of The Fourth Age, and from how we've been talking it makes sense: this feels like a trajectory. And I appreciate the idea of thinking about agriculture, because that's one of the previous ages where we saw a massive shift in humanity.

Thinking about that, where we've gone from 90% of people working in agricultural production to now 2-3%, right? That does show you how the jobs shifted: people move up, over, out, and around, or in other directions. So yeah, I think that's right on.

That's super interesting. I want to be respectful of your time; we're a little over an hour, so I don't want to keep stealing your wisdom. But thank you so much for sharing this conversation with us. It's been a lot of fun, and fascinating, to dive into robots, AI, and consciousness, and where we might be going.

Byron Reese: [01:05:20] Well, I would love to come back. We have more than an hour more to talk about. Anytime you want me back, I'm happy to join.

Adam: [01:05:34] Awesome. We'd love to have you.


Byron Reese

Entrepreneur, Futurist, Author, Speaker

BYRON REESE is an Austin-based entrepreneur with a quarter-century of experience building and running technology companies. He is a recognized authority on AI and holds a number of technology patents. In addition, he is a futurist with a strong conviction that technology will help bring about a new golden age of humanity. He gives talks around the world about how technology is changing work, education, and culture. He is the author of four books on technology; his most recent was described by The New York Times as "entertaining and engaging."