AI for the Rest of Us

What Is AI, Anyway?

Episode Summary

What do we even mean when we say “artificial intelligence”? And how do we make sure it’s safe and useful? Here, with all the answers, is AI and robotics expert Peter Stone.

Episode Notes

For our first episode, we’re starting with the big picture. What is (or isn’t) “artificial intelligence”? How can we be sure AI is safe and beneficial for everyone? And what is the best way of thinking about working with AI right now, no matter how we use it?

Here with all the answers is Peter Stone. He’s a professor of computer science at UT Austin, director of Texas Robotics, the executive director of Sony AI America and a key member of the 100 Year Study on AI. He’s worked for many years on applications of AI in robotics: for example, soccer-playing robots, self-driving cars and home helper robots. He’s also part of UT Austin’s Good Systems initiative, which is focused on the ethics of AI.

Dig Deeper

An open letter signed by tech leaders, researchers proposes delaying AI development, NPR (interview with Peter Stone)

AI’s Inflection Point, Texas Scientist (an overview of AI-related developments at UT Austin)

Experts Forecast the Changes Artificial Intelligence Could Bring by 2030 (about the first AI100 study, which Peter Stone chaired)

Computing Machinery and Intelligence (Alan Turing’s 1950 article describing the Imitation Game, a test of whether a machine can exhibit behavior indistinguishable from a human’s)

Good Systems (UT Austin’s grand challenge focused on designing AI systems that benefit society)

Year of AI – News & Resources (news from an initiative showcasing UT Austin’s commitment to developing innovations and growing leaders to navigate the ever-evolving landscape brought about by AI)

Episode Credits

Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT’s Digital Writing & Research Lab.

Executive producers are Christine Sinatra and Dan Oppenheimer. 

Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio.

Cover image for this episode generated with Adobe Firefly, a generative AI tool.

Episode Transcription

Casey Boyle: Today on AI for the Rest of Us:

Peter Stone: Today's elementary school kids are going to be learning how to write, and they're going to be learning how to use generative AI to help them write. We need to be teaching them from the beginning: What can you expect from these tools? How do you use them? In the same way we teach kids how to appropriately use calculators when they're doing math.

Marc Airhart: Hi Casey.

CB: Hey Marc. For this first episode, we're going to start with the big picture. What is or isn't artificial intelligence? How can we be sure AI is safe and beneficial for everyone? And what is the best way of thinking about working with AI right now, no matter how we use it?

MA: Joining us today is Peter Stone. He's a professor of computer science at UT Austin, director of Texas Robotics, the executive director of Sony AI America, and a key member of the 100 Year Study on AI. He's worked for many years on applications of AI in robotics: we're talking things like soccer-playing robots, self-driving cars and home helper robots. He's also part of UT Austin's Good Systems initiative, which is focused on the ethics of AI. So he's really in a great position to help us kick off this series.

MA: So I thought maybe we'd start this conversation off by talking about what AI is. You know, I think a lot of people out there have kind of a nebulous, "you know it when you see it" definition. And I'm curious, as a computer scientist and someone who studies this: What is AI? And what is AI not?

PS: Yeah, that's a great question. And it's difficult to define, just because we don't actually have a clear-cut definition even for the word "intelligence." So one thing that artificial intelligence is, is a field of study. I've been involved in it for about 30 years. And I got into it out of curiosity about what I consider, and some of my colleagues consider, to be one of the great scientific questions of our time: What is the nature of intelligence? That question can be answered by examining the human brain, which is an existence proof of intelligence, and that's what neuroscientists and psychologists do. But it's not necessarily the case that the human brain is the only way to produce intelligence. And so artificial intelligence is about trying to get computers to exhibit behaviors that we would consider intelligent if a person did them. It's a scientific discipline: studying how you can get computers to exhibit intelligent behaviors, and the algorithms behind that. My favorite definition is one that we wrote in the first study panel report of the 100 Year Study on Artificial Intelligence. That definition essentially said that artificial intelligence is a collection of technologies that are inspired by, but operate quite differently from, the way people do when they use their brains, bodies and nervous systems. So an important part of that is that it's a collection of technologies. It's not one thing. There's not just some single axis of progress in artificial intelligence where, as we make computers smarter, we can sprinkle that intelligence on any application. That's actually one of the big myths of artificial intelligence: People tend to say, "Oh, there's a computer that can beat the best chess player in the world, or the best Go player in the world. That must mean it's smarter than me. And if it's smarter than me, it must be able to do everything I can do." But that's not true. We have computers that are better than the world champion chess player, but we still don't have robots that can fold your laundry or put your dishes away in a reliable way. There are different types of intelligence. So AI is a collection of technologies, and it's been around: the field has been around for some 75 years now, and the term was coined back in the 1950s. There's been a lot of progress. So it's not just in the last year and a half that artificial intelligence has been invented or awakened. It's been around; there's been tons of progress; there are algorithms that have been developed over the years. It's a very exciting time for it now, but there's already artificial intelligence in the recommender systems that you use, and even in your appliances, and everywhere. These technologies can help with healthcare, they can help with reducing traffic congestion, with public security; there are all kinds of things. And then there are realistic risks as well. So that's part of the moment we're at right now, as people are wrestling with both the benefits and the risks.

CB: It seems, though, like new applications are AI, and then when they're no longer new, they're no longer AI, right? They're just dumb actions, or they're automated, or whatever. Is that something you've seen happen? And should we still be paying attention to AI that's no longer at the forefront?

PS: Yeah, that's sometimes called the AI paradox. The way I often put it is: when computers can't do it yet, it's artificial intelligence. Because that's one of the definitions of AI, trying to get computers to do the things they can't do yet. Once computers can do them, then it's engineering. An example I often use: when I was a Ph.D. student, it was an active area of study in artificial intelligence to get computers to listen to a person reciting their credit card number or telephone number and translate that into the digits the person just said. People were using hidden Markov models and publishing papers on how you could do this. And now that's routine. You call your bank, and you just say the numbers and expect them to be understood. Many people don't even realize it, or they'd say they don't even consider that AI anymore. And I think that's just the nature of the field; it's always going to be that way. So it might be that in some number of years, the people who are third graders right now, using ChatGPT and large language models, will say when they're teenagers, "Oh, that's not artificial intelligence. That's just a large language model. I've had that since I was a kid." AI is the thing computers can't do yet.

CB: For those of us who are not in computer science, is there something we should know about AI that you all know or take for granted? Something that someone who's not involved in the development of AI or automation should know?

PS: Yeah, I guess the first lesson is just that it's not magic. It's an algorithm. Somebody has programmed a computer to do something, and it's exhibiting intelligent behavior. But it isn't magic. It's a computer executing code. And it's tempting, when you look at something exhibiting behavior and you don't know how it's working, to think it's magic. If you don't know how a car runs, the car looks like magic. If you don't know how television works, it looks like magic. Artificial intelligence is just like these other technologies. There's solid engineering behind it. There are algorithms that we understand. And I think that's important to keep in mind.

CB: What are some of the ways that AI is already integrated in our daily lives, perhaps without us even realizing it? 

PS: Yeah, there are many. I mentioned recommender systems: you're on Netflix, and you've watched a few movies, and it recommends new movies for you. That grew out of the field of artificial intelligence. The autocompletion on your computer or your cell phone comes from artificial intelligence. Any of the semi-autonomous functions in your car, cruise control kinds of things, but intelligent cruise control that senses the distance to the car in front of you: that's artificial intelligence. You can go on forever. There are so many things that seemed like magic at some point, that computers couldn't do 30 years ago, and they can do them now and people take them for granted. Everybody listening to this podcast has already interacted with artificial intelligence at least eight to 10 times today, if not more.

CB: It reminds me of a quote from someone in my own field. Katherine Hayles talks about the cognitive capacity of humans and how it really doesn't differ that much from humans 15, 20, 30 years ago. Her quote is something roughly along the lines of: humans haven't gotten smarter; our rooms have gotten smarter. So there's the ability to take off the cognitive load of having to get food, having to stay cool, and so on. The more we automate, the more we make our environment or our rooms intelligent, the more we might be able to focus our attention in different ways, not even just offload it. Any thoughts on the ways artificial intelligence might play with what we might call natural intelligence?

PS: Absolutely. I mean, there's a lot of handwringing right now about computers, or robots, or artificial intelligence technologies replacing people or replacing jobs. But I think the more likely story, and the story we're striving for within the field, is not human replacement but human augmentation: making people more capable, making life more enjoyable, enhancing your entertainment or enhancing your productivity, allowing you to do the things that you're best at and not the things that just consume time. And that, I think, is a big part of the story of artificial intelligence. It's making us as individuals more productive; it's making us as a society more productive. So think of it as an augmenting technology.

MA: You're kind of presenting a vision of the future that's very positive, right, that AI could actually take over tasks that we don't want to do. But people have also voiced a lot of concern about, like you said, maybe taking away jobs, increasing income inequality, or causing unfairness in the important decisions that are made in criminal justice or in hiring, things like that. How do you ensure that we get to a future where AI is actually doing the good things that we want it to do and isn't actually increasing harm?

PS: Yeah, well, first of all, AI is not doing anything by itself. It'll be AI technologies, and people who are using AI technologies, that are doing things that are either beneficial or harmful. It's the same story as any technology. Automobiles are fantastic. They help us move through the world faster, they increase our productivity, and there are all kinds of things we can do now that we couldn't do before the invention of automobiles. And there are 40,000 people a year who die in accidents caused by human error or malfunctions in cars. And we've paved over roads, and there are emissions. There are all kinds of benefits and risks, and that's true of every technology. How do we as a society tilt any technology's effects toward the benefits and away from the harms? It's through policy, through regulations, through industry standards and through education. And then, going back to taking away jobs: again, every technology changes the jobs that people are suited for. There used to be a lot more people employed washing dishes before dishwashers, right? Those jobs disappeared. And yet people found other things to do, because new jobs were created, including in the factories where you make dishwashers. So it's not a unique-in-human-history moment in terms of new technological development, and there are tried-and-true methods, not for ensuring, because you can never ensure that there won't be harms, and in fact there probably always will be harms, but for trying to minimize the harms and increase the benefits.

MA: When ChatGPT came out and kind of burst onto the scene, I think for a lot of people who are not in computer science, it sort of felt like, oh wow, the future is here right now; AI has come of age. Even though, you know, it's built on many years of work on artificial intelligence, it kind of felt like AI had arrived, in some sense, because it was so easy for people to interact with this sort of text-based chatbot. And then suddenly image generation was easy, too; you could create an image of cats playing chess. But I wonder whether that led to a lot of hype about what AI can do in this moment. I just wonder if it's been kind of overhyped, if that hype is kind of waning, and what effect that has on the way non-computer scientists think about AI?

PS: Yeah, this is a pattern in artificial intelligence. This isn't the first time the story has been that artificial intelligence has arrived. It happened when Deep Blue beat Garry Kasparov; it happened when AlphaGo beat Lee Sedol; it happened when Watson beat Ken Jennings at Jeopardy. There have always been these stories that appear to the general public like there was a huge breakthrough and it's going to change everything. And every single time, it's not been a huge breakthrough; it's been a huge landmark, but a landmark that was achieved through years of incremental research. You just got to the moment where you tipped over the balance, and now you can do better than people are doing, or you can do something that you weren't able to do before. And that's true for ChatGPT as well. These large language models, I mean, it's version 3.5 of GPT. There was a GPT, and a GPT-2, and a GPT-3, and those were all built on neural network technologies that had been doing image recognition. So it wasn't out of nowhere. But it seems out of nowhere, and that does drive a lot of hype. The hype is overblown. Yes, it seems to people like there's been this huge leap in progress, and all of a sudden computers can do things they couldn't do a year ago. And if you continue progress at that rate, then it'll only be a matter of two or three years before computers do everything better than people. But of course, now people are realizing that these large language models have some limits. And don't get me wrong, they're fantastic. The kinds of things they can do are very surprising; I wouldn't have predicted five years ago that we would be at this point. And yet there are also things that they're not close to being able to do, and they're not better than people at everything. They're not going to take away all our jobs. But that's what it felt like, I think, and it still may feel like that to some people.

MA: Like, I went through that same cycle of getting really excited about it and feeling like this is something I need to get on top of, and now feeling kind of disappointed, and almost kind of, not lied to, but, you know, feeling that it was overhyped and ...

PS: I mean, from my perspective, it is changing a lot of things; the world won't go back to the way it was before. In the same way that, well, I'm of an age where I remember when the first microwaves came, and that changed the way I interacted with food. And I remember when the first smartphones arrived, or even cordless phones [or even the internet, yeah] or social media. There are all kinds of technologies that become permanent parts of our daily experience, and they do change the way we do our jobs, but they just become part of life; we can't even imagine life anymore without them, right? I used to have to get a paper map if I was going on a trip, and I'd fold it and unfold it and try to figure out where I was on the map. You don't do that anymore, right? And so, yes, third graders, like I say, elementary school kids, are growing up in a world where they can interact with a large language model when they're given a writing assignment, and we need to be teaching them from the beginning: What can you expect from these tools? How do you use them? In the same way we teach kids how to appropriately use calculators when they're doing math, where they have to learn arithmetic and they have to learn how to use calculators, today's elementary school kids are going to be learning how to write, and they're going to be learning how to use generative AI to help them write, and to learn what these models are good for and where they make mistakes. And it'll just be second nature. I don't see it as fundamentally a different thing than these other technologies we just mentioned.

CB: You do a lot of work with robotics, so I want to hear your thoughts on the current state of robots and AI. I'm thinking of the sort of cone-looking police robots, or the Boston Dynamics robotic dogs, or delivery drones, that kind of thing. Anything you want to say about the development of those, or where you see them going in the future?

PS: Yeah, the history of robotics is in some ways parallel to the history of artificial intelligence, and maybe a little farther behind, in the sense that it's a lot more difficult to make something work in hardware and interact in the real world than it is to make it work just in software. There's been a lot of progress in navigation. Tasks that only need navigation are things like vacuum cleaning: you can buy a Roomba that can vacuum your home, and it just needs to be able to move through an intelligent pattern. Autonomous cars, the technology is there; that's essentially a navigation task. But there are still many things that are very difficult for robots. Manipulation, being able to fold laundry or put away dishes, as I mentioned earlier, requires dexterous manipulation: being able to pick up objects, maybe even deformable objects like bags or towels that aren't rigid. It's a lot more difficult than it seems to us; it comes very naturally to people. Some of the tasks that are hardest for artificial intelligence and robotics are the things that are easiest for people, and vice versa. I wouldn't want to get into an arithmetic competition with a computer, but I'll beat it at folding laundry, you know, any day. One way of thinking about the development in robotics, too, is that all of the progress we've seen recently in large language models, the fascinating and really impressive capabilities they have, is completely disembodied. They're learning from text. It's pattern matching; it's really autocompletion. They learn, just from strings of words, to predict the next one. But they don't have any way of testing whether what they've learned matches experience. So when you tell me that this bottle is heavy, I can pick up the bottle, feel that there's force on my arm, and say, okay, that's what heavy means. A large language model can't do that. But a robot could. It could pick up the bottle, and when you say this is a heavy object and this is a light object, it could translate that to the forces it's feeling in its joints. We call that grounded language learning. And, you know, just like our babies: you put them in the playpen and they're little scientists, right? They're testing hypotheses all the time, not just by listening to strings of words, but by actually trying to do things, by putting a block on top of another one and seeing if it falls over. You need a body to be able to do that. And that's what robots will bring to intelligence: the ability to connect these strings of words, which are just abstract tokens, to meaning.

CB: You talked about autonomous vehicles, for instance, and that being a navigational task, but it seems like the task is bigger, too. [I] keep seeing stories about people reacting badly to them, putting cones on them, and also to other kinds of robots that are being integrated into social space: kicking over the police robots, or somehow blocking delivery drones, and stuff like that. What goes into designing for social space? And what should we be on the lookout for when people are reacting badly to these?

PS: That's one of the biggest ways in which the field has changed since I got involved. When I got involved in artificial intelligence back in the early 1990s, it was a field about programming, about sitting at the computer trying to get it to do things, and nobody said computer scientists need to also understand ethics, work with social scientists and talk to stakeholders. That wasn't part of our language. Now it is part of the language, and I think that's a fantastic development. There's been a real meeting of the minds between people who approach technology from the computer science and engineering perspective and people who approach it from the humanities and social science perspective. I'm a member of a couple of organizations that bring those people together. The 100 Year Study on Artificial Intelligence is one. And here on campus at UT, there's Good Systems, which is basically our ethical and responsible AI initiative. It includes people like myself from computer science and engineering, and it includes people from social science, humanities, public policy, communications and the School of Information. One of the subprojects, the one I'm most deeply involved in, we call Living and Working with Robots. It's about creating robots that can be in our environment for long periods of time, but also about doing it in a way that won't make people want to kick them over [right] or block them, a way that will make them accepted. And it's not just, what can we do? It's, what should we do? How can we do it in a way that people will accept? There was an article here in Austin, I think a year or so ago, maybe a little more, where the food delivery robots were using the bicycle lanes [right], and that got all the bicyclists understandably upset. Nobody thought ahead: What's the impact going to be? What's going to be the negative? Who's it going to help, and who's it going to harm? And how do we design for that in the first place? So more and more, we're hearing people advise both computer science students and computer science and robotics companies to talk to stakeholders from the beginning. Don't just build a technology and throw it over the fence; bring in the people who are going to be using it from the beginning. And I think that's becoming more and more ingrained in the culture of artificial intelligence.

CB: Since we're talking about AI here at UT. What's happening here at UT about AI that people should know about?

PS: Yeah, there's a whole bunch of great artificial intelligence research going on here. One of our big success stories: the National Science Foundation launched a batch of artificial intelligence institutes, and we were one of the first universities to be awarded one, the Institute for the Foundations of Machine Learning. Good Systems is a big way in which the University of Texas is leading in artificial intelligence, with this notion that it's not just a technical field, but that we need a real meeting of the minds between engineering and computer science on one hand and social science and humanities on the other. And then there's artificial intelligence being used in many, many ways: to enhance people's creativity in music composition, to change the way people think about building buildings in the School of Architecture, in designing smart cities, and in thinking about how artificial intelligence can boost our healthcare system. There are companies being spun out from faculty who are working in artificial intelligence. There are people at the borders of artificial intelligence and neuroscience who are going back to the roots of really trying to help us understand the nature of intelligence. There's more than I can list here.

CB: Do you know if AI is going to help UT win a national championship in football anytime soon?

PS: Well, what I would assure you is that if UT does win a national championship anytime soon, there will be some AI that plays a role in it. I think every sports team now is using artificial intelligence for all kinds of things: for nutritional plans, for play selection, for scouting. So yes, I assure you that if and when we win the next national championship, AI will play a role in it.

CB: I was hoping to stump him, but it didn't work. Peter, thank you so much for joining us, and we look forward to seeing what comes out of your labs and your work next. Thank you.

PS: My pleasure.

MA: That's our show. Next time on AI for the Rest of Us:

Greg Durrett: Are LLMs going to lead to widespread disinformation? I think my answer is a definite maybe. 

CB: AI for the Rest of Us is a production of the University of Texas at Austin's College of Natural Sciences and College of Liberal Arts. Our show is part of the university's Year of AI. To learn more, visit yearofai.utexas.edu.

MA: For links and more resources on today's topic, go to aifortherest.net.

CB: Big thanks today to our guest, Peter Stone. Our executive producers are Christine Sinatra and Dan Oppenheimer. Sound design and audio editing by Robert Scaramuccia. Our theme music is by Aiolos Rue. Our interviews are recorded by the trusty audio engineers of the Liberal Arts ITS recording studio.

MA: Thanks for listening.