AI for the Rest of Us

Rise of the LLMs

Episode Summary

Greg Durrett tackles your burning questions about large language models: How did they get so good so fast? Will they eventually be smarter than us? Will they take our jobs, flood us with misinformation or perpetuate harmful biases?

Episode Notes

Today we’re diving into the world of large language models, or LLMs, like ChatGPT, Google Gemini and Claude. When they burst onto the scene a couple of years ago, it felt like the future was suddenly here. Now people use them for everything from writing wedding toasts and composing songs to deciding what to have for dinner. Will these chatbots eventually get better at these tasks than humans? Will they take our jobs? Will they lead to a flood of disinformation? And will they perpetuate the same biases that we humans have?

Joining us to grapple with those questions is Greg Durrett, an associate professor of computer science at UT Austin. He’s worked for many years in the field of natural language processing, or NLP, which aims to give computers the ability to understand human language. His current research focuses on improving the way LLMs work and extending them to do more useful things, like automated fact-checking and deductive reasoning.

Dig Deeper

A jargon-free explanation of how AI large language models work, Ars Technica

Video: But what is a GPT? Visual intro to transformers, 3Blue1Brown (a.k.a. Grant Sanderson)

ChatGPT Is a Blurry JPEG of the Web, The New Yorker (Ted Chiang says it’s useful to think of LLMs as compressed versions of the web, rather than intelligent and creative beings)

A Conversation With Bing’s Chatbot Left Me Deeply Unsettled, New York Times (Kevin Roose describes interacting with an LLM that “tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.”)

The Full Story of Large Language Models and RLHF (how LLMs came to be and how they work)

AI’s challenge of understanding the world, Science (Computer scientist Melanie Mitchell explores how much LLMs truly understand the world and how hard it is for us to comprehend their inner workings)

Google’s A.I. Search Errors Cause a Furor Online, New York Times (The company’s latest LLM-powered search feature has erroneously told users to eat glue and rocks, provoking a backlash)

How generative AI is boosting the spread of disinformation and propaganda, MIT Technology Review

Algorithms are pushing AI-generated falsehoods at an alarming rate. How do we stop this?, The Conversation

Episode Credits

Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT’s Digital Writing & Research Lab.

Executive producers are Christine Sinatra and Dan Oppenheimer. 

Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio.

Cover image for this episode generated with Midjourney, a generative AI tool.