AI for the Rest of Us

Can We Trust AI to Make Big Decisions?

Episode Summary

Are you okay with a machine deciding if you’ll get hired for your dream job, what kind of tests your doctor orders, whether you look like the person who just robbed a jewelry store, or how much interest you’ll pay for a car loan? Welcome to the world of AI decision making.

Episode Notes

Today on AI for the Rest of Us, we’re talking about the ways that AI is being used, or might be used, to help make high-stakes decisions about all aspects of our lives: who gets hired for a job, what interest rates people get on loans, and whether someone who’s been convicted of a crime gets parole. Are AI systems better than humans at making these decisions? Why is it so tempting to give up our decision-making authority to machines? And what can we do to make sure these systems are fair and unbiased?

Craig Watkins is a professor in the Moody College of Communication at UT Austin who’s been wrestling with these questions. Watkins is executive director of the IC2 Institute and a principal investigator with Good Systems, a university-funded initiative that supports multi-disciplinary explorations of the technical, social, and ethical implications of artificial intelligence.

Dig Deeper

Video: Artificial Intelligence and the Future of Racial Justice, S. Craig Watkins, TEDxMIT (Dec. 2021)

Designing AI to Advance Racial Equity (Craig Watkins’ Good Systems project)

Dr. S. Craig Watkins on Why AI’s Potential to Combat or Scale Systemic Injustice Still Comes Down to Humans, Unlocking Us with Brené Brown (Apr. 3, 2024)

Opinion: Are These States About to Make a Big Mistake on AI?, Politico (Apr. 2024)

Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study, The Lancet (This study found that GPT-4’s accuracy at diagnosing medical conditions varied depending on a person’s gender and race/ethnicity. Also, it was less likely to recommend advanced imaging for Black patients than Caucasian patients.) (Jan. 2024)

Wrongfully Accused by an Algorithm, New York Times (the story of a Black man arrested for a crime he did not commit, on the basis of faulty facial recognition software) (June 2020)

Companies are on the hook if their hiring algorithms are biased, Quartz (2018)

Episode Credits

Our co-hosts are Marc Airhart, science writer and podcaster in the College of Natural Sciences, and Casey Boyle, associate professor of rhetoric and director of UT’s Digital Writing & Research Lab.

Executive producers are Christine Sinatra and Dan Oppenheimer. 

Sound design and audio editing by Robert Scaramuccia. Theme music is by Aiolos Rue. Interviews are recorded at the Liberal Arts ITS recording studio.