Artificial intelligence is not a silver bullet
Artificial intelligence is increasingly being used to predict the future. Banks use it to predict whether customers will pay back a loan; hospitals use it to predict which patients are at greatest risk of disease; and auto insurers use it to set rates by predicting how likely a customer is to get into an accident.
"Algorithms have been claimed to be these silver bullets, which can solve a lot of societal problems," says Sayash Kapoor, a researcher and PhD candidate at Princeton University's Center for Information Technology Policy. "And so it might not even seem like it's possible that algorithms can go so horribly awry when they're deployed in the real world."
But they do.
Issues like data leakage and sampling bias can cause AI to give faulty predictions, sometimes with disastrous consequences.
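The episode doesn't walk through code, but a minimal Python sketch (a hypothetical illustration using scikit-learn, not an example from Kapoor's research) shows how one classic form of leakage creeps in: if "predictive" features are chosen using the whole dataset before it is split into training and test sets, information about the test labels leaks into training, and a model can look accurate even on pure noise.

```python
# Hypothetical sketch of data leakage via feature selection (assumed
# example, not from the episode). Requires numpy and scikit-learn.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10_000))  # pure noise features
y = rng.integers(0, 2, size=100)    # random labels: there is no real signal

# LEAKY: pick the 20 features most correlated with y using ALL rows,
# including the rows that will later become the test set.
X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X_leaky, y, random_state=0)
leaky_acc = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# CORRECT: split first, then fit the feature selector on training rows only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
selector = SelectKBest(f_classif, k=20).fit(X_tr, y_tr)
clean_acc = LogisticRegression().fit(selector.transform(X_tr), y_tr).score(
    selector.transform(X_te), y_te)

print(f"leaky accuracy: {leaky_acc:.2f}")  # inflated, often well above chance
print(f"clean accuracy: {clean_acc:.2f}")  # near the ~0.50 chance level
```

On random labels the honest pipeline scores near coin-flip accuracy, while the leaky one can appear genuinely predictive; the same mistake in a published study would make a useless model look like a discovery.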
Kapoor points to high-stakes examples: One algorithm falsely accused tens of thousands of Dutch parents of fraud; another purportedly predicted which hospital patients were at high risk of sepsis, but was prone to raising false alarms and missing cases.
After digging through tens of thousands of lines of machine learning code in journal articles, he has found that such examples abound in scientific research as well.
"We've seen this happen across fields in hundreds of papers," he says. "Often, machine learning is enough to publish a paper, but that paper does not often translate to better real world advances in scientific fields."
Kapoor is co-writing a blog and book project called AI Snake Oil.
Want to hear more of the latest research on AI? Email us at [email protected] — we might answer your question on a future episode!
Listen to Short Wave on Spotify, Apple Podcasts and Google Podcasts.
This episode was produced by Berly McCoy and edited by Rebecca Ramirez. Brit Hanson checked the facts. Maggie Luthar was the audio engineer.