It can write your essays for you, invest in the stock market, and generate music. Artificial intelligence (AI) can do anything…but can it, really?
Over the summer, I listened to a Planet Money series that was produced solely by AI. The AI was tasked with conducting interviews, researching an engaging topic, and piecing together an accurate and informative script about the history of telephone operator automation. Barely a minute in, the AI-generated voice introduced Dr. Sarah Roberts and credited her with saying, “The introduction of the automatic switchboard was a game-changer for the telephone industry.” The glaring problem is that Dr. Sarah Roberts has never said anything about the telephone industry. The AI made the quote up entirely.
ChatGPT and similar AI applications are language prediction tools: they don’t actually know anything. These programs simply predict which words are likely to follow others, based on patterns in huge quantities of text. Now, this isn’t to say I don’t appreciate AI; it is incredibly helpful for increasing efficiency and handling tedious tasks. However, I’m doubtful of its benefit in certain fields, especially those where biased data shapes high-stakes decisions.
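To make “prediction, not knowledge” concrete, here is a toy sketch in Python. It is purely my own illustration, not how ChatGPT actually works (real systems use neural networks trained on vastly more text), but it shows the basic idea of guessing the next word from counts alone:

```python
from collections import Counter, defaultdict

# A toy next-word predictor: tally which word follows which in a tiny
# corpus, then "predict" by picking the most common successor. The
# program never checks whether any sentence it produces is true.
corpus = (
    "the operator connected the call "
    "the operator answered the call "
    "the switchboard connected the line"
).split()

# Map each word to a tally of the words observed immediately after it.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    options = successors[word]
    return options.most_common(1)[0][0] if options else "<unknown>"

print(predict_next("the"))       # 'operator' (the most common successor)
print(predict_next("operator"))  # 'connected'
```

The program has no idea what an operator or a switchboard is; it only knows which words tend to sit next to each other. That is why fluent output and truthful output can come apart.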
When much of the historical data that AI draws its conclusions from is flooded with racism and inaccuracy, AI will reproduce those same biases. The consequences are especially dangerous in healthcare: one widely used AI-based system required Black patients to be considerably sicker than white patients before recommending them for the same care. The thought is unsettling: we give AI unfiltered access to trusted, invaluable information without considering its dangerous faults. And humans can’t simply step in to remedy each case of discrimination, because the biases originate in human-generated data in the first place. If anything, these biases could be amplified by disparities already present in healthcare. In the United States, for example, Black patients with lung cancer are 15% less likely than their white counterparts to be diagnosed early.
In many cases, AI not only projects these biases but amplifies them to damaging lengths. The investigative site ProPublica, for instance, found that a criminal justice algorithm used in Florida mislabeled Black defendants as “high risk” at almost twice the rate it mislabeled white defendants. Algorithmic tools like that one, along with AI facial recognition software, magnify racial disparities in policing. Especially as trust in law enforcement deteriorates, these programs pose a glaring threat to both communities and individuals.
I’ve always viewed technology as infallible and reliable. Science fiction movies like Star Wars and Guardians of the Galaxy depict AI as an all-knowing database, coming to the rescue of humans in seemingly impossible situations. A quick Google search seems to have the answer to any question I could possibly have. Artificial intelligence is supposed to be just that: everything humans aren’t. We don’t expect it to make mistakes. The unfortunate reality, however, is that AI was created by and for humans. First, the data used to train AI can be riddled with historical inaccuracies and social biases. Additionally, data sampling can underrepresent marginalized groups, resulting in higher error rates for those individuals. The generalizations, stereotypes, and biases humans are prone to extend to these supposedly foolproof, dependable AI programs. We continue to place AI on a pedestal and, as such, regard the information it produces as reliable. When AI programs present biased output, that output can serve as false justification for systemic injustices.
Biased AI can also do direct harm in lending. AI technology has seeped into loan approval programs, and an investigation by The Markup found that lenders using such algorithms were 80% more likely to reject Black applicants than comparable white applicants. As housing disparities worsen racial inequality in the United States, it is crucial for banks and credit unions to adopt a comprehensive approach to loan approvals. The consequences of failing to do so are far-reaching, exacerbating existing economic inequalities. Financial institutions could consider bias training for their staff and more representative training data for their AI programs to correct potential biases and allow for equitable loan approvals.
I don’t think AI is inherently bad. Its potential to improve healthcare and law enforcement and to reduce economic inequality can’t be ignored. But to use the technology effectively, we have to address its biases and inaccuracies. After all, human-made programs will always be vulnerable to human mistakes. If anything, these biases serve as a reminder that AI is a tool, not an infallible oracle. Before embracing its recommendations, I urge you to think for yourself. Ultimately, the responsibility falls on us to ensure that AI reflects the world we want to live in, rather than one surrendered to bias.
Lyah Muktavaram is a member of the class of 2026 and can be reached at lmuktavaram@wesleyan.edu.