At an event organized by the student group Code_Wes and the Department of Mathematics and Computer Science, Google Senior Research Scientist Vinodkumar Prabhakaran gave a lecture on Thursday, Feb. 29 about artificial intelligence (AI), titled “Culture as a Lens on Responsible AI.” Members of Code_Wes, faculty and students of the department, and others attended the lecture in the Woodhead Lounge at Exley Science Center.
Code_Wes is an organization for University students interested in coding, and it facilitates group projects with real-world applications in app, web, and game development. Many of these projects are designed to improve quality of life for Wesleyan students. Additionally, the organization hosts yearly hackathons and awards prizes to the best Wesleyan coders.
The club also serves as a resource for students after their time at Wesleyan. Members can connect with alumni in the computer science and software engineering fields, as well as current faculty advisors, and they can also practice for coding interviews.
Prabhakaran is a computational social scientist studying AI. He specializes in discussing, researching, and advocating for natural language processing (NLP) techniques.
“With such fast-growing technologies, it is important to pay attention to how they are being used and how they impact society,” Prabhakaran said. “We want to deal with this technology more boldly, and not just technically from a computer science perspective.”
In the evolving landscape of chatbots, recent breakthroughs such as the Language Model for Dialogue Applications (LaMDA) and the Pathways Language Model (PaLM) demonstrate huge strides by Google in generative AI. Prabhakaran argued that it is also crucial to discuss the responsibilities toward society associated with developing and using these technologies.
Prabhakaran is one of many researchers who make up the Technology, AI, Society and Culture (TASC) Team at Google. The team is dedicated to studying the societal impacts of AI. They engage in interdisciplinary research with qualitative, quantitative, and mixed methods to understand those impacts and promote Responsible AI (RAI).
“As Vinod’s talk demonstrates, the development of AI algorithms is not a purely technological task but has fundamental ethical implications as well,” Assistant Professor of Computer Science Sebastian Zimmeck wrote in an email to The Argus. “Thus, it is important that those are discussed in training the next generation of computer scientists.”
Prabhakaran has researched bias in algorithms used to detect abusive and derogatory content, referred to as toxicity in the field. He studies ways to assign toxicity levels to language samples that include references to identities, derogatory terms, slurs, and slang words. He has shown that models over- and underestimate the toxicity of certain words, depending on the cultural and geographical context of the training data.
“There is a default Western lens used in responsible AI by researchers situated in Western institutions,” Prabhakaran said. “There are differences in cultural norms and moral values that should be looked into.”
Students were enthusiastic about the research, and Code_Wes members hope to employ these ideas in their future projects.
“As an international student and computer science major, I found this talk very interesting because I could understand the cultural biases going on with different AI models,” Code_Wes President Anan Afrida ’26 said. “I will use this in the future.”
Afrida, Social Media Manager Chi Phan ’25, Financial Manager Daniel Goldelman, Event Coordinator Nishant Aggarwal ’26, and Club Officer Gunn Jungpaibul ’24 all attended and coordinated the lecture. They found the discussion of ethical AI highly relevant to the University community.
“I am glad that Dr. Vinod shared his research,” Afrida said. “It is focused on making AI more culturally sensitive. As president of [Code_Wes], this talk impacted the whole Code_Wes community at Wesleyan, making computer scientists more conscious about the ethics of AI.”
After listening to the talk, members of the audience were left considering the importance of RAI for the near future.
“What we build has an impact on society,” Zimmeck wrote. “It is our responsibility to make sure, as much as we can, that AI is deployed safely and responsibly.”
Carolyn Neugarten can be reached at cneugarten@wesleyan.edu.