c/o Soren Stokes

Wesleyan began collaborating with 19 other universities on a two-year research project exploring the capacities and consequences of generative AI in the classroom at the beginning of the fall 2023 semester. Although Caleb T. Winchester University Librarian Andrew White and Associate Director of Assessment Rachael Barlow are the primary facilitators of the project at Wesleyan, faculty and staff from a range of departments and offices are involved. The non-profit Ithaka S+R is leading the project and will release a report on the first year of research in 2024.

Generative artificial intelligence (AI) programs such as ChatGPT, Synthesia, and Duet AI have rapidly grown over the past year, prompting Wesleyan to update its Honor Code as well as some individual professors to reevaluate class policies regarding plagiarism. The generative AI project aims to provide guidance to universities establishing such AI policies through inter-institutional research.

This is not the first time Wesleyan has collaborated with Ithaka S+R, an organization that works closely with higher education institutions on inquiries into technology. The non-profit invited Wesleyan to participate in the research project earlier this year. Other participants include both public and private research universities, such as the University of Chicago, Stony Brook University, Yale University, Carnegie Mellon University, and Princeton University. Wesleyan, however, is the only liberal arts institution. The diversity of institution types was part of the appeal for some faculty who joined the project.

“We could learn something from [large research institutions], that we might not be able to learn from just talking to another school that was like us,” White said. 

The main goal of the project is to explore the effects of generative AI on learning and its use as a learning tool. The project will accelerate in the spring, when each institution will interview faculty about their responses to generative AI. This will be followed by a mass collection of data on AI usage and productivity. By the end of the project, researchers hope to have enough of an understanding of AI to be able to draft appropriate policy around it.

“By year two, we’re trying to use the information we collect, to get a sense of how we might sort of be thinking about policy on these campuses regarding AI and other things like that,” Barlow said. 

At Wesleyan, the project involves faculty and staff from the Office of Advancement, Information Technology Services, Student Academic Resources, Student Affairs, the Shapiro Center for Creative Writing and Criticism, Student Conduct, and the Gordon Career Center. The Olin Memorial Library facilitates the project, and nine faculty members serve as ambassadors. The ambassadors meet five times over the course of the semester, each time discussing a different topic on AI accompanied by a reading list.

“First we had the basics, like what’s a large language model?” Barlow said. “And then we had a tool day [presenting] 10 different [AI] tools—not just ChatGPT, but some of the other ones. Let’s play with them.”

The third meeting was on the relation of AI to class assignments, and the fourth covered institutional policy regarding AI. The fifth meeting will discuss ethics and bias concerns about generative AI. At these meetings, Barlow has noticed a range of attitudes toward AI from faculty.

“I think the faculty are all over the map, which is not surprising,” Barlow said. “I think every school would be like that. And it’s sometimes just the personality of that faculty member, and sometimes it’s the discipline that is causing that range.” 

Barlow explained that some professors may encourage students to experiment with AI while others enforce strict bans on the technology. Many professors lie somewhere in between. 

“I have my own thoughts about how I would use it in a classroom,” Barlow said. “But I think variation is not a bad thing.”

Both White and Barlow acknowledged that the rapid growth of generative AI and its constantly changing nature make research on the technology more complex. White emphasized that the dynamic nature of AI was one of the motivations for Wesleyan to join the project: shifting from a reactive stance to a proactive one.

“It’s a moving target because the technology is evolving seemingly on a daily basis,” White said. “What we thought we knew, at the beginning of the fall semester, is different from what we know now.”

Barlow named the writing assistance program Grammarly as one example of an artificial intelligence tool whose interface has changed relatively recently. She noted that even Grammarly has developed a generative feature comparable to ChatGPT.

“Grammarly is the best example,” Barlow said. “You might even be in some classes where faculty have said something like, ‘we don’t want you using GPT to produce a paper, but you can use Grammarly. That’s fine.’ But Grammarly itself has come up in the last couple of months with tools that look a lot like ChatGPT. Generative AI technologies are just becoming part of a tool we already use, so then you have to kind of rethink what you’re doing.”

White and Barlow also spoke to the stigma around AI both at Wesleyan and across other institutions. They acknowledged that sometimes it may feel as though AI should not be discussed, or that there is something shameful in wanting to use the technology.

“[AI] can be a very interesting tool that could sort of help scaffold students, help them develop writing skills and problem solving skills,” White said. “And I think we want to make those more obvious.” 

Barlow and White emphasized that it is important to be aware of AI and recognize that its presence is likely permanent. Noting that AI is already a part of daily life, from email to online search engines, White referenced “Star Trek” to describe the pointlessness of working against AI.

“Resistance seems futile,” White said. 

This increasing spread of AI raises some ethical concerns: Paywalls or paid models of AI programs could create disparities in access to AI as a toolkit, and detection tools could introduce biases. However, both Barlow and White ultimately concluded that the use of AI will become normalized. They compared the growth of AI, and the fear accompanying it, to the rise of other technological advancements that have become mainstream in the past few decades, giving the examples of the calculator, the cell phone, and the internet.

“The advent of Google was going to be the death of libraries and of librarians as a profession,” White said. “Everyone was going to be an expert researcher. Turns out not so much.”

By the end of the semester, Barlow hopes the project ambassadors will share with their colleagues what they have learned so far about AI and where it might be headed. Barlow and White also expressed their excitement to facilitate a conversation among faculty and students about AI and its prospects.

“I’m looking for opportunities to get students in a room talking about this as well,” White said.

White and Barlow acknowledged the concerns about the growth of AI and the ethical questions it raises, but both expressed that they are more enthusiastic about the prospects it could open up for academia. They stated that their goals for the generative AI research project extend beyond the two years of the Ithaka S+R partnership; they hope to continue exploring ways in which AI can supplement the higher education experience.

“I would love us to be as excited about the possibilities, as we are anxious about the unknown consequences [of AI],” White said.

Gabrielle McIntosh can be reached at gmcintosh@wesleyan.edu.