In an all-campus email sent on Thursday, Aug. 17, Assistant Dean of Students/Student Conduct Kevin Butler announced that starting with the Fall 2023 semester, any unauthorized use of “words, ideas, and images generated by artificial intelligence” will be defined as plagiarism under Wesleyan University’s Honor Code.
The change was the result of a task force consisting of Wesleyan Student Assembly (WSA) co-chairs, faculty representatives, and members of the Office of Community Standards. The task force worked over the summer to craft a new provision addressing the growing use of artificial intelligence (AI) among University students. After extensive research, broader community discussions, and faculty and student input, the new clause, listed under “Regulation 2: Plagiarism,” now clarifies the rules regarding AI use in the University’s academic community.
The University’s Community Standards Board (CSB) began evaluating cases of suspected AI use in Fall 2022, most of which involved students attempting to pass off AI-generated text as their own original work. The issue is not a new one, however: for years, University faculty have struggled to discern whether students have used tools such as Grammarly, Hemingway, Elicit, and GitHub Copilot to complete their assignments. Some faculty members have argued that the emergence of tools like ChatGPT represents a natural progression of these more established technologies.
“I keep thinking not just about ChatGPT, but also other generative AI tools, some of which have actually been around for a few years,” Associate Director of Assessment Rachael Barlow said. “All of these tools are important, and they come both with opportunities and risks.”
OpenAI’s launch of ChatGPT on Nov. 30, 2022, became a new fountainhead of AI plagiarism, as hours spent poring over word choice and paragraph structure could be reduced to seconds. As students began to employ the tool, perplexing ethical questions emerged about how the originality of a work should be defined and how AI-generated ideas should be cited.
“The problem that we were running into is that students are using AI to complete their assignments, and not being forthright about where the information was coming from,” Butler said. “ChatGPT spits out an answer or verbiage out there, students will take that verbiage, put it into their assignment, and hand it in as if they hadn’t gotten it from someplace else.”
Wesleyan is not the only institution striving to find equilibrium between the legitimate use of AI and its potential to be used in academically dishonest ways. Universities across the country have begun to combat the use of AI as a writing tool, with many professors returning to paper exams and requiring students to verbally explain their thought processes.
However, some University faculty members are taking a more liberal approach.
“AI-enabled work is here to stay,” Caleb T. Winchester University Librarian Andrew White said. “The key is to know when and how to use it. Many of us on campus remember life before the ubiquity of smartphones and even the entire internet…. We have yet to really understand the impact of the mainstreaming of this technology.”
This sentiment has been echoed by several professors, many of whom have encouraged students to use AI (with proper citation) to complete classwork. Some professors have given students the option to use ChatGPT for ‘insight’ on papers, as long as they describe the extent to which they used the tool.
Professor of Government, East Asian Studies, and Environmental Studies Mary Alice Haddad described an assignment in which she asked students to write a research paper using AI and to reflect on how helpful (or unhelpful) the technology was at each stage of the paper.
“I’m calling it an ‘AI-enhanced research paper,’” Haddad said. “I’m hoping that [my students’] experience[s] will help me inform future classes about the ways that AI can be useful in writing better research papers. I fully intend to use it in my own work.”
With the incorporation of AI into classrooms, Wesleyan and other universities around the country have agreed to participate in a two-year research project led by the academic consulting nonprofit Ithaka S+R to analyze the impact of these rapidly advancing technologies. The University of Chicago, Princeton University, Duke University, and other highly ranked institutions are currently collecting data for the project’s ‘landscape review,’ examining AI’s impact on students today in order to assist students in the future.
“I am part of a group of faculty and staff at Wesleyan who are part of a multi-university collaboration, ‘Making AI Generative for Higher Education,’ organized by Ithaka S+R,” Barlow said. “The Wesleyan folks involved in this have been in frequent communication since January, when we organized a faculty session about generative AI.”
Barlow is also interested in what sort of inequities may develop as these technologies become more popular and advanced.
“As premium versions of these tools slip behind paywalls, some will have access and others won’t,” Barlow said. “What will this mean? Who will benefit? Who will suffer?”
So far, these policies have not been put to the test on a large scale. However, with midterms beginning in a few weeks, their uses and limitations will quickly become apparent.
Carolyn Neugarten can be reached at cneugarten@wesleyan.edu.
Miles Pinsof-Berlowitz can be reached at mpinsofberlo@wesleyan.edu.