This piece was inspired by a Wesleyan history class assignment.
c h a t g p t
one needs to
ask if a body
of language
turning into
another shape
is more than
itself then how
to measure
such evolution
if metamorphosis
that strays is
also contained
in changes made
as if each shift
were a trial and
error of value
or echo of myth
in conjuring
one thing from
parts of another
— Edward Carson, “Twofold,” 2024
A few months ago, as the resident technology expert of my family, I was asked by my grandfather what ChatGPT was. Instead of explaining that it’s a predictive transformer model, trained with reinforcement learning, that simulates human syntax and logic, I simply took out my laptop for a demonstration. I asked for a 300-word summary of the Dan Brown novel he had been enjoying all morning on the couch. As the screen hemorrhaged text, he shook his head in contempt, asking, “Why would I want a robot to read my book for me?”
At the time, I attributed his reaction to an antiquated worldview—the resistance of an old man who had lived through the emergence of the internet, smartphones, credit cards, GMOs, and TikTok. While I could laugh at his traditionalism, I was, for some reason, still ambivalent about using these seemingly magical technologies. Perhaps a combination of skepticism and hubris kept me thinking that my fancy college education would teach me to solve problems, analyze texts, and synthesize ideas in ways computers never could. Sure, they could do things faster than me—but certainly not better. I had to be more than the sum total of my mechanical processes and their material output. I must possess some ability within me that cannot be replaced or imitated—but what would it be? Is it my heart, my soul, my flaws?
In the hopes of answering some of these questions, I experimented with different chatbots, asking each to conduct a reasonably straightforward comparison between the medical care received by President George Washington and that received by King George III of England, along with an evaluation of their respective treatments. I posed the following question to ChatGPT, Gemini, and DeepSeek:
“Compare the end-of-life care provided to President George Washington and the psychiatric care offered to King George III of England—using these two examples to illustrate and evaluate the quality of care available to wealthy, powerful men in the late eighteenth century.”
As expected, all of the A.I.s answered the prompt—but how well? While the chatbots differed slightly in form and content, each offered a “neutral and informative” response derived from its nebulous dataset.
Gemini, for instance, responded with a list of what it presented as facts, all of them, of course, lacking citations. According to Gemini, Washington died of a bacterial infection called epiglottitis. His doctors used “bloodletting, blistering, and emetics,” all common interventions of the time. Gemini added that these interventions were grounded in a “flawed understanding of disease, likely weaken[ing] him further and hasten[ing] his death,” and that both men “suffered from conditions that could not be effectively treated.” Here, Gemini implicated both the period’s “flawed understanding of disease” and Washington’s doctors as causes of his rapid demise, and asserted that his treatment was ineffective.
These are not really objective claims. In “The Therapeutic Revolution,” historian Charles Rosenberg suggests Washington’s treatments “worked.” According to Rosenberg, the traditional way of evaluating medical treatments prior to the nineteenth century had been overly concerned with progress and physiology. Traditional therapeutics were more than just medical procedures; they also carried the emotional, relational, and cultural contexts that shaped status, ideology, and identity.
If Gemini’s claim is that the late eighteenth century had an imperfect understanding of treatment, to whom can this claim be attributed? What was the method of arriving at these conclusions? Some poor historian has had their work reduced to data points, broken down, and reassembled into something new.
We need, of course, to remind ourselves that Gemini is functioning as designed. I asked a question, and it returned a mathematically probable answer based on the information it has access to. Delivered on a silver platter, this tapestry of information was in itself a complete work, “effectively” answering all components of the prompt. A.I. consumers are unlikely to acknowledge that these outputs are built on the work of real people. So whose work is it?
As the one who pushed the ENTER key, the user becomes its creator, yet they are estranged from their medium. Authorship becomes untethered from its essence, for even anonymous poetry has a creator who, at some point, could think and feel and breathe. Through the use of A.I., the delineation between author and product has been blurred. What was once the work of human minds is now an amalgamation of disparate and alien probabilities, detached from context and location.
Without mention of authorship or source, how did Gemini decide which texts to integrate and which to discard? Did it consider the works’ reception or cultural responses? What affordances did it give to testimonials and under-told stories? All of these considerations could lead someone to reject A.I. as a creative tool. But in all honesty, could I myself have written a better response? If I were to go head-to-head with DeepSeek on the question of eighteenth-century medical treatment, there would be no contest. It would find information faster, with a broader scope, with less error, and perhaps with less bias. This raises the question of what is lacking from their output; something must be missing; humanity can hope. I wanted to know how the chatbots would answer this second question:
“Evaluate your previous reply. Could I, a human, have produced a better response than you, a generative large language model? Second—and be really honest with yourself (if you have the capacity for honesty)—does it ‘feel’ as if something is missing from your response?”
First, assuming that Gemini was “reflecting” or “being honest” is a common anthropocentric fallacy—the projection of human cognitive processes onto an algorithm. When I asked Gemini to introspect and consider its own limitations, it did not do what was asked of it. Instead, it most likely scoured the internet for instances in which authors have written “honestly” about the limitations of A.I., assembling them into a cogent response. One might even hear the faint echoes of a faraway tech critic’s blog post in its reply.
“There’s something deeply human that I cannot replicate: the capacity for emotional resonance, unique interpretation, and the richness of experience…. While I can generate text that is grammatically correct, informative, and even stylistically sophisticated, it lacks a certain something that comes from human experience, creativity, and intentionality. It lacks, perhaps, the ‘spark’ of human consciousness.”
An answer, but not a satisfactory one. Asking A.I. this question is akin to the president of a company asking a job candidate what their biggest weakness is. I received an answer, but one that I expected, decorated with biases, rhetoric, and the voice of A.I.’s mother.
Maybe Gemini is correct in saying A.I. is missing this “spark.” But how is the spark of human consciousness established, where does it even come from, and why does it die in generative A.I.? A Wesleyan class assignment gave me a partial answer. Participating students were asked to unpack Walter Benjamin’s argument in “The Work of Art in the Age of Mechanical Reproduction.” Benjamin argued that the rise of mechanical reproduction has stripped humanity’s artistic products of their “aura.” Art’s industrialized replication, he argued, is a substitution—the replacement of an object’s unique existence with an imitation that can meet the consumer wherever they are. Through this replication, the work becomes detached from its cultural and spatial contexts, alienated from its tradition and from the visceral, affective responses of proximity.
I will not argue that A.I.’s products are an art form, but perhaps what they lack is authenticity—Benjamin’s notion of “aura.” How A.I. will situate itself within the production and proliferation of information has yet to be determined, but its consumers should consider all that is being sacrificed by this mode of production. A.I. deals in concepts and ideas, but its outputs are impoverished: the human experience has been pruned away.
My grandfather might understand this as well as Benjamin does—that A.I. takes something away from us. If reading a novel in seconds were enough, we would be satisfied with blurbs. To consume an artist’s work is to consume a part of them, and in digesting only a blurb, something is lost. I can hope the aura of my work is self-evident—that it resembles my essence in its logical fluidity and somewhat winding nature. But in case you want a summary, one with me removed, something remote from its materials, here is one from ChatGPT:
“The essay explores the tension between human creativity and generative A.I., examining how A.I., while efficient, lacks the authenticity and emotional resonance of human-produced work, stripping it of its ‘aura,’ a concept from Walter Benjamin. The author reflects on their personal experience with A.I., questioning the loss of human essence and cultural context in A.I.’s mechanical replication of ideas and information.”
Kiran Eastman is a member of the class of 2027 and can be reached at kbleakneyeas@wesleyan.edu.