AI at Amherst: Professors Grapple with Impact on Teaching, Trust & Learning
Amherst College professors are grappling with the rapid emergence of artificial intelligence (AI) tools, leading to a patchwork of classroom policies and a debate over the very nature of learning in the liberal arts. The shift began noticeably last semester, as students and faculty alike encountered new stipulations in syllabi regarding AI use, ranging from outright bans to cautious experimentation.
The issue surfaced as more than just a testing concern, prompting a broader inquiry into how faculty are navigating the challenges and opportunities presented by AI. Interviews with five Amherst professors reveal a lack of unified institutional guidance, leaving many to forge their own paths in addressing the technology’s impact on pedagogy and academic integrity.
Amherst recently granted students and staff access to several AI services, and the Provost’s Annual Retreat on Teaching and Learning in 2024 was entirely dedicated to generative AI and its implications for liberal arts education. However, this institutional engagement has been met with student calls for more careful consideration of AI’s potential corrosive effects on core academic values.
While a single, standardized approach to AI remains elusive, faculty interviewed generally agreed that the technology cannot be ignored. Some are actively integrating AI into their courses, while others are implementing measures to discourage or prevent its use.
Senior Lecturer in English Benigno Sanchez-Eppler, a long-time computer enthusiast, views AI as a natural extension of his interest in technology. “My major permission for using this stuff [is] actually defining it like a really fancy toy, and not taking myself very seriously with what I do with it,” he said. Henry S. Poler ’59 Presidential Teaching Professor of Music Klara Moricz, while acknowledging Sanchez-Eppler’s perspective, expressed a preference for maintaining a technology-free learning environment. “For some people, especially those who grew up with [technology], their instinct is to integrate new technology into teaching,” Moricz said. “But for me, my instinct is to exclude technology from teaching and create a different space. And it’s good that we are different.”
The absence of a unified college policy reflects the disciplinary differences in how AI can be appropriately addressed, according to Assistant Professor of Anthropology Victoria Nguyen. “Right now there is no single script for detection or enforcement, and that is probably appropriate given disciplinary differences,” Nguyen shared in an email correspondence.
Several professors have altered their assessment methods in response to AI. Nguyen began experimenting with AI in her classes, asking students to evaluate and grade AI-generated work, and to discuss citation practices. “My goal was to provide basic AI literacy and to highlight where human judgment and critical reasoning remain really indispensable in our work. And most students very quickly recognized the limits of these systems,” she said.
Professor of Sexuality, Women’s and Gender Studies Krupa Shandilya, however, has returned to blue book exams for the first time in 15 years, concluding that assessing students with AI readily available was untenable. “While we find benefits to blue book exams, such as having to build arguments in a systematic and coherent fashion on the spot, you cannot sit with the idea. You have to have it there and then,” Shandilya explained.
Moricz similarly avoids direct engagement with AI in her teaching, believing that a foundational understanding of the technology is necessary but should not overshadow traditional pedagogical methods. “In this way, I suppose learning about AI and what to use it for and what not to use it for should be a part of education,” she said. “But for the humanities especially, I think it’s very important to keep away from it and engage in, as I say, age-old methods of communication.”
Some professors are leveraging AI to support their own instruction. Sanchez-Eppler uses AI to generate code that automatically creates organizational folders for student work, a task that previously consumed days. He also utilizes AI to create summaries and timelines of complex concepts, providing students with supplementary materials while emphasizing their incompleteness. “Any use that I do in class, it’s prefaced with this business of ‘we’re playing,’” Sanchez-Eppler said.
James J. Grosfeld Professor of Law, Jurisprudence and Social Thought, Lawrence Douglas, has largely maintained his existing assessment methods, including take-home essays, believing that student performance remains consistent regardless of the setting.
The introduction of AI has raised concerns about trust between professors and students. Nguyen noted that faculty are also grappling with ethical considerations related to AI’s potential impact on grading and feedback. “If students are outsourcing writing and faculty are outsourcing evaluation, we have to ask what remains of the pedagogical relationship,” she said.
Moricz recounted instances of encountering unfamiliar writing styles in student work, leading to anxieties about potential AI-generated content. “That’s [what] I hate about it: It made me paranoid. For me, the idea that I didn’t trust my own students was an awful feeling. That’s when I decided I was unwilling to deal with AI,” she said. She now values even grammatical errors as evidence of authentic student work.
Despite these concerns, professors expressed continued faith in their students’ integrity. “I believe my students are truly engaged in an effort to develop their skill set independent of this [technology],” Douglas remarked. Shandilya echoed this sentiment: “I absolutely trust our students to be honest and to have integrity when they’re submitting their work.”
The ease of access to AI tools, however, remains a central concern. Moricz sympathized with students’ temptation to use AI, acknowledging that “When the tool is there, people will use it. I think you have to be incredibly principled to say that you will never touch it.” Shandilya worried about a “slippery slope” of AI use, blurring the line between AI-generated content and original student thought.
Douglas suggested that effective pedagogy, regardless of the tools used, should prioritize the development of independent skills. “The technology may ultimately be a helpful device, but it really has to come from the student,” he said.
Sanchez-Eppler cautioned students against relying on AI to the detriment of their reading skills. “Don’t use it to write and don’t let it poison your capacity to read,” he warned.
Moricz expressed concern that AI use might stem from a lack of student confidence. “It is critical for you to believe that you can do it. If you are using ChatGPT, then you don’t believe that you can do it. That’s no longer an education.”
The future of AI at Amherst remains uncertain. Sanchez-Eppler advocates for experimentation, while Shandilya cautions against the insidious nature of AI’s integration into academic work. Moricz emphasizes the importance of preserving spaces where AI is not present, allowing students to engage in original thought and communication.
“One shouldn’t be rigid about it. AI is so much [a] part of life that I think there are spaces in your college environment where you, in fact, must learn about it and learn how to use it responsibly. It shouldn’t be banned, no, but it should be used responsibly,” Moricz said.
Nguyen believes that addressing the challenges posed by AI requires a reevaluation of Amherst’s core educational values. “AI has generated this kind of reckoning where we have to start distinguishing between the performance of knowledge and the actual development of understanding,” she said. “Are we credentialing, training, cultivating judgment, forming citizens, fostering intellectual independence, or something else?”
Association of Amherst Students Senator Daniel Fleer ’26 is working within the student government to clarify and shape AI policy at the college, advocating for a more collective and thoughtful approach to its long-term integration into the curriculum. He noted a growing interest among faculty in shifting towards the college’s existing disciplinary process for addressing AI-related academic misconduct.
The emergence of AI challenges the fundamental purpose of higher education. As Shandilya remarked, “you are getting a Bachelor of Arts from Amherst College, not ChatGPT.”
