Here Comes the Second Year of AI College
Universities still aren’t sure whether to embrace the technology or ban it.
When ChatGPT entered the world last fall, the faculty at SUNY Buffalo freaked out. Kelly Ahuna, the university’s director of academic integrity, was inundated with panicked emails. “It has me thinking about retiring,” one English professor confessed. He had typed a prompt into ChatGPT and watched in horror as an essay unfurled on-screen. There were errors, sure: incorrect citations, weird transitions. But he would have given it a B-minus. He anticipated an onslaught of undetectable AI plagiarism. Ahuna became something of a spiritual mentor, guiding faculty through their existential angst about artificial intelligence.
The first year of AI college was marked by mayhem and mistrust. Educational institutions, accustomed to moving very slowly, for the most part failed to issue clear guidance. In this vacuum, professors grew suspicious of students who turned in particularly grammatical essays. Plagiarism detectors flagged legitimate work as AI-generated. Over the summer, some universities and colleges have regrouped; they’re trying to embrace AI at the institutional level, incorporating it into their curricula and helping instructors adapt. But the norm is still to let individual educators fend for themselves—and some of those individuals seem to believe that they can keep teaching as if generative AI didn’t exist.
Modernizing higher education is a formidable task. I graduated from college this past spring. Before the pandemic, my professors insisted that we print assignments out and hand them in—forget submitting online. Although ChatGPT was available for nearly my entire senior year, the university administration sent out only one announcement about it, encouraging faculty to understand the implications of the technology. My friends, meanwhile, talked incessantly about it. I don’t know anyone who wrote an entire paper with ChatGPT—or who would admit to it, at least—but people used it in other ways. Some asked it to generate practice-exam questions for them to solve. Others turned to it for help with their philosophy reading, asking the chatbot to explain, say, Parfit’s definition of a self-effacing theory. One of my friends asked ChatGPT how to get over her ex-boyfriend. (The advice was generic but excellent.) But only one of my professors ever mentioned it: Halfway through the spring semester, my computer-science professor announced that we couldn’t use ChatGPT to write our code. Then he said he would rely on the honor system.
Heading into the second year of AI college, some institutions are trying to develop a less technophobic approach. According to Kathe Pelletier, a director at the tech-focused education nonprofit Educause, the most enthusiastic AI adopters tend to be public universities or community colleges that serve large, diverse student bodies and see education as a means of social mobility. Arizona State University is piloting an introductory writing course in which an AI bot offers feedback on students’ work. The class is taught to remote learners at a low cost, and the AI could allow for something like peer feedback for students who take classes alone, on their own schedule. Administrators at the University of Tennessee at Knoxville have organized a professor-led task force to suggest different ways for faculty to add generative AI to the classroom. The University of Florida launched a $70 million AI initiative in 2020 with funding from the chip-manufacturing giant Nvidia. Sid Dobrin, an English professor who is part of the initiative, says that it will sponsor a competition this year in which students can win prize money for the most creative use of generative text or image AI. These schools are preparing to feed employers’ hunger for AI-savvy graduates. “I always say: You are not going to lose your job to AI,” Dobrin told me. “You are going to lose your job to somebody who understands how to use AI.”
Other universities, however, still have no overarching institutional posture toward AI. Administrators are wary of announcing policies that could age poorly. Professors are left to figure out how to leverage the technology on their own. In its defense, this stance preserves academic autonomy and encourages experimentation. For example, the teacher of Harvard’s introductory computer-science course deployed a teaching-assistant chatbot this summer, built on OpenAI’s models. But the hands-off institutional approach also forces instructors, many of whom have yet to master the “Mute” button on Zoom, to be at the vanguard of a technology that isn’t fully understood even by the people who created it. In a recent informal poll by Educause, 40 percent of respondents said that they weren’t aware of anyone at their institution taking responsibility for decisions around how generative AI should be used. “A president or provost is thinking, Should I jump on this only to have it become the most unpopular thing in the world?” says Bryan Alexander, who teaches at Georgetown University’s school of learning, design, and technology.
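For the curious, here is a minimal sketch of what a course-assistant bot along those lines might look like, written with OpenAI’s Python client. The system prompt, the model name, and the tutoring rules are illustrative assumptions, not details of Harvard’s actual implementation.

from openai import OpenAI  # assumes the openai>=1.0 Python package

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# A hypothetical system prompt; a real course bot would tune this carefully.
SYSTEM_PROMPT = (
    "You are a teaching assistant for an introductory computer-science "
    "course. Guide students with hints and leading questions; never "
    "write complete solutions to graded assignments."
)

def answer_student(question: str) -> str:
    """Send one student question to the model and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_student("Why does my for loop never terminate?"))

The interesting design question is all in the system prompt: the bot is told to nudge rather than solve, which is how such tools try to help students learn instead of helping them cheat.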
Some academics have been eager to bring the alien technology into their classrooms. Ted Underwood, who teaches English and information science at the University of Illinois at Urbana-Champaign, says that every student should learn the basics of AI ethics. He likens the topic to the tenets of democracy, which even people who won’t pursue political science need to understand. Other professors see AI as a way to enliven instruction. The new introductory writing course at the University of Utah asks students to compare sonnets written by William Shakespeare, Pablo Neruda, and ChatGPT; professors say that using an AI bot is the easiest way to generate usefully bad poems.
Another faction within academia sees generative AI as an enemy. In the age of large language models, a student’s writing assignment can no longer reliably confirm whether they’ve understood a topic or read a text. Weekly reading responses and discussion posts, once staples of higher education, seem useless. Some instructors are adopting countermeasures. One SUNY Buffalo faculty member told Ahuna that he would keep his weekly online quizzes but employ technology that tracks students’ eye movements to detect potential cheating. Others seem to hope that prohibition alone can preserve the familiar pre-ChatGPT world. Most instructors at Bryn Mawr College have declared that any use of AI tools counts as plagiarism, says Carlee Warfield, the head of the school’s honor board. Darren Hick, a philosophy professor at Furman University, told me he refuses to abandon take-home essays. In his view, in-person exams aren’t real philosophy: They leave no time for rumination and serious engagement with a thinker’s work. “It’s gimmicky,” Hick said. “My pedagogy is good, my students learn, and I don’t like the idea of having to upend what’s been a tradition in philosophy for millennia because somebody has a new technology that students can use to cheat.”
Many of the professors and administrators I spoke with likened generative AI to earlier waves of technological change; perhaps an analogy offered perspective and solace when confronting something so mystifying. They compared it to Wikipedia (riddled with inaccuracies), to calculators (students still learn long division), and even to microwave dinners (ChatGPT’s writing is a frozen meat loaf; a student essay is a marbled steak).
But the most common comparison was to the advent of the internet. Charles Isbell, the dean of computing at Georgia Tech, points out that the web did not immediately create the kind of nightmarish scenario that people had predicted. Supersonic email exchanges didn’t scramble our brains, just as the “Undo” button hasn’t eroded our sense of consequence. For now, Isbell isn’t concerned about students cheating with AI: If they submit a ChatGPT-written essay, the errors will give them away, and if they try to avoid detection by meticulously fact-checking the chatbot’s writing, they’ll learn the material. But just like the internet, which spawned smartphones and social-media sites that few people could have foreseen, AI will undercut the most basic patterns in higher education. “It’s perfectly reasonable to hold in your head both thoughts,” Isbell told me. “It’s not going to be the big, destructive force that we think it’s going to be anytime soon. Also, higher education will be completely unrecognizable in 15 years because of this technology. We just don’t really know how.”