4 Steps to Help You Plan for ChatGPT in Your Classroom

The advent of ChatGPT and other generative AI tools is akin to the seismic shifts we experienced when the internet was born or when smartphones became part of our everyday lives. Those inventions initially bewildered and concerned us, but ended up changing the way we do lots of things — everything? — at work, home, and all spaces in between. So it will be with the new AI.
That analogy comes from a recent speech by Vinton G. Cerf, vice president and chief internet evangelist at Google. His comment struck a chord, and I’ve been thinking about it ever since. It offers much-needed perspective at a time when a lot of faculty members are once again bewildered and concerned about an invention that promises to radically alter the way we teach and work.
As an expert in technology-enabled teaching, I’ve spent the past few months absorbing and synthesizing higher ed’s conversations about generative AI. And yes, I predict we will adjust to ChatGPT just as we adjusted to the internet and smartphones: we will wrap our heads around these AI tools and learn to integrate them productively into our professional and personal lives.
The road to that same level of comfort with AI may well be rocky. Certainly in recent months we’ve seen plenty of alarming headlines — like this one about the professor who failed all of his students after concluding they had used ChatGPT, and this one and this one, about the degree to which students are already cheating with AI. But judging from the comments I see and hear, many faculty members are still in denial about, or unaware of, how these tools might affect their own classrooms.
What follows are four strategies to help you progress through this wrapping-our-heads-around-AI stage. I would encourage every faculty member to learn to teach with ChatGPT and other such tools as soon as this fall. But my advice here is not just for those of you willing to do so — it’s also for those of you who aren’t. Because even if you’d rather not bring AI into your courses, you still need to understand how these tools work and be able to speak knowledgeably about them with students.
Get familiar with generative AI tools. The only way to do that is to use them. Play around. See how they work. Yet I’ve had countless conversations with faculty members who say they haven’t tried these tools and seem intent on keeping their heads buried in the sand. I’ve also talked with many students who say their professors didn’t acknowledge the giant elephant-bot in the room this past spring, simply not mentioning ChatGPT or AI at all. I get it: I’ve had moments where I’ve felt stymied by these tools and tempted to just ignore them. But that attitude won’t serve you or your students very well.
If you don’t know where to start, read “What You Need to Know About ChatGPT.” Recent essays (here and here) on how to adjust your writing assignments to the realities of AI are useful, as are this Google doc on “classroom policies for AI generative tools” and this nuanced piece on how AI could be a case of the “rich getting richer” if we don’t help students learn to use it. Finally, this article on what to do while chatbots “mature,” written by educator and blogger Ray Schroeder, establishes a useful middle ground between panic and denial. As a first step, Schroeder encourages academics to develop fluency with these tools by testing them out informally.
It’s hard to make careful decisions about how and whether to use something if you have no firsthand knowledge of it. If that’s you, start working chatbots into your day-to-day life. I began by reminding myself, anytime I was about to Google something, to ask ChatGPT (or, more accurately, Google Bard, my current favorite) instead. As you play around, you can start thinking through how you might work with AI in your teaching.
Get ready to talk about it in class. If you’re like many faculty members, you have yet to define your course policies on AI, which is not surprising given how abruptly it burst onto the scene. Plenty of “skeptics and fans alike” are struggling to frame their own views on the appropriate, ethical, and responsible use of ChatGPT and other tools, and aren’t fully prepared to talk with students about the topic. But ready or not, you’re going to have to discuss AI with your students in 2023-24.
I’ve come to believe that faculty members have an ethical obligation to help students prepare for the future of work, a future in which AI will undoubtedly feature prominently. Just last week I spoke with a professor whose daughter has to fire three people on her team because AI can do their jobs better. We must help students prepare for an AI-informed workplace. Even if you’re not sure what to think about using these tools in your classes, tell students that. Be honest. Students can learn from your example of transparency, humility, and willingness to learn.
A big part of the conversation has to focus on cheating and plagiarism. A recent talk on academic integrity by Tricia Bertram Gallant, director of the academic-integrity office at the University of California at San Diego, helped me think about how to frame this discussion with students. Cheating isn’t new, and neither is “contract cheating” (paper mills and other schemes to pay someone to do your homework), though the latter seems to be growing by leaps and bounds. One way forward is to emphasize that cheaters are only cheating themselves. In her talk, Gallant described a track coach who would tell runners they were welcome to ride a scooter around the track, but doing so wouldn’t make them faster or stronger runners. Think of yourself as that coach, she said. Talk with students about the value of doing the work of learning themselves instead of outsourcing it to a machine.
Better yet, coach students on the effective use of AI tools related to classwork. I recently had a conversation with a psychology professor who tells his students: “Use these tools to help you understand challenging passages in assigned readings, or to build preliminary foundational knowledge to help you understand more difficult concepts. Don’t use AI to cheat — use it as a tool to help you learn.” That strikes me as a good tone to take for now.
Further, we have an opportunity to help students become upstanding professionals who demonstrate integrity in their work. In this era of remote and hybrid jobs, working in ways that establish trust with your supervisor and team is more important than ever. Why not use a little class time to discuss integrity as students prepare for their future jobs?
If you suspect students of AI-related cheating, don’t rush to hand out F’s. Among the most common questions that faculty members are asking about AI: What do I do if I suspect a student cheated with ChatGPT? What if they admit it? Or, what if they don’t admit what seems to be a clear case of AI-enabled dishonesty?
If you think someone has submitted work done by a chatbot, Gallant and other academic-integrity experts recommend you do two things first: (1) carefully analyze the work, and (2) talk with the student about their writing process. Granted, this approach can be challenging and time-consuming in courses with large enrollments (which is why time-pressed instructors are inclined to hand out a failing grade on the assignment and be done with it). But I would still recommend talking with the student(s) you suspect. Request a short Zoom or phone call and ask a few questions about the student’s work: “How did you come up with the idea for your paper?” or “Tell me more about this argument you’ve proposed here.” Gauge whether they fully understand that using generative AI tools to write their paper was unethical.
If a student admits wrongdoing, you have a range of options to consider.
If students do not admit to wrongdoing, and you’re pretty sure they cheated, well, that’s a little harder. This past spring an instructor told me about a graduate student whose writing on an assignment was noticeably unlike their previous prose, with strange errors. When confronted, the student denied using AI to generate the work. As it happens, it was a low-stakes assignment and the instructor decided there was no need to press further. But raising the issue at least opened up a teachable moment. Even if they “get away with it” this time, your intervention may keep them from cheating in your class again. And if the behavior persists, you may have to pursue a formal solution.
Ideally, in the months ahead, higher-ed institutions and government agencies will create policies and guidelines on how to deal with cases of chatbot cheating. For now, in these very early days of AI, you’ll have to follow your instincts. In my view, your best bet is to talk with the student(s) in question and decide how to proceed on a case-by-case basis. And maybe the difficulties of dealing with AI-related cheating will prompt some institutions to rethink such large classes.
If you use plagiarism-detection tools, do so with a hefty degree of caution. I don’t recommend policing your classroom to promote academic integrity, as those efforts can be traumatizing for students and can communicate that they don’t belong in your class, thereby widening equity gaps in higher ed.
However, I’m a realist. Plagiarism-detection tools are available (although not nearly as effective as they initially claimed to be) and plenty of academics will use them (some of these tools are now marketing themselves as a solution to AI-enabled cheating). Emily Isaacs, a professor of writing at Montclair State University and executive director of its faculty-advancement office, recently wrote in a 700-member email group on AI in Education: “These detection systems are being used and will continue to be used. We need to think about how they can be used as a tool and make the process open and clear for students.”
I take the same view of such detection tools as I do of online proctoring. We know that proctoring surveillance systems have “a history of racial bias” and that they disadvantage students who are neurodivergent or who have physical or learning disabilities. Likewise, some students have caregiving and work obligations that make it difficult to complete exams while being monitored via webcam. However, we also know that online proctoring may be unavoidable because of accreditation or other requirements.
My recommendation is that you think carefully about the use of AI-detection software, and not simply default to it. Consider other options before automatically concluding that policing students is the only way forward. But if you do decide to use detection software, analyze the results very carefully before accusing students of dishonesty. Better yet, make the results available to students so they can see what’s being flagged and revise accordingly before they submit their final work.
We are in uncharted territory, and it’s hard to know how to proceed with teaching in an AI world. But we are smart and resourceful, and we want the best for our students and their learning. We will find our way. Give it your time — and attention.
