Counterpoint: Why AI Should Not Be Used in the Classroom – Harvard Independent

Executive Board

According to CNN, 30% of college students have used ChatGPT in the past year to supplement their learning. Artificial intelligence is becoming an increasingly prominent facet of everyday life, and its growing use in the classroom speaks volumes about how quickly we must adapt to it. While AI's expansion into the classroom could be seen as further advancing education, it is imperative to closely evaluate the implications of accepting AI as an educational tool. Although AI might seem on the surface like the world's most efficient personal tutor, tools like ChatGPT impair learning when used incorrectly, both because of the ease with which they hand students answers and because of their confident inaccuracies. Given the many obstacles it places in the way of effective learning, students should not be able to use AI in the classroom.
While recognizing AI's potential as a positive force in education, Martin West, Dean of Harvard's Graduate School of Education, states in an article published by the GSE, "some uses of generative AI can undermine [students'] learning. Particularly, when the tools are used to do the cognitive work of thinking for students rather than to support their learning." A Harvard education should help students think for themselves and develop the life skills necessary to make decisions on their own in the real world, and AI can greatly hinder the growth of those skills.
When students use ChatGPT in the classroom, it often simply spits out answers rather than helping them reach conclusions on their own. In an essay class, I can ask the chatbot to write a paper on specific concepts of Daoism, and it will write the paper for me, clearly and effectively expressing key concepts to create an arguably sophisticated essay. Students can then pass off ChatGPT's work as their own, or at least draw heavy inspiration from its response. They can use ChatGPT to enhance or even supplant their own analytical thought, creating a facade for themselves and their teachers whereby they aren't actually learning the material—the AI has learned it for them.
According to a June 2023 article from Nature, "slowly and gradually, AI limits and replaces the human role in decision-making. Human mental capabilities like intuitive analysis, critical thinking, and creative problem-solving are getting out of decision-making." As ChatGPT's capabilities expand (GPT-4, the subscription-based latest model, can now analyze photos), these issues will only be magnified. If students are given a tool that does the work for them and spits out the answer, they simply will not learn as well as they would if they were required to problem-solve independently.
Harvard, like every other academic institution on the planet, has been forced to wrestle with these issues over the past year, and the same pitfalls recognized by Dean West have cropped up in Harvard classrooms. One such example is CS50, which recently incorporated the CS50 duck, an AI-powered assistant, into its curriculum.
Yet even if we assume that AI can help students learn material rather than outright giving them answers, readily available online AI tools are far from accurate enough for in-school use, particularly in STEM fields, where the software has been known to struggle with basic math problems. For example, when I asked ChatGPT to write a paragraph about AI inaccuracy, ChatGPT 3.5 produced fake but believable statistics such as "AI systems currently make errors in up to 20% of their responses when handling complex or nuanced questions" and "68% of students who relied solely on AI-generated content for their assignments encountered errors," with believable fabricated sources to match. I then confirmed these statistics were inaccurate by comparing them to reputable figures supported by real-world data.
It is alarming how readily ChatGPT can produce believable but incorrect and misleading information. One might expect software updates to fix this issue, but newer versions, such as GPT-4, have actually been shown to spread false information and narratives at a higher rate than their predecessors.
How can we trust the education of future generations to a tool that so easily misleads its users? Until ChatGPT rids itself of this problem, students and teachers cannot expect it to output accurate information, putting clear roadblocks in the way of its ability to educate students effectively. Given both the AI's questionable accuracy and the ease with which it hands out answers, Harvard should be wary of introducing AI into the classroom.
Jai Glazer ’27 writes Forum for the Independent.
