ChatGPT dissidents, the students who refuse to use AI: ‘I couldn’t remember the last time I had written something by myself’


“I started using ChatGPT in my second year of college, during a very stressful time juggling internships, assignments, studies and extracurricular activities. To ease this burden, I began using it on small assignments. And, little by little, I realized [that it was remembering] details about my writing style and my previous texts. So, I quickly integrated it into everything… my work [became] as easy as clicking a button,” explains Mónica de los Ángeles Rivera Sosa, a 20-year-old Political Communications student at Emerson College in Boston, Massachusetts.
“I managed to pass the course… but I realized that I couldn’t remember the last time I had written an essay by myself, which was my favorite activity. This was the catalyst for me to stop using the app,” she affirms.
This student’s view isn’t the most common, but it’s not exceptional, either. A growing number of students are choosing to stop using artificial intelligence (AI) for their assignments. They feel that, with this technology, they’re becoming lazier and less creative, and losing the ability to think for themselves.
“I’ve stopped using artificial intelligence to do my university work because it doesn’t do me any good. Last year, I felt less creative. This year, I’m barely using it,” says Macarena Paz Guerrero, a third-year journalism student at Ramon Llull University in Barcelona. “At university, we should encourage experimentation, learning and critical thinking, instead of copying and pasting questions into a machine without even reading them,” she adds.
Microsoft recently published a study in which it interviewed 319 workers to investigate how the use of AI tools impacts critical thinking and how this technology affects their work. The results indicate that AI users produce a less diverse set of results for the same task: workers who trust the machine put less effort into contributing their own ideas. But who delegates work to the machine? And why?
The workers who are most critical of AI are those who are most demanding of themselves. In other words, the more confident a person is in the tasks they perform, the less they resort to the technology. “We’re talking about overqualified [individuals]… that is, students or workers who stand out for their high abilities and encounter limitations when using AI,” says Francisco Javier González Castaño, a professor at the University of Vigo, in Spain, who has participated in the development of AI chatbots. “But for most people, and for tasks that require repetition, artificial intelligence tools are very helpful,” he adds.
“When a university assignment can easily be solved by a machine, it’s not the students’ problem. Rather, the education system [is at fault],” says Violeta González, a 25-year-old pianist and graduate student in Pedagogy at the Royal Conservatory of Brussels. “However, if the assignment requires a critical mindset to be carried out, things change, because AI still isn’t capable of doing many of the things that humans can do. A ChatGPT response is a blank canvas to work on. It’s nothing more than collected data that we use to decide what to do. On its own, it doesn’t contribute anything new.”
Despite the criticism, generative AI is widely used in universities. According to a recent study by the CYD Foundation, 89% of Spanish undergraduate students use at least one of these tools: to resolve doubts (66%); to research, analyze data or gather information (48%); or even to write essays (45%). Approximately 44% of students use AI tools several times a week, while 35% use them daily.
Toni Lozano, a professor at the Autonomous University of Barcelona, admits that these tools pose a challenge for the education system. “[AI] can be helpful for students who want to improve the quality of their work and develop their own skills. [But it] can be detrimental for those who don’t want to put in the effort or lack the motivation. It’s just another tool – similar to a calculator – and it all depends on how you use it.”
“There are students who only attend university for the degree,” he notes, “and others who come to learn. But in any case, I don’t think limiting or suspending the use of AI is a good idea. We’re increasingly committed to in-person classes and have returned to written exams,” he adds.
AI for Critical Thinking
In an age of automation and uniformity of results, fostering critical thinking is a challenge for both universities and technology companies. The latter are trying to develop generative AI tools that, they say, motivate users to think for themselves, help them tackle more complex problems and prepare them for a job market that’s increasingly influenced by AI. This is evident from the aforementioned Microsoft study, as well as from recent announcements by the two leading AI companies: OpenAI and Anthropic.
OpenAI launched ChatGPT Edu – a version of its chatbot for students – in May of 2024. Anthropic, meanwhile, launched Claude for Education, a version of its chatbot focused on universities. Claude poses Socratic questions (“How would you approach this?” or “What evidence supports your conclusion?”) to guide students in problem-solving and help them develop critical thinking.
When Google first burst onto the scene, there were similar claims that it would lead to less creativity, less effort and less critical thinking. So what is the difference between using a search engine and using generative AI to complete a project? “There are many,” Macarena Paz asserts emphatically. “In search engines, you enter the question, consult different pages and structure your answers, adding and discarding what you consider to be appropriate,” she explains.
Paz explains that she now opts for other search engines, since Google has integrated AI through Overviews: automatically generated answers that appear at the top of the page in some search results. Instead, she uses Ecosia, which promotes itself as a sustainable alternative to Google, using the advertising revenue generated by searches to fund reforestation projects. All the students interviewed for this report expressed concern about the water consumption associated with each query made with an AI tool.
“One of the biggest limitations I find with ChatGPT is that it doesn’t know how to say ‘no.’ If it doesn’t know an answer, it makes one up. This can be very dangerous. When I realized this, I started to take the information it gave me with a grain of salt. If you don’t add this layer of critical thinking, your work becomes very limited,” Violeta González explains. “[ChatGPT] selects the information for you and you lose that decision-making ability. It’s faster, but also more limited,” she clarifies. “Critical thinking is like an exercise. If you stop doing it, your body forgets it and you lose the talent,” Mónica de los Ángeles Rivera Sosa warns.
Generative AI also has its limitations. “In the case of programming, we must differentiate between coding and programming. Generative AI is ideal for automating thousands of specific tasks executed with a single line of code… but it still shows significant limitations when it comes to solving complex and original problems,” Toni Lozano adds. In the humanities, it can write a report or an email appropriately and correctly, but it’s unable to write in a style of its own. A recognizable ChatGPT tone is already becoming established. Again, this is the problem of standardization.
“People have never been as educated as they are now. But do we all really need to be hyper-educated for the system to work?” Francisco Javier González asks. “Obviously not. AI is likely to reduce some skills that aren’t as necessary as we think. There was a period in ancient times – I’m not saying it was better – when only a few monks had critical thinking skills. In five years, it won’t be necessary to learn languages. [At that point], something will be lost,” he admits.
There are scientific studies that confirm the negative impact of generative AI on memory, creativity and critical thinking. Even before generative AI entered our lives, the American writer Nicholas Carr had warned of the epistemological impact of the internet: “Once, I was a scuba diver in a sea of words. Now, I zip along the surface like a guy on a Jet Ski,” he writes in the opening pages of The Shallows: What the Internet Is Doing to Our Brains (2011). “As our window onto the world – and onto ourselves – a popular medium molds what we see and how we see it… and eventually, if we use it enough, it changes who we are, as individuals and as a society.”
If generative AI makes us less original and lazier, removing our critical thinking skills, what impact will this have on our brains? Will we all have the same answers to different questions? Will everything be more uniform, less creative? Only time will tell. But as we wait for the future to speak, this is what ChatGPT says: “The advance of generative artificial intelligence poses a disturbing paradox: the more it facilitates our thinking, the less we exercise it.”
