COUNTER AI: Schools here and across the nation seeking smart AI … – Sentinel Colorado

When philosophy professor Darren Hick came across another case of cheating in his classroom at Furman University last semester, he posted an update to his followers on social media: “Aaaaand, I’ve caught my second ChatGPT plagiarist.”
Friends and colleagues responded, some with wide-eyed emojis. Others expressed surprise.
“Only 2?! I’ve caught dozens,” said Timothy Main, a writing professor at Conestoga College in Canada. “We’re in full-on crisis mode.”
Practically overnight, ChatGPT and other artificial intelligence chatbots have become the go-to source for cheating in college.
Now, educators are rethinking how they’ll teach courses this fall from Writing 101 to computer science. Educators say they want to embrace the technology’s potential to teach and learn in new ways, but when it comes to assessing students, they see a need to “ChatGPT-proof” test questions and assignments.
For some instructors that means a return to paper exams, after years of digital-only tests. Some professors will be requiring students to show editing history and drafts to prove their thought process. Other instructors are less concerned. Some students have always found ways to cheat, they say, and this is just the latest option.
While Aurora schools do not have policies specifically against students using AI to cheat on assignments, education leaders are looking at ways AI can be used as a tool to help teachers and students.
At the Community College of Aurora, an AI chatbot named Professor Fox launched along with the college's new website in late April. It helps students apply for and enroll in classes. Blair Lee, executive director of communications, said the chatbot has engaged in more than 1,000 unique conversations with students since its launch.
Lee added that the community college has also used AI to help translate its website into Spanish, a version expected to launch at the end of October. A human will review the translations to ensure their accuracy.
Brandon Feres, dean of academic success in general education, said that while AI is a powerful tool, students must learn the nuances of using it in their coursework and their careers.
The rise of free AI tools such as ChatGPT gives students new opportunities to cheat on assignments, and Feres said instances of students submitting AI-generated work have increased.
To combat this, the community college has encouraged teachers to speak with their students about academic integrity. It also relies on AI detection software.
Jason Koenig, chief information officer at Cherry Creek School District, said that it’s hard to prevent students from using AI to cheat. The district can turn off access to ChatGPT on the school’s devices but students have access to it at home or on their cellphones.
The school district also uses detection software called GPTZero, a free tool for teachers. However, it's not foolproof.
If students plagiarize entire AI-generated essays, GPTZero will detect that a computer wrote them. But if students copy and paste only half of an essay and write the other half themselves, the tool will not flag the work as AI-written. Koenig added that there are also ways for students to use AI that make plagiarized essays undetectable by GPTZero.
Koenig added that they’ve seen students use AI to augment the lessons they’ve learned in class. 
“It’s a question of, what do we do to encourage students to be good creators and consumers of the internet,” Koenig said. 
In Aurora Public Schools, a newly established AI Steering Committee will make policy recommendations and provide professional development opportunities to prepare district staff to work with AI. 
According to a statement from spokesman Corey Christiansen, the district is “evaluating potential AI detection tools and working with teachers as they navigate this emerging technology with their students.”
An explosion of AI-generated chatbots including ChatGPT, which launched in November, has raised new questions for academics dedicated to making sure that students not only can get the right answer, but also understand how to do the work. Educators say there is agreement at least on some of the most pressing challenges.
— Are AI detectors reliable? Not yet, says Stephanie Laggini Fiore, associate vice provost at Temple University. This summer, Fiore was part of a team at Temple that tested the detector used by Turnitin, a popular plagiarism detection service, and found it to be “incredibly inaccurate.” It worked best at confirming human work, she said, but was spotty in identifying chatbot-generated text and least reliable with hybrid work.
— Will students get falsely accused of using artificial intelligence platforms to cheat? Absolutely. In one case last semester, a Texas A&M professor wrongly accused an entire class of using ChatGPT on final assignments. Most of the class was subsequently exonerated.
— So, how can educators be certain if a student has used an AI-powered chatbot dishonestly? It’s nearly impossible unless a student confesses, as both of Hick’s students did. Unlike old-school plagiarism, where text matches the source it is lifted from, AI-generated text is unique each time.
In some cases, the cheating is obvious, says Main, the writing professor, who has had students turn in assignments that were clearly cut-and-paste jobs. “I had answers come in that said, ‘I am just an AI language model, I don’t have an opinion on that,’” he said.
In his first-year required writing class last semester, Main logged 57 academic integrity issues, an explosion of academic dishonesty compared to about eight cases in each of the two prior semesters. AI cheating accounted for about half of them.
This fall, Main and colleagues are overhauling the school’s required freshman writing course. Writing assignments will be more personalized to encourage students to write about their own experiences, opinions and perspectives. All assignments and the course syllabi will have strict rules forbidding the use of artificial intelligence.
College administrators have been encouraging instructors to make the ground rules clear.
Many institutions are leaving the decision of whether to allow chatbots in the classroom to instructors, said Hironao Okahana, the head of the Education Futures Lab at the American Council on Education.
At Michigan State University, faculty are being given “a small library of statements” to choose from and modify as they see fit on syllabi, said Bill Hart-Davidson, associate dean in MSU’s College of Arts and Letters who is leading AI workshops for faculty to help shape new assignments and policy.
“Asking students questions like, ‘Tell me in three sentences what is the Krebs cycle in chemistry?’ That’s not going to work anymore, because ChatGPT will spit out a perfectly fine answer to that question,” said Hart-Davidson, who suggests asking questions differently. For example, give a description that has errors and ask students to point them out.
Evidence is piling up that chatbots have changed study habits and how students seek information.
Chegg Inc., an online company that offers homework help and has been cited in numerous cheating cases, said in May that its shares had tumbled nearly 50% in the first quarter of 2023 because of a spike in student use of ChatGPT. CEO Dan Rosensweig said students who normally pay for Chegg’s service were using the AI platform for free instead.
At Temple this spring, the use of research tools like library databases declined notably following the emergence of chatbots, said Joe Lucia, the university’s dean of libraries.
“It seemed like students were seeing this as a quick way of finding information that didn’t require the effort or time that it takes to go to a dedicated resource and work with it,” he said.
Shortcuts like that are a concern partly because chatbots are prone to making things up, a glitch known as “hallucination.” Developers say they are working to make their platforms more reliable but it’s unclear when or if that will happen. Educators also worry about what students lose by skipping steps.
“There is going to be a big shift back to paper-based tests,” said Bonnie MacKellar, a computer science professor at St. John’s University in New York City. The discipline already had a “massive plagiarism problem” with students borrowing computer code from friends or cribbing it from the internet, said MacKellar. She worries intro-level students taking AI shortcuts are cheating themselves out of skills needed for upper-level classes.
“I hear colleagues in humanities courses saying the same thing: It’s back to the blue books,” MacKellar said. In addition to requiring students in her intro courses to handwrite their code, the paper exams will count for a higher percentage of the grade this fall, she said.
Ronan Takizawa, a sophomore at Colorado College, has never heard of a blue book. As a computer science major, that feels to him like going backward, but he agrees it would force students to learn the material. “Most students aren’t disciplined enough to not use ChatGPT,” he said. Paper exams “would really force you to understand and learn the concepts.”
Takizawa said students are at times confused about when it’s OK to use AI and when it’s cheating. Using ChatGPT to help with certain homework like summarizing reading seems no different from going to YouTube or other sites that students have used for years, he said.
Other students say the arrival of ChatGPT has made them paranoid about being accused of cheating when they haven’t.
Arizona State University sophomore Nathan LeVang says he now double-checks all assignments by running them through an AI detector.
For one 2,000-word essay, the detector flagged certain paragraphs as “22% written by a human, with mostly AI voicing.”
“I was like, ‘That is definitely not true because I just sat here and wrote it word for word,’” LeVang said. But he rewrote those paragraphs anyway. “If it takes me 10 minutes after I write my essay to make sure everything checks out, that’s fine. It’s extra work, but I think that’s the reality we live in.”