Some Queen's professors encourage AI use. Others will give you an "F"

For the first few weeks, ChatGPT was an amusing new toy. Then someone figured out it could do more than write bad poetry. One year in, it's being dramatically heralded as the end of white-collar jobs.
Most university professors didn’t know what to make of Large Language Models (LLMs) like ChatGPT at first. But now, some are AI wizards.
Like Commerce Assistant Professor Evan Jo, who used ChatGPT to generate some responses for this article.
“The [AI] landscape is evolving, and so too will our approaches and policies in education,” ChatGPT wrote, prompted by the financial technology professor.
“Couldn’t have said it better myself,” Jo commented.
At Queen’s, it’s up to professors to decide whether they will allow students to use generative AI in class. Jo argued AI literacy is a skill valued by employers—if it’s allowed in the workforce, it should be allowed in university. Using generative AI to assist with assignments is fine Jo said, if students use citations.
READ MORE: Artificial Intelligence may have a place in academia
Students need to learn to use ChatGPT or other LLMs as productivity tools to stay competitive, according to Jo. The ability to use GitHub Copilot, which gives suggestions for code like autocomplete does for text, is a crucial marketable skill.
“This competitive edge far outweighs the challenges LLMs might pose for academic assessments, at least in my context,” he added. “We just need better/more appropriate assessments, adapted to this new reality of LLMs.”
For assessments where students are required to conduct nuanced analysis or draw on recent context, ChatGPT’s answers don’t measure up, according to Computer Science Assistant Professor Yuan Tian.
Tian encourages students to use generative AI for her Advanced Data Analytics course. Using LLMs themselves, students can better understand how people use generative AI—especially for data analysis.
“We expect students to use ChatGPT to support coding and report writing because we want to assess if students know the workflow for advanced data analysis and can proceed with the plan,” Tian said.
“Writing quality is not our main evaluation target; the methodology and logic behind the presented content is what matters.”
But for classes where writing quality or ability to memorize information does matter, rules around generative AI are tighter.
In geography and planning lecturer Michael Trendota’s classes, students will probably receive a failing grade if they use generative AI on assignments.
Students won’t necessarily fail because they’ll get caught, but because of the low quality of writing produced by LLMs like ChatGPT, according to Trendota. ChatGPT is also “famous” for making up citations, he said.
Results from ChatGPT aren't recent and don't go into enough depth to properly answer questions, in Trendota's experience. ChatGPT's training data only extends to 2021, while Google's Bard can pull real-time information from the internet.
“If you use ChatGPT, you’re probably going to get an F because the quality is going to be low. That’s if you’re not discovered, you’re still going to get an F,” he said.
Trendota’s real estate courses in Queen’s department of geography and the Smith School of Business aim to give students the skills to work through a first-round interview in the real estate industry.
If students were allowed to use generative AI in his courses, they likely wouldn't absorb the terminology well enough to use it in interviews, he argued.
“It’s doing them a disservice by not preparing them properly for that interview,” Trendota said.
Through discussions with colleagues and students, Trendota foresees allowing the use of generative AI in his courses within the next two years. Since the policy is at the discretion of professors, the default is "no" until professors and course instructors have a chance to properly integrate it into their courses.
“It’s so ubiquitous in the world, that we will one day all be using it and allowing it,” he said. “The question is just getting us from now until then and finding out a way to get there.”
A previous version of this article misspelled Michael Trendota’s name in one instance. Incorrect information was published in the Jan. 26 issue of The Queen’s Journal.
The Journal regrets the error.