Were universities wrong to reject Turnitin's AI detection system?
The writer of this piece chose to remain anonymous
When the Financial Times reported that Dan Rosensweig – chief executive of the publicly listed American education platform Chegg – had warned that parts of his edtech business were suffering from AI competition, it seemed an industry might be about to fall. In hindsight, Chegg was an especially vulnerable company: its homework cheat sheets were competing with one of the best-publicised uses of ChatGPT, which offered a broadly comparable service free of charge. Education platforms and institutions are now looking to change this narrative by suggesting that generative AI is a friend that will enhance their offering once integrated into their existing products and customer databases.
Universities are no different and are quite rightly maintaining an openness to AI. However, given that they reject the “inappropriate” use of the technology to write essays, it can seem surprising that many have opted out of the detection system developed by the very company they already use for plagiarism detection, albeit understandable given how close the decision came to final exams. It may be early days, but Turnitin’s own product tests suggested that their service is 98% accurate, as Palatinate has previously reported. The notion that Turnitin has a major conflict of interest in evaluating its own product is an entirely rational argument, but it can be overblown: the company’s entire business relies on trust and integrity, and it would hardly be in its interests beyond this academic year to overpromise on what it has created. Undeniably, AI is shifting the industry, and the level of confidence that Turnitin can have in its system’s capabilities will consequently fluctuate, but it is paramount that someone tackles the issue of detection next year, and Turnitin seems best placed to do so.
Concerns were also cited in the press about undermining trust between lecturers and students, handing student data to a private company, and stifling AI experimentation. In reality, the first point did not prevent universities from introducing a plagiarism detection service. Furthermore, the data protection threat to students could well be greater from the AI chatbots they are using to write their essays than from detection software. The experimentation concern is completely understandable: using AI will be an essential skill in the future, and it should be integrated into university education. At this stage, however, it risks creating an uneven playing field. Modules have not been designed around AI, and the advantages that greater use could confer may well be significant, especially while there is still limited clarity as to how much students can use it. What is required in the short term, then, is strong detection software and specific, tailored examples of acceptable and unacceptable use in next year’s subject handbooks, while we wait for lecturers to redesign their modules to accommodate this new technology.
Image credit: Focal Fot via Flickr