Even OpenAI has given up trying to detect ChatGPT plagiarism – Digital Trends

OpenAI, the creator of the wildly popular artificial intelligence (AI) chatbot ChatGPT, has shut down the tool it developed to detect content created by AI rather than humans. The tool, dubbed AI Classifier, has been shuttered just six months after it was launched due to its “low rate of accuracy,” OpenAI said.
Since ChatGPT and rival services have skyrocketed in popularity, there has been a concerted pushback from various groups concerned about the consequences of unchecked AI usage. For one thing, educators have been particularly troubled by the potential for students to use ChatGPT to write their essays and assignments, then pass them off as their own.
OpenAI’s AI Classifier was an attempt to allay the fears of these and other groups. The idea was that it could determine whether a piece of text was written by a human or an AI chatbot, giving people a tool to both assess students fairly and combat disinformation.
Yet even from the start, OpenAI did not seem to have much confidence in its tool. In a blog post announcing the tool, OpenAI declared that “Our classifier is not fully reliable,” noting that it correctly identified AI-written texts from a “challenge set” just 26% of the time.
The decision to drop the tool came with little fanfare, and OpenAI has not published a dedicated announcement on its website. Instead, the company has updated the blog post that introduced the AI Classifier, stating that “the AI classifier is no longer available due to its low rate of accuracy.”
The update continued: “We are working to incorporate feedback and are currently researching more effective provenance techniques for text, and have made a commitment to develop and deploy mechanisms that enable users to understand if audio or visual content is AI-generated.”
The AI Classifier is not the only tool that has been developed to detect AI-crafted content, as rivals like GPTZero exist and will continue to operate, despite OpenAI’s decision.
Past attempts to identify AI writing have backfired spectacularly. For instance, in May 2023, a professor mistakenly flunked their entire class after enlisting ChatGPT to detect plagiarism in their students’ papers. Needless to say, ChatGPT got it badly wrong, and so did the professor.
It’s cause for concern when even OpenAI admits it can’t reliably detect plagiarism created by its own chatbot. It comes at a time of increasing anxiety about the destructive potential of AI chatbots and calls for a temporary suspension of development in this field. If AI has as much of an impact as some people are predicting, the world is going to need stronger tools than OpenAI’s failed AI Classifier.