AI's Dirty Little Secret: Stanford Researchers Expose Flaws in Text …


Researchers have found that GPT detectors, used to identify if text is AI-generated, often falsely label articles written by non-native English speakers as AI-created. This unreliability poses risks in academic and professional settings, including job applications and student assignments.

In a study recently published in the journal Patterns, researchers demonstrate that the computer algorithms commonly used to identify AI-generated text frequently mislabel articles written by non-native English speakers as being created by artificial intelligence. The researchers warn that the unreliable performance of these AI text-detection programs could adversely affect many people, including students and job applicants.

“Our current recommendation is that we should be extremely careful about and maybe try to avoid using these detectors as much as possible,” says senior author James Zou, of Stanford University. “It can have significant consequences if these detectors are used to review things like job applications, college entrance essays, or high school assignments.”

AI tools like OpenAI’s ChatGPT chatbot can compose essays, solve science and math problems, and produce computer code. Educators across the U.S. are increasingly concerned about the use of AI in students’ work, and many have started using GPT detectors to screen assignments. These detectors are platforms that claim to identify whether text was generated by AI, but their reliability and effectiveness remain untested.

Zou and his team put seven popular GPT detectors to the test. They ran 91 essays, written by non-native English speakers for the Test of English as a Foreign Language (TOEFL), a widely recognized English proficiency exam, through the detectors. The platforms incorrectly labeled more than half of the essays as AI-generated, with one detector flagging nearly 98% of them as written by AI. In comparison, the detectors correctly classified more than 90% of essays written by U.S. eighth-grade students as human-written.
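For intuition on where those figures come from, the headline numbers are simply misclassification rates over each pool of human-written essays. The sketch below is illustrative only, not the authors' code; `detector` is a hypothetical stand-in for any of the seven commercial tools, which are not reproduced here.

```python
# Illustrative sketch (not the study's code) of the metric behind the reported numbers.
# `detector` is a hypothetical function: it takes an essay and returns True if the
# essay is flagged as AI-generated.
from typing import Callable, List

def false_positive_rate(detector: Callable[[str], bool], human_essays: List[str]) -> float:
    """Fraction of genuinely human-written essays that the detector flags as AI."""
    flagged = sum(1 for essay in human_essays if detector(essay))
    return flagged / len(human_essays)

# In the study, this rate exceeded 50% on the 91 TOEFL essays for most detectors
# (nearly 98% for one of them), but stayed below 10% on the US eighth-grade essays.
```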

Zou explains that the algorithms of these detectors work by evaluating text perplexity, which is how surprising the word choice is in an essay. “If you use common English words, the detectors will give a low perplexity score, meaning my essay is likely to be flagged as AI-generated. If you use complex and fancier words, then it’s more likely to be classified as human written by the algorithms,” he says. This is because large language models like ChatGPT are trained to generate text with low perplexity to better simulate how an average human talks, Zou adds.

As a result, the simpler word choices typical of non-native English writers make their essays more vulnerable to being flagged as AI-generated.
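Zou's description matches how perplexity-based detectors are commonly built. The following is a minimal sketch, not the study's code or any of the tested products: it assumes the open-source GPT-2 model from Hugging Face Transformers, and the decision threshold is arbitrary.

```python
# Minimal sketch of a perplexity-based GPT detector (illustrative only).
# Assumes the Hugging Face `transformers` library and PyTorch are installed;
# the threshold value is arbitrary, not taken from any real detector.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        # Supplying labels makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return torch.exp(loss).item()

def looks_ai_generated(text: str, threshold: float = 30.0) -> bool:
    # Low perplexity (predictable, common word choices) is treated as AI-like,
    # which is exactly why plain but human-written prose can be misflagged.
    return perplexity(text) < threshold
```

An essay built from common words scores a low perplexity and falls under the threshold, so the same mechanism that catches machine-generated text also penalizes writers with simpler vocabularies.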

The team then put the human-written TOEFL essays into ChatGPT and prompted it to edit the text using more sophisticated language, including substituting simple words with complex vocabulary. The GPT detectors tagged these AI-edited essays as human-written.
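The rewriting step can be reproduced in spirit with a single chat-completion call. The sketch below is a hedged illustration, not the authors' pipeline: the model name and prompt wording are assumptions, and it requires the official OpenAI Python client with an API key set in the OPENAI_API_KEY environment variable.

```python
# Illustrative sketch of the vocabulary-enrichment step (not the study's exact prompt or code).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def enrich_vocabulary(essay: str) -> str:
    """Ask the chat model to swap simple wording for more sophisticated vocabulary."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice; the study simply used ChatGPT
        messages=[
            {
                "role": "user",
                "content": (
                    "Rewrite the following essay using more sophisticated language, "
                    "substituting simple words with more advanced vocabulary while "
                    "keeping the meaning unchanged:\n\n" + essay
                ),
            }
        ],
    )
    return response.choices[0].message.content
```

Feeding the edited text back through a perplexity check like the one sketched above raises its score, which is consistent with the study's finding that the detectors then labeled the AI-edited essays as human-written.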

“We should be very cautious about using any of these detectors in classroom settings, because there’s still a lot of biases, and they’re easy to fool with just the minimum amount of prompt design,” Zou says. Using GPT detectors could also have implications beyond the education sector. For example, search engines like Google devalue AI-generated content, which may inadvertently silence non-native English writers.

While AI tools can have positive impacts on student learning, GPT detectors should be further enhanced and evaluated before being put into use. Zou says that training these algorithms with more diverse types of writing could be one way to improve these detectors.

Reference: “GPT detectors are biased against non-native English writers” by Weixin Liang, Mert Yuksekgonul, Yining Mao, Eric Wu and James Zou, 10 July 2023, Patterns.
DOI: 10.1016/j.patter.2023.100779

The study was funded by the National Science Foundation, the Chan Zuckerberg Initiative, and other sources.


This article doesn’t reference the many limitations of the study, which were detailed in the study itself, and it links to an opinion piece rather than the actual study. Sample sizes were tiny (91 TOEFL essays from a Chinese forum and 88 US eighth-grade essays). The detectors in the study were based on GPT-2, not GPT-3.5 or later models.
“Firstly, although our datasets and analysis present novel perspectives as a pilot study, the sample sizes employed in this research are relatively small. …Secondly, most of the detectors assessed in this study utilize GPT-2 as their underlying backbone model…Lastly, our analysis primarily focuses on perplexity-based and supervised-learning-based methods that are popularly implemented, which might not be representative of all potential detection techniques.”
Title is pure clickbait.