Opinion | AI Garbage Is Already Polluting the Internet – The New York Times

Guest Essay

Mr. Hoel is a neuroscientist and novelist and the author of The Intrinsic Perspective newsletter.
Increasingly, mounds of synthetic A.I.-generated outputs drift across our feeds and our searches. The stakes go far beyond what’s on our screens. The entire culture is becoming affected by A.I.’s runoff, an insidious creep into our most important institutions.
Consider science. Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate, especially within the field of A.I. itself.
[Chart: Adjectives associated with A.I.-generated text have increased in peer reviews of scientific papers about A.I. The chart shows the frequency per one million words, from 2020 to 2024, of the adjectives “innovative,” “notable,” “commendable,” “intricate,” “versatile” and “meticulous.”]
Note: Peer reviews are for the International Conference on Learning Representations (ICLR), one of the largest A.I. conferences.
Source: “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews”
By Taylor Maggiacomo
A study published this month examined scientists’ peer reviews — researchers’ official pronouncements on others’ work that form the bedrock of scientific progress — across a number of high-profile and prestigious scientific conferences studying A.I. At one such conference, those peer reviews used the word “meticulous” more than 34 times as often as reviews did the previous year. Use of “commendable” was around 10 times as frequent, and “intricate,” 11 times. Other major conferences showed similar patterns.
Such phrasings are, of course, some of the favorite buzzwords of modern large language models like ChatGPT. In other words, significant numbers of researchers at A.I. conferences were caught handing their peer reviews of others’ work over to A.I., or at minimum writing them with heavy A.I. assistance. And the closer to the deadline the submitted reviews were received, the more A.I. usage was found in them.
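The study’s core measurement is simple to reproduce in spirit: count how often marker adjectives appear per million words in a corpus of reviews, then compare years. The sketch below is an illustration only; the marker words are taken from this essay, not from the study’s actual word list, and the function names and corpora are hypothetical.

```python
from collections import Counter
import re

# Marker adjectives mentioned in this essay (hypothetical list; the
# study's actual lexicon and corpora are not reproduced here).
MARKERS = {"meticulous", "commendable", "intricate", "innovative",
           "notable", "versatile"}

def per_million(text: str) -> dict:
    """Frequency of each marker adjective per one million words."""
    words = re.findall(r"[a-z]+", text.lower())
    total = len(words)
    counts = Counter(w for w in words if w in MARKERS)
    return {w: counts[w] / total * 1_000_000 for w in MARKERS}

def yearly_ratio(earlier_reviews: str, later_reviews: str, word: str) -> float:
    """How many times more often `word` appears in the later corpus."""
    before = per_million(earlier_reviews)[word]
    after = per_million(later_reviews)[word]
    return after / before if before else float("inf")
```

A 34-fold jump in a word like “meticulous,” as reported for one conference, would show up here as `yearly_ratio(reviews_2022, reviews_2023, "meticulous")` returning roughly 34.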
If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear where the ethical line lies between scams and ordinary use. Some A.I.-generated scams are easy to identify, like the medical journal paper featuring a cartoon rat sporting enormous genitalia. Many others are more insidious, like the mislabeled and hallucinated regulatory pathway described in that same paper — a paper that was peer reviewed as well (perhaps, one might speculate, by another A.I.?).