AI spam is already starting to ruin the internet
A little over a year after the public launch of ChatGPT, one prediction about how it could affect the internet is starting to come true: AI spam is flooding the web.
Just last week, there were three examples of how this played out.
First, 404 Media, a new tech blog, wrote that it had to modify its website because of artificial-intelligence spam.
Recently, the outlet noticed that AI-written versions of its scoops have been appearing on search-engine-optimized spam sites, sometimes ranking above the real 404 Media articles in Google search results. The scammers, of course, make money by running ads on the AI-generated pages.
From 404 Media's look into the article-theft cottage industry:
These AI-generated versions of articles hurt the news business, effectively stealing away clicks (and revenue) from the outlets that spend real time and money doing the reporting.
Second, Wired wrote that The Hairpin, a popular indie blog from the 2010s, had been taken over by an AI click farmer who left up some of the popular articles but replaced the names of the women who wrote them with men's names. Ick.
Third, at the most toxic end of the AI-spam spectrum, there are AI-generated obituaries, full of errors, that cause real pain to grieving families. In 2021, long before ChatGPT, Wired reported that "obituary pirates" were scraping and copying funeral-home websites. Now they're using AI in a new and lucrative tactic: turning the obits into YouTube videos and spammy websites that capture search traffic from people looking for information about the recently deceased.
The New York Times recently reported on the pain these AI-generated YouTube videos caused one grieving family. After a college student died when he accidentally fell onto New York subway tracks, YouTube videos and AI-generated articles quickly appeared.
The obits appeared after scammers noticed a spike in search interest around the young man's name and the word "subway." They quickly plugged in those key terms, told AI to write an obituary in a conversational tone, and slapped it up on a website, the Times reported. (Most of the details were wrong, but that didn't stop the site from appearing in Google searches.)
All three examples — 404 Media’s copycats, The Hairpin’s squatter, and the obituary pirates — differ in the details. But they have one thing in common: Bad actors, scammers, and spammers are trying to make money by using AI to pump out massive quantities of content to reach the top of Google search results.
Ultimately, this isn't a problem just for journalists getting their content stolen or grieving families rightfully upset by the digital grave robbery. It's a huge problem for Google, which ends up serving garbage results to users who, thanks in part to AI, increasingly have other attractive options for search.
Google told The New York Times that it was aware of the spammy obits and was working to address them (and had taken some down for violating its policies).
But bad actors are often a step ahead of the platforms — as with the AI-generated lewd images of Taylor Swift that proliferated on X last week.
AI is going to radically change the internet, for better or worse. It’s on Google and the companies making these AI tools to minimize the actual harm.