The promise of AI has delivered nothing but spam and fakery – The Telegraph

Tools such as ChatGPT won't bring about economic transformation
Our water companies may be greedy and negligent, but they don’t hold a candle to Big Tech. In the digital world, we’re facing a different kind of sewage crisis: rather than swimming in physical excrement, we’re drowning in a tsunami of the digital stuff.
Spam and fakery are everywhere. Amazon last week removed a host of books from its online store. They weren’t real books, but artifacts mass-produced by generative artificial intelligence. For example, one e-book was called “Apricot bar code architecture”, Vice reported.
One very human author of children’s books estimated that 81 of the 100 best-selling e-books in the category of Teen and Young Adult Contemporary Romance were AI-generated junk. Many had AI-generated covers. And underneath, you could read AI-generated reviews praising them.
“Spamageddon” is easy to explain. Public open information systems rely on trust, and the AI bot operators have very few scruples. Generative AI has handed them a new tool which dramatically lowers the cost of mass producing very low-quality material.
This is more than a nuisance for book authors, because the fake e-books drain the pool of royalties that Kindle Unlimited makes available for genuine e-book consumption.
The Verge, a technology site, warned last week of the damage to “whole swathes of the web that most of us find useful — from product reviews to recipe blogs, hobbyist homepages, news outlets, and wikis”.
When I first said in February that digital pollution would be the main consequence of generative AI – and not misinformation, or mass unemployment – its enthusiasts were dismissive.
Back then, there was talk that an AI-powered “fourth industrial revolution” would transform economic growth. An investment bubble has inevitably followed, with Thomson Reuters paying a tidy $650m (£512m) for Casetext, a ten-year-old legal software company which only launched an AI bot for lawyers a few weeks ago.
Real examples of generative AI helping productivity are still hard to find. Generative AI is fundamentally a word completion tool that has no understanding of the material it is handling. It also has a tendency to make stuff up. As The Verge explained, it’s creating “a massive game of AI misinformation telephone, in which chatbots are unable to gauge reliable news sources, misread stories about themselves, and misreport on their own capabilities”.
Nevertheless, I can confirm I have finally found one real world example. Unfortunately, it’s being used for search engine optimisation (SEO) to generate low quality blogs cheaply, funnelling internet traffic to a niche commerce site. This is legal, but conceptually not so dissimilar to the e-book trickery. SEO only exists because of a flaw in how search engines work that’s very difficult to remedy.
I’ve spoken to software development managers who hope to employ fewer coders, with generative AI bots overseen by a human “code editor”, a new role, to catch the bugs. But these are still in the pilot phase. And as one told me, good coders aren’t exactly expensive today – just cast your net worldwide.
“AI was and is overhyped. We were promised self-driving cars and cures for cancer, and we ended up with splashy tools for image generation,” wrote Ben Recht, professor of machine learning at the University of California, Berkeley, shortly before the release of ChatGPT.
Yet even if the hallucinations are fixed, and some productivity gains are finally realised, then the pollution problems remain. We have gifted the world a tool for generating lots of what we don’t want, very cheaply.
So where is the Government in all this? It’s making everything worse.
Parents are already furious that schools may allow pupils to submit course assessments written by chatbots. Cambridge University researcher Sam Gilbert found that homework is the No 2 use of ChatGPT.
A divide is opening up between the private sector and Oxbridge, which are mostly strict about banning AI, and the state sector, where many teachers are more permissive.
The Education Secretary, Gillian Keegan, went further in May, suggesting that chatbots could even mark exam papers. Perhaps she wanted to wind up the teaching unions, but there’s no evidence generative AI can do marking reliably.
Only last week, in new guidance for civil servants, the Cabinet Office warned that “False information can appear at any point and all facts and assertions must be cross-checked, no matter how authoritatively they appear to be presented”.
OpenAI has announced it is opening a London office to lobby policy makers more effectively. It shouldn’t find that very difficult. The Government is so keen to be loved by Silicon Valley, it is handing an outsized role in regulating the sector to the companies responsible for the pollution themselves.
That might sound familiar. Ofwat and the Environment Agency played a role in letting water companies drain our aquifers and pollute our beaches and rivers – and now we’re inviting the digital pollution companies to govern themselves. With foxes guarding the hen house, what could possibly go wrong?
