‘Flipping the script’ and other tell-tale signs of AI-written copy


Journalists and PRs share their tips on spotting and blocking AI-written content.
By Rob Waugh
Journalists who are facing an increasing bombardment of AI-written commentary and press releases have shared their tips on sifting reality from robot-written content.
And PR executives have also shared how they feel “caught in the middle”, as one put it, with clients increasingly submitting AI-written copy in the expectation this will be sent out to journalists.
Tech journalist Chris Stokel-Walker highlighted the “tells” of AI-written copy in a LinkedIn post, pointing out phrases he believes are hallmarks of AI writing.
These include phrases such as “flipped the script”, and sentence structures such as: “They’re not just… they’re” and “framed by… reframed by”.
Sharing a likely AI-written comment, Stokel-Walker wrote: “I put out a callout for comment on a story and got the below back. So I flagged to the PR that the ChatGPT-generated comment is worse than useless to me.
“They were very apologetic and embarrassed, and said that’s what they’d been sent by their client, but they’d go back and get human-written thoughts.”
Prolific freelance journalist Harry Wallop said he believes the phrase “flip the script” is one that no human writer would ever use, and sees it as an instant cause for suspicion. 
Other freelances such as tech writer James Day say they have stopped using long em dashes in copy because they arouse suspicion that their work has been produced with the help of AI chatbots such as ChatGPT. 
Freelance journalist and content strategist James Snodgrass said he identified some copy supplied to him by a freelance as having been written by ChatGPT due to “equivocal” sentence structures which read “like a 2.2 undergraduate essay”.
When challenged, the freelance told Snodgrass that he had relied on ChatGPT to produce the piece.
Author and FT/Reuters commentator Dr Roger Miles said the hallmark he uses to identify AI-written copy is three or more consecutive paragraphs of exactly the same length, and a reluctance to use active verbs.
Dr Miles believes that AI writing tends to favour abstractions over concrete alternatives, such as “profitability” instead of “making money”.
Wikipedia has published a guide to help editors on the site spot copy enhanced or created by AI.
The site pointed to copy that emphasises the importance of its own subject as a telltale sign of AI, citing terms such as “stands as/serves as a testament, plays a vital/significant role, underscores its importance, continues to captivate, leaves a lasting impact, watershed moment, key turning point, deeply rooted, profound heritage, steadfast dedication…”.
The site said: “LLM writing often puffs up the importance of the subject matter with reminders that it represents or contributes to a broader topic. There seems to be only a small repertoire of ways that it writes these reminders, so if they are otherwise appropriate it would be best to reword them anyway.”
The site also highlighted editorialising as a key problem in LLM writing, with phrases such as “it’s important to note/remember/consider, it is worth, no discussion would be complete without, this article wouldn’t exist without…”
It notes: “LLMs have serious problems keeping a neutral tone, especially when writing about something that could be considered ‘cultural heritage’ – in which case they will constantly remind the reader that it is cultural heritage.”
The site also noted that LLMs are prone to vague attributions of opinion such as “Industry reports…observers have cited…some critics argue.”
People in the public relations industry told Press Gazette that many clients now turn to ChatGPT to write commentary, and that advising clients against doing so has become part of their job.
Press Gazette’s Reality Wars investigations have previously highlighted how apparently AI-generated content is increasingly making it into mainstream publications.
One PR person who wished to remain anonymous claimed that The Telegraph had published an entirely AI-written comment piece delivered by a client. 
Another, Olivia Lott, spoke about the challenge of clients supplying AI-written copy, which she then has to rewrite in human language before sending it to journalists.
Julie Thomson Dredge of Frame PR said: “When we ask clients for, say, a paragraph of comment on their area of expertise their default will be to go to ChatGPT. This has really happened just in the last year.”
Clients will also turn to ChatGPT for everything from “personal” biographies to details about their careers, said Dredge.
Dredge said: “When we ask them for a biography or similar on their background, say ‘tell me a bit about why you changed career’, they will do the same. I know why this is, and a big part of me understands – PR or writing is not part of their day job. They don’t feel inherently confident about their writing skills and so they ‘outsource’.”
Dredge believes that PRs need to shoulder the burden of explaining to clients how damaging it is to supply AI-written copy in these circumstances.
She said: “To get around this, we do two things. Always tell them not to, whenever we first engage a client, and explain that journalists really don’t like it, it’s time-wasting, makes them look the opposite of a trusted expert voice, and may even get us blacklisted. And we try to interview them on the phone, like a journalist would, and write their story up from that.”
Journalists and journalist response services are also turning to automated AI tools to detect AI writing in pitches, pieces and quotes from sources.
Browser-based tools such as Pangram and Quillbot analyse text for telltale signs of AI writing, and provide an assessment on whether text is likely created by AI.
Such tools are not 100% reliable, but Pangram has been rated as one of the more accurate online AI checkers, and is integrated into journalist response services such as Qwoted to detect AI-generated pitches and copy.
Press Gazette used Pangram to analyse likely AI-generated responses shared by someone who created an X account in the name of fake freelance journalist Margaux Blanchard.
Similar tools such as SightEngine are being used to detect AI-generated images, offering a percentage probability that an image was generated by tools such as ChatGPT or Midjourney.
