Opinion | 10 practical and responsible ways I have improved my … – Poynter

Generative artificial intelligence accelerated its creep into journalism this summer — and I use the term “creep” deliberately — when Google shopped a mysterious robot-writing tool to executives at several major news companies. The product, they claimed, would help journalists … at least until competing AI applications replace them. Or until AI floods the internet with so much text that all information melts into a self-cannibalizing, undifferentiated muck where notions like “credibility” and “reality” become quaint and irrelevant.
For journalists, this is the great paradox of generative AI: At its best, the technology could automate menial tasks, identify human blind spots and otherwise make us better at our jobs. At its worst, AI could further muddy the information ecosystem, jeopardize reader trust — and tempt publishers eager to cut labor costs.
Such big-picture decisions won’t fall to me, however, or to most of the other as-yet human grunts toiling away in the content mines. Instead, I’m interested — both as an occasional tech journalist and as an adviser to early-career reporters — in finding practical, responsible ways to improve my work and workflow with AI.
To be clear, I’m under no illusions that ChatGPT and its ilk represent some new class of flawless wonder tools. Among other serious issues, the technology companies that developed these applications often exploit low-wage workers to annotate their training data and train on copyrighted work without authors’ permission. We know that generative AI can “hallucinate,” defame and plagiarize, with embarrassing — even dangerous — consequences. And the information you share with ChatGPT is not private by default: OpenAI uses your conversation history to improve its models unless you opt out.
But similar types of risks and pitfalls don’t prevent us from using other digital tools, and the ethical norms we’ve developed for those tools can also apply here. For instance, I do not and will never use AI to produce any analysis, research or writing for publication, much as I would never source a fact from Wikipedia. I also wouldn’t share sensitive or secret information with ChatGPT — but I wouldn’t put that type of information in a Google Doc, either.
For me, generative AI — much like Wikipedia and Google Docs — has become a useful launch point for a number of secondary reporting and writing tasks. Since ChatGPT launched to the public last November, I have experimented extensively with the tool and spoken with a range of programmers, prompt engineers and writers about how they use ChatGPT in their work.
Mainstream journalists are increasingly viewing the tool with “cautious curiosity,” said Charlie Beckett, who leads the Journalism AI project at the London School of Economics. An April survey of more than 300 U.S. media workers found that roughly three in four have experimented with ChatGPT or a similar application.
“One thing you shouldn’t do is ignore this technology,” Beckett said. “In a very mundane way, it’s going to be very useful for anybody who’s organizing information.”
Here are 10 ways I use ChatGPT in my writing and journalism.
Everyone has different methods for initial research and prereporting, but my latest is this: Read everything I reasonably can about the subject; save relevant info as Raindrop annotations; export all those annotations to an unwieldy, many-paged Google Doc; and then outline and rewrite through it. ChatGPT’s base model can reorganize background research into timelines or thematic buckets, making it easier to reference as you draft. I recently asked the tool to reformat a bunch of background research as a timeline for a story about a Bitcoin mining company’s acquisition of a local power plant.
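To make the idea concrete, the same reorganization can be sketched locally. This is a minimal illustration, not the tool itself; the note format (date, source, summary) and the sample entries are hypothetical stand-ins for what a Raindrop export might contain.

```python
from datetime import date

# Hypothetical note format: (date, source, summary) tuples, the kind of
# fields a bookmarking-tool export might carry. Entries are invented.
notes = [
    (date(2023, 3, 14), "Local paper", "Company announces plant purchase"),
    (date(2022, 11, 2), "SEC filing", "Mining firm raises capital"),
    (date(2023, 6, 1), "Town minutes", "Residents question noise levels"),
]

def to_timeline(notes):
    """Sort research notes chronologically and format them as a timeline."""
    lines = []
    for d, source, summary in sorted(notes, key=lambda n: n[0]):
        lines.append(f"{d.isoformat()}: {summary} ({source})")
    return "\n".join(lines)

print(to_timeline(notes))
```

The point of asking ChatGPT instead is that it handles notes with inconsistent formats and implicit dates, which a script like this cannot.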
Be careful: ChatGPT sometimes invents, or “hallucinates,” information, even when instructed not to. It’s one of the reasons Karen Hao — formerly the AI reporter for The Wall Street Journal and MIT Technology Review — said she doesn’t use ChatGPT in her work.
“ChatGPT hallucinates very often even when you feed it source material to summarize,” she wrote by email. “I’m working on a story about this right now — and it’s really shocking how common this is!”
ChatGPT remembers the information that you share within each discrete conversation, so after you share background on a story, the tool becomes useful for other, secondary reporting tasks. I have asked it both for ideas of people to interview and for potential interview questions. In some cases, it has thought to ask things I wouldn’t have, especially when I ask it to tailor questions to the concerns of a particular community or the style of a specific outlet. One example came from a freelance story on wildfire smoke and worker protections.
I now routinely feed finished drafts to ChatGPT to help identify biased language, unclear arguments and other holes in my reporting, inspired by this June newsletter from Trusting News assistant director Lynn Walsh. I ask questions like: What is this story missing? What sections might supporters or opponents of this topic consider biased or unfair? Who else should I interview about this topic? I recently shared an example on Twitter involving this story about evictions.
“Some people may look at these responses and think the information shared is pretty basic,” Walsh wrote, “… but I do think it could help journalists identify potentially polarizing words or phrases without taking too much more time or interrupting current workflows.”
I have found that to be true. Of course, ChatGPT also has significant biases of its own, which journalists should understand before they use the tool.
ChatGPT-produced writing is soulless and dreadful, on the whole — a failure the writer and UCLA writing instructor Laura Hartenberger dissected at great length in a July essay for Noema, a magazine that explores “the transformations sweeping our world.”
“For creative, expressive, or exploratory writing tasks, using ChatGPT is like supervising a bumbling assistant who needs painfully detailed, step-by-step instructions that take more effort to explain than to simply do the work yourself,” she wrote.
But for smaller, more concrete writing tasks, even a bumbling assistant is useful, Hartenberger told me by email. She uses ChatGPT like a high-powered, personalized thesaurus, asking for its help coming up with words when she “blanks” on them. ChatGPT also improves on existing thesauruses, dictionaries and other language tools by finding words that meet multiple conditions. For instance, what’s a word for dark beige that starts with a hard “c” sound? Or what’s an idiomatic expression that conveys the same meaning as “back on track” and relates in some way to pools, beaches, swimming or lifeguards? (I picked the first one.)
Hartenberger told me ChatGPT is also useful for “getting up to speed” on specialized or technical topics, kind of like a Wikipedia page that talks back to you. (Much like a Wikipedia page, this is also not a primary source — you have to verify everything afterward.) I’ve asked ChatGPT to explain how a turbine generates electricity, for instance, and how a computer “mines” cryptocurrency. My husband, a nerd, has spent untold hours studying the basics of quantum physics with the help of Professor ChatGPT. Our shared account is littered with instructional gems.
You can tailor the bot’s conversational and instructional style in your initial prompt, as my husband does, or by editing your custom instructions. I’ve specified, among other things, that I am a journalist at a local newspaper writing for a general audience, and that I’d like ChatGPT to reread our entire chat before each response.
While ChatGPT may write poorly, it copy-edits fairly well — at least when given the right prompts. I’ve been most successful using Wharton School of the University of Pennsylvania professor Ethan Mollick’s formula for writing ChatGPT “spellbooks” — packages of prompts that nudge the software into using a specific type of reasoning in a specific order.
This prompt, which works in GPT-4, instructs ChatGPT to behave like a friendly editor who gives general feedback and is extra-sensitive to flaws I’d like to fix in my own work: excess adjectives and adverbs and an overreliance on auxiliary verbs. Once ChatGPT identifies those issues, I’m free to leave them or correct them as I like. For instance, I just removed about five needless uses of the word “can” in this article after completing this exercise.
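A crude version of that flaw-spotting can even run locally before you spend a prompt on it. This is a rough heuristic sketch, not the spellbook itself: treating long “-ly” words as likely adverbs and checking against a short, assumed list of auxiliary verbs.

```python
import re

# Assumed shortlist of auxiliary verbs to flag; expand to taste.
AUXILIARIES = {"can", "could", "may", "might", "would", "should", "will"}

def flag_weak_words(text):
    """Count likely adverbs (long words ending in -ly, a rough heuristic)
    and common auxiliary verbs in a draft."""
    words = re.findall(r"[a-z']+", text.lower())
    adverbs = [w for w in words if w.endswith("ly") and len(w) > 4]
    auxiliaries = [w for w in words if w in AUXILIARIES]
    return {"adverbs": adverbs, "auxiliaries": auxiliaries}

report = flag_weak_words("The plan could possibly work, and it will surely help.")
print(report)
```

Unlike the ChatGPT prompt, this can’t judge whether a given adverb is earning its keep; it only surfaces candidates for a human pass.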
ChatGPT can “read,” summarize and explain corporate earnings reports, affidavits and other legal, financial and regulatory documents — a good way to cull large files, or large bodies of files, for closer human review. (Remember that ChatGPT can hallucinate even in its summaries, so you’ll need to verify any bullet points it gives you.)
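One narrow piece of that verification is mechanical enough to automate: any phrase the summary presents in quotation marks should appear verbatim in the source. The helper below is a hypothetical sketch of that single check; the sample documents are invented.

```python
import re

def verify_quotes(summary_points, source_text):
    """Flag quoted phrases in a model-written summary that do not
    appear verbatim in the source document."""
    unverified = []
    for point in summary_points:
        for quote in re.findall(r'"([^"]+)"', point):
            if quote not in source_text:
                unverified.append(quote)
    return unverified

# Invented example: one quote checks out, one does not.
source = 'The board voted 5-2 to approve the permit despite "significant community opposition."'
points = [
    'The board cited "significant community opposition." in its minutes.',
    'Officials promised "a full environmental review." of the site.',
]
print(verify_quotes(points, source))
```

Anything this flags, and everything it cannot check (paraphrases, numbers, names), still needs a human read against the original file.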
Because ChatGPT has a context limit of roughly 4,000 tokens (a few thousand words), you must either excerpt relevant snippets for its analysis or chop your text into shorter segments using a tool like ChunkEase. From there, use a detailed summarization prompt in GPT-4 to generate summaries of the material. Paid users may also install a plug-in to analyze entire PDFs — but more on that later.
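The chopping step itself is simple enough to sketch. This is a minimal stand-in for a tool like ChunkEase, assuming paragraph-delimited text and using a character budget as a rough proxy for the token limit; the 3,500 default is an arbitrary choice that leaves headroom for the prompt.

```python
def chunk_text(text, limit=3500):
    """Split text into chunks under a character budget, breaking at
    paragraph boundaries so no excerpt is cut mid-thought."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > limit:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks
```

A real token count varies by model and tokenizer, so any character-based budget should be conservative.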
With a little more legwork and an audio recording, the above technique also works to keep an eye on public meetings you otherwise couldn’t attend.
First, run your personal recording or YouTube or Facebook stream through Otter.ai or a similar transcription tool, making sure to label individual speakers. Then use ChunkEase or a similar tool to break the transcript into shorter segments before feeding it to ChatGPT with a summarization prompt like the one described above. Once you’ve submitted all your segments, ChatGPT can combine its summaries into a high-level overview. One example came from a summer press call by New York State environmental advocates.
Give ChatGPT a dataset and a question about that data, and the tool will give you step-by-step instructions for finding the answer — even writing custom Python scripts or Excel or Google Sheets formulas. This is an application I haven’t tried much myself, and some data journalists have cautioned that ChatGPT isn’t practical for many advanced reporting projects. Still, as an intermediate user, I found it introduced me to several formulas I didn’t previously know as I worked through a recent audit of my household finances.
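The scripts ChatGPT writes for questions like these tend to be short aggregations. Here is a sketch of the kind of answer it might produce for a spending-by-category question; the transaction format and figures are invented for illustration.

```python
from collections import defaultdict

# Hypothetical transactions: (category, amount) pairs from a bank export.
transactions = [
    ("groceries", 82.50),
    ("utilities", 120.00),
    ("groceries", 45.25),
    ("dining", 60.00),
]

def totals_by_category(rows):
    """Sum spending per category and return the totals, largest first."""
    sums = defaultdict(float)
    for category, amount in rows:
        sums[category] += amount
    return sorted(sums.items(), key=lambda kv: kv[1], reverse=True)

print(totals_by_category(transactions))
```

As with any model-written code, run it on a sample you can verify by hand before trusting it on the full dataset.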
AI-based productivity apps will charge you $30 or more each month to “optimize” your work schedule — but ChatGPT does many of the same tasks for free with a little bit of human prep work. To start, I make a list of all my current assignments, tasks and deadlines. Then I ask ChatGPT to prioritize my tasks according to the Eisenhower matrix. From there, you can ask ChatGPT to suggest a plan of action, break each task into smaller steps and output your schedule as a table.
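The matrix sort itself is trivial once the judgment calls are made. This sketch shows the four-quadrant logic; the task list is invented, and the urgent/important labels still have to come from you (or from the model, which is the part worth delegating).

```python
def eisenhower(tasks):
    """Sort (name, urgent, important) triples into the four
    Eisenhower-matrix quadrants."""
    quadrants = {"do first": [], "schedule": [], "delegate": [], "drop": []}
    for name, urgent, important in tasks:
        if urgent and important:
            quadrants["do first"].append(name)
        elif important:
            quadrants["schedule"].append(name)
        elif urgent:
            quadrants["delegate"].append(name)
        else:
            quadrants["drop"].append(name)
    return quadrants

# Invented sample tasks with assumed urgency/importance labels.
tasks = [
    ("File records request before deadline", True, True),
    ("Outline long-term investigation", False, True),
    ("Return scheduling call", True, False),
    ("Clear old browser tabs", False, False),
]
plan = eisenhower(tasks)
print(plan)
```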
Generative AI is evolving quickly, and this guide will be obsolete as soon as I finish it. Already, paid users have access to a whole other suite of ChatGPT tools via OpenAI’s growing library of third-party plug-ins. For $20 each month, these plug-ins allow users to scrape public websites, create custom maps, generate summaries of PDFs and webpages, plumb the transcripts of earnings calls and YouTube videos and even search NPR’s Diverse Sources Database. Last week, OpenAI also announced that paid subscribers would soon get access to new features that let ChatGPT recognize the objects in images, read text aloud and browse the live web with Microsoft’s Bing search engine.
I’m excited to experiment with these new tools — more excited than scared, at least for the time being. Ask again in a year, of course, and I might feel differently.