How to talk to an AI: Tips for getting the most out of chatbots like … – The Washington Post

How can an AI chatbot help me?

ChatGPT doesn’t come with an instruction manual. But maybe it should. Only a quarter of Americans who have heard of the AI chatbot say they have used it, Pew Research Center reported this week.
“The hardest lesson” for new AI chatbot users to learn, says Ethan Mollick, a Wharton professor and chatbot enthusiast, “is that they’re really difficult to use.” Or at least, to use well.
The Washington Post talked with Mollick and other experts about how to get the most out of AI chatbots — from OpenAI’s ChatGPT to Google’s Bard and Microsoft’s Bing — and how to avoid common pitfalls. Often, users’ first mistake is to treat them like all-knowing oracles, instead of the powerful but flawed language tools that they really are.
Here’s our guide to their favorite strategies for asking a chatbot to help with explaining, writing and brainstorming.
AI chatbots can be impressive, especially once you start to learn how to coax better answers from them. But understanding their limitations is at least as important as discovering their strengths, says Simon Willison, a British technologist and programmer.
It’s crucial to remember that they’re not human, and they’re not reliable sources of information, even about themselves. So if a chatbot makes a factual claim, verify it elsewhere. And if it’s acting like it has thoughts and feelings — or wants to break up your marriage — remember that it’s just playing off your prompts, drawing on billions of human interactions in its training data to predict the most likely response.
Similarly, if chatbots show cultural biases or say offensive things, it’s a reminder that they’ve ingested some of the ugliest material the internet has to offer, and they lack the independent judgment to filter that out.
AI may or may not be coming for your job. But if you familiarize yourself with its strengths and weaknesses, you’ll be better positioned to fend it off, or even turn it to your advantage.
For this story, The Post used OpenAI’s ChatGPT chatbot to enter the prompts and generate the responses shown above.
Often, finding the right prompt to generate an interesting or satisfying response required a process of trial and error, which experts say is common with today’s chatbots. Entering the same prompt as shown here may generate a different response, because responses can be context-dependent, personalized to users and even somewhat random. Chatbot makers such as OpenAI, Google, Microsoft and Anthropic also routinely update the underlying models, which can affect responses.
The prompts shown in this story are the same as those used to generate the corresponding examples. The responses shown in the story are taken verbatim from ChatGPT, with minor edits for length (for example, when ChatGPT included recipes, those were not included in the responses reproduced here). The responses were not edited for accuracy; inaccurate information is a well-documented weakness of large language models.
The tasks, tips and prompts in the story were informed by interviews with Ethan Mollick, an associate professor at the University of Pennsylvania’s Wharton School of Business; Simon Willison, a British technologist and software programmer; and Andrew Mayne, science communicator at OpenAI. The responses come from ChatGPT, using its GPT-3.5 and GPT-4 language models. The conversations were conducted between May and July.
Tatum Hunter contributed to this report.
Editing by Karly Domb Sadof, Emily M. Eng, Mark Seibel and Matt Callahan.