Here's how to use AI chatbots like ChatGPT – The Boston Globe

The late science fiction writer Arthur C. Clarke once wrote that “any sufficiently advanced technology is indistinguishable from magic.”
Now, when it comes to the new wave of generative artificial intelligence chatbots, it’s a matter of learning the sorcery.
In just over half a year, ChatGPT, Google’s Bard, and Bing’s new search engine — free services that generate humanlike prose from user prompts — have shifted from sci-fi to gimmick to near-ubiquity. Industries from health care to education to travel have begun implementing the technology to speed up or automate operations.
“Whenever there is new technology, then there is a massive hype — a massive fear, a massive hope,” said Sandra Wachter, a faculty associate at The Berkman Klein Center for Internet & Society at Harvard University. But as opposed to things like self-driving cars or the metaverse, in this case “the general public has access to it,” Wachter said.
Indeed, while the risks are real, it’s becoming increasingly clear that ChatGPT and its ilk aren’t going anywhere. The question is, are we ready to use them?
A Pew Research Center survey conducted in March found that while 58 percent of US adults had heard of ChatGPT, only 14 percent of people had tried it out for themselves.
“My first advice would be to play around with it,” said Wachter, “and to see what it can and cannot do.”
So fear not, technophobes; consider this your AI 101. The Globe spoke with a range of experts about the basics of the buzzy technology: how it works, how to use it, and how to avoid common pitfalls.
Chatbots like ChatGPT and Bard are powered by large language models, computer systems that have been fed mountains of text (or training data) to establish patterns about how language works. This machine-learning process teaches these systems to predict the most probable sequence of words to answer any given prompt. This is made possible by the structure of large language models, which are a type of neural network, designed to roughly emulate how the human brain processes information.
ChatGPT’s neural network is called a transformer, and it works by assigning numerical weights to the relationship between different tokens, which are short bits of text that the system is able to process. (These weights are how it predicts that the blank in “the cat in the ____” is more likely to be “hat” than “banana.”)
Globe reporter: Fill in the blank and explain how you did it: “Mary had a little [BLANK].”
This question and response were taken verbatim from a Globe reporter’s interactions with OpenAI’s ChatGPT. You may receive a different response if you ask ChatGPT the same question.
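To make the weighting idea concrete, here is a toy sketch. The scores below are invented for illustration, not ChatGPT's real weights: a model assigns each candidate token a numerical score, a "softmax" step turns those scores into probabilities, and the highest-probability token becomes the prediction.

```python
import math

# Invented scores for candidate completions of "the cat in the ____".
# A real model computes these from billions of learned weights.
scores = {"hat": 4.0, "mat": 2.5, "banana": -1.0}

def softmax(raw):
    """Turn raw scores into probabilities that sum to 1."""
    exps = {tok: math.exp(s) for tok, s in raw.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

probs = softmax(scores)
best = max(probs, key=probs.get)
print(best)  # → "hat"
```

The chatbot's answer, in other words, is simply whichever token comes out on top of this probability contest, repeated one token at a time.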
As the model is trained, systems like ChatGPT (the “GPT” stands for “generative pre-trained transformer”) are honed through a process called reinforcement learning from human feedback. This is where humans rank different outputs to guide the chatbot toward the sorts of responses they want it to spit out in the future.
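A deliberately simplified sketch of that feedback loop (not OpenAI's actual training code): a human picks the better of two candidate replies, and the preferred reply's score gets nudged up while the rejected one's goes down.

```python
# Toy preference update: human rankings push the model toward preferred
# outputs. Real RLHF trains a separate reward model and then optimizes
# the chatbot against it; this only captures the basic intuition.
scores = {"helpful reply": 0.0, "rude reply": 0.0}

def apply_feedback(preferred, rejected, step=1.0):
    """Nudge the preferred output's score up and the rejected one's down."""
    scores[preferred] += step
    scores[rejected] -= step

apply_feedback("helpful reply", "rude reply")
print(scores)
```

Repeated over many reviewers and many examples, these small nudges shape which kinds of answers the system favors.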
Think of it as “kind of like autocomplete” on your smartphone, said Brian K. Smith, a professor of information science and education at Boston College. Word by word, autocomplete suggests how you might complete a text message based on what you’ve already written. The difference with systems like ChatGPT, Smith says, is the mammoth amount of training data and the level of computing power.
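The autocomplete analogy can itself be sketched in a few lines, assuming a tiny invented corpus: count which word most often follows each word, then suggest the most common one. ChatGPT's leap is doing something like this with trillions of words, far longer context, and far more computing power.

```python
from collections import Counter, defaultdict

# A tiny invented corpus; a real model trains on mountains of text.
corpus = "the cat sat on the mat the cat ate the hat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def suggest(word):
    """Suggest the word most frequently seen after `word`, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # → "cat" (seen twice, vs. once each for the others)
```

Word by word, the same counting trick is what lets a phone keyboard guess the rest of a text message.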
The safest applications of chatbots, experts say, are ones with low stakes: meal-planning, brainstorming birthday party ideas, or jump-starting a new hobby. “I ask myself, is this a creative thing that I’m trying to do? Or is it something where factual accuracy is important?” said Rama Ramakrishnan, a professor of the practice at the MIT Sloan School of Management who focuses on generative AI.
Globe reporter: I’m trying to come up with an idea for a creative retirement party theme for my 61-year-old husband, who was an elementary school teacher. Could you come up with three ideas and summarize them to me in no more than 50 words?
Answer 1: “Back to the Classroom”: Transform the venue into a nostalgic classroom, complete with desks, blackboards, and school supplies. Encourage guests to dress up as their favorite school characters, and organize fun activities like spelling bees, recess games, and a “Teacher of the Year” award ceremony.
Answer 2: “A World of Adventures”: Take your husband on a retirement trip around the world without leaving the party. Set up different stations representing various countries, each offering unique food, decorations, and entertainment. Guests can “travel” from one destination to another, immersing themselves in different cultures.
Answer 3: “Teacher’s Tribute Concert”: Arrange a musical extravaganza in honor of your husband’s teaching career. Invite former students, colleagues, and local musicians to perform songs, skits, and dances inspired by education. Display a slideshow of memorable teaching moments while celebrating his lasting impact on generations of students.
This question and response were taken verbatim from a Globe reporter’s interactions with OpenAI’s ChatGPT. You may receive a different response if you ask ChatGPT the same question.
If accuracy is what you’re after, be on high alert. Chatbots contradict themselves. They misquote and fail to accurately cite their sources. In ChatGPT’s case, it’s mostly working off information from 2021 and earlier. Chatbots also have been known to “hallucinate,” or make up convincing-sounding statements altogether.
Also, since chatbots were trained on human writing, they have picked up some of our biases. The models have been trained to sidestep some of the more egregious prejudices — like overt sexism or racism — but there’s no way to filter out everything.
One useful approach is to ask the same question to multiple chatbots to see if the answers align. But the golden rule, experts say, is just “double-checking the information these tools are producing,” said Hima Lakkaraju, an assistant professor at Harvard University who specializes in machine learning.
And remember: Don’t mistake a bot’s eloquence for true comprehension.
“It’s not that it lies; it just has no sense of what’s true,” said Nir Eisikovits, a professor of philosophy at UMass Boston who focuses on the ethics of tech. “It was calibrated to sound rhetorically convincing, rather than to sift truth from falsehood.”
All that said, chatbots do have “practical applications,” said Usama Fayyad, the executive director for the Institute of Experiential Artificial Intelligence at Northeastern University. But good output requires good input.
Enter “prompt engineers,” a nascent group of professionals who attempt to write the perfect questions or requests to cajole a chatbot into giving you a desired response. But this skill is “more of an art right now,” said Fayyad, “because nobody knows how to do it as a science.”
There are, however, some tips and tricks. First, set the scene. Rather than asking ChatGPT, for example, to just write a tweet about why one brand of headphones is better than another, try teeing it up with some context, said Max Bridgland, a Boston-based prompt writer for the online marketplace Promptbase: You’re a passionate music producer. Your favorite headphones are Sony WH-1000XM4s. Write a 280-character tweet on why these are better than Apple AirPods.
Globe reporter: Write a 280-character tweet about why the Sony WH-1000XM4 headphones are better than Apple Airpods.
ChatGPT: 🎧 Sony WH-1000XM4 headphones vs. Apple Airpods? No contest! 🏆 The XM4s deliver mind-blowing sound quality, customizable noise cancellation, all-day comfort, and an unrivaled battery life. Immerse yourself in audio bliss with Sony’s superior tech. Say goodbye to compromises! #SonyXM4
Globe reporter: You’re a passionate music producer. Your favorite headphones are Sony WH-1000XM4s. Write a 280-character tweet on why these are better than Apple AirPods.
ChatGPT: 🎧 🎶As a passionate music producer, Sony WH-1000XM4s are my go-to headphones! They deliver an immersive audio experience with exceptional sound quality and powerful bass. The noise cancellation is top-notch, allowing me to focus on every beat. Sorry AirPods, but these cans take the crown! 👑 #SonyWH1000XM4
These questions and responses were taken verbatim from a Globe reporter’s interactions with OpenAI’s ChatGPT. You may receive different responses if you ask ChatGPT the same questions.
“Give it instructions for who it would be responding as, rather than just what it should be doing,” said Bridgland.

If all else fails, treat chatbots like used-car dealers: Never accept the first offer. Chatbots can “remember” ideas referenced earlier in a conversation, so you can ask follow-up questions or give instructions for specific changes you want: Shorten this or Make the tone more solemn or Rewrite this as a stand-up routine.
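That back-and-forth works because chat models are sent the whole running conversation on every turn. A minimal sketch, using the role/content message format popularized by OpenAI's chat API (the assistant reply here is invented for illustration):

```python
conversation = [
    # A "system" message sets the persona before the user ever speaks.
    {"role": "system",
     "content": "You are a passionate music producer."},
    {"role": "user",
     "content": "Write a 280-character tweet on why the Sony WH-1000XM4 "
                "headphones are better than Apple AirPods."},
    # Hypothetical reply, standing in for whatever the model returned.
    {"role": "assistant",
     "content": "As a music producer, the WH-1000XM4s are my go-to..."},
    # A follow-up can say "this" because the draft is in the history.
    {"role": "user",
     "content": "Shorten this and make the tone more solemn."},
]

# Each new request resends this full list, so the model "remembers"
# both the persona and the draft it is being asked to revise.
print(len(conversation))  # → 4
```

The "memory," in other words, is nothing more mysterious than replaying the transcript, which is also why very long conversations can eventually push early context out of reach.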
Short answer? Probably. “Because the chatbot is trying to collect as much data as it can — it’s trying its best to learn from these questions, interactions, feedback from the user — you don’t know which facts will get embedded in the next version of the training data,” said Fayyad.
ChatGPT, Bard, and Bing have all disclosed that they collect data on conversations and other user information to train their models. There may be ways to opt out of this or delete your data retroactively, but the best policy is to avoid sharing any personal, sensitive, or proprietary information with chatbots.
“If I wouldn’t want the world to know it,” said Smith, “I wouldn’t ask ChatGPT about it.”
It’s complicated.
OpenAI, for instance, generally gives users the ownership rights over content generated through ChatGPT. But when it comes to exercising that ownership, things get a little thornier.
The US Copyright Office has said that the question of authorship of AI productions is “necessarily a case-by-case inquiry,” but it broadly reserves copyright protections for “material that is the product of human creativity.” There may be exceptions, however, for AI-generated material that has been sufficiently modified by a human author.
There is also a chance that chatbots will regurgitate copyrighted content picked up from their training data — such as an exact quote from a book or article — without giving proper citation. And, of course, your workplace or school might have its own restrictions on using chatbots.
To avoid inadvertent plagiarism, it is wisest to treat AI-generated writing as a rough draft rather than a final product, said Ron N. Dreben, a partner at Morgan Lewis who specializes in intellectual property.
“The question always is, ‘How little can I change it to stay out of trouble?’ ” he said.
But like much to do with AI, there are not (yet) any firm answers.
Follow Dana Gerber @danagerber6.