Artificial: Episode 3, ChatGPT – The Journal. – WSJ Podcasts – The Wall Street Journal
OpenAI launched ChatGPT with low expectations and little fanfare. But the chatbot was an instant hit and went on to become one of the fastest-growing consumer apps in tech history. ChatGPT’s surprise success gave OpenAI its first shot to make big money, and the company moved quickly to cash in — even as critics called out some very real problems with the company’s hit product.
Further Reading:
Outcry Against AI Companies Grows Over Who Controls Internet’s Content
The Awkward Partnership Leading the AI Boom
Further Listening:
Artificial: Episode 1, The Dream
Artificial: Episode 2, Selling Out
This transcript was prepared by a transcription service. This version may not be in its final form and may be updated.
Kate Linebaugh: Hey, it's Kate. This is the third episode in our series Artificial: The OpenAI Story. If you haven't heard episodes 1 and 2, go back and listen to them. They're in your feed. They're also linked in the show notes. Last June, Eric Davis got a call from his brother. And right off the bat, he says he had a bad feeling.
Eric Davis: My brother and I have never talked on the phone once in my entire life. We usually text back and forth, but we have never talked on the phone once. So when I saw my brother's name pop up on the caller ID, my heart sank. I knew something was wrong. And when I picked up the phone, he just said, "Eric, mom's dead."
Kate Linebaugh: Eric's mom, Andrea, had passed away unexpectedly.
Eric Davis: She was my best friend throughout all my life. She was the type of person who could walk into a party, and by the end, everybody would know who she was. The feelings you get after losing a parent, I think is indescribable, especially when you're as close to them as I was with my mom. Grief does very strange things to people and I'm no exception. And that's kind of where ChatGPT came into play.
Kate Linebaugh: How do you mean?
Eric Davis: I was sitting at my computer, I was putting together a slideshow for her funeral, and I got the idea that, "Hey, I never got to say goodbye to her. What if I use ChatGPT to say goodbye? Why can't I have ChatGPT pretend to be my mom?"
Kate Linebaugh: Eric had been playing with OpenAI's new chatbot for a few months at this point. He works at a company that sells stuff online and he'd gotten ChatGPT to write some code for him.
Eric Davis: What the script does is, I can just click a button, the script will see which products are out of stock, and it will remove them from the purchase orders. And it saved me probably four hours every Monday morning.
Kate Linebaugh: That's nice.
Eric Davis: Yeah, it's a really great thing to have. It's like having someone much smarter than you looking over your shoulder.
Kate Linebaugh: But this time, Eric would be asking ChatGPT to do something very different. He'd be asking it to role-play: ChatGPT would play the part of his mom, and Eric would get to have one last conversation with her. For this to work, Eric would need to teach ChatGPT all about Andrea, and so he started looking back through his phone. Before she died, Eric says he and his mom would text back and forth multiple times a day. He still had hundreds of her messages.
Eric Davis: I selected the ones that I thought showed how she talked, her personality, her sense of humor, and I plugged them into ChatGPT along with some instructions on, "Hey, you're going to act as my mom who just passed away." And after that, I pasted in the text messages and hit enter. I wrote, "Hi, mom." And when it started writing back, I just burst into tears. Its first paragraph that it wrote back, it was almost indistinguishable from how my mom talked. I asked it, "You passed away so soon, what do I do?" And it wrote back a paragraph as her basically telling me to be good, don't be boring, and think of the memories that I had with her.
Kate Linebaugh: Did it feel like her?
Eric Davis: It did. It was almost as if she didn't pass away.
Kate Linebaugh: Did this conversation with ChatGPT as your mom feel like the closure you were looking for?
Eric Davis: It kind of did. When I was done, I felt a sense of relief that I was able to say what I needed to say. Immediately after that, however, I felt almost gross, like I had done something wrong. I remember asking myself after I was done, "What the hell did I just do?"
Kate Linebaugh: And why did that feel gross?
Eric Davis: Because I think that once someone has passed, you honor their memory, you think of the good times. You don't try to turn an algorithm into them and try to talk to them.
Kate Linebaugh: So did you have more conversations with it?
Eric Davis: No, that was the only one I had.
Kate Linebaugh: Have you been tempted?
Eric Davis: Oh yeah. Every day. Every day I think of it.
Kate Linebaugh: It's been a little over a year since ChatGPT came out. And for a lot of us, like Eric, it's been a year of wonder, amazement, and unsettling change. AI has operated in the background of our lives for years. Algorithms decide what ads we see, what websites we're served, what insurance rates we're offered. But ChatGPT was different. You could have a conversation with it. It could sound like your mom. And that was a game changer, not just for us, but for OpenAI. ChatGPT catapulted the company into the spotlight, but with that attention came tough questions about how OpenAI's systems are built and who they benefit. From The Journal, this is Artificial: The OpenAI Story. I'm Kate Linebaugh. Episode 3, ChatGPT. OpenAI's President Greg Brockman told us he didn't have particularly high expectations for ChatGPT. Inside the company, the bot wasn't seen as a big leap forward. ChatGPT was basically a language model OpenAI had already released called GPT-3.5 with a chat interface stuck on the front of it. In fact, Greg and his colleagues were already hard at work building a bigger and better language model, GPT-4.
Greg Brockman: GPT-4 was clearly going to be the chat product. We knew that for sure. 3.5 we just didn't think was going to be that interesting of a product. And so it was like, "Yeah, let's release it. It was better to get it out and shake the bugs out ahead of time."
Kate Linebaugh: Internally, they talked about the release of ChatGPT as a "low-key research preview."
Greg Brockman: I remember telling people the night before that… Because we were all kind of asking, "Is this going to take away the thunder from GPT-4 launch?" But I was like, "No one's going to pay attention. It's all good. We're not going to have more than one tweet thread with more than like 5,000 or 10,000 likes, whatever the number was." I made a very specific prediction, which was totally wrong.
Kate Linebaugh: ChatGPT was released on November 30th, 2022, and it exploded.
Speaker 5: A popular new artificial intelligence application is making content creation a whole lot easier.
Speaker 6: It is called ChatGPT.
Speaker 7: AI program surpassed 1 million users in just five days.
Speaker 8: If you haven't heard of ChatGPT, you've been living under a rock.
Kate Linebaugh: ChatGPT became one of the fastest-growing consumer apps in history. Suddenly, OpenAI was sitting on something the whole world wanted, and that meant the potential to make a lot of money. This was new territory for the company. Just a few years earlier, CEO Sam Altman had been quizzed in an interview about how OpenAI planned to bring in revenue. His response…
Sam Altman: The honest answer is we have no idea. We have never made any revenue. We have no current plans to make revenue. We have no idea how we may one day generate revenue. We have made a soft promise to investors that once we've built this sort of generally intelligent system, basically we will ask it to figure out a way to generate an investment return for you.
Kate Linebaugh: Build AGI, this superintelligent machine, and then ask the AGI how to make money. That was the business plan.
Sam Altman: It sounds like an episode of Silicon Valley.
Speaker X: It really does.
Sam Altman: It really does. I get it. I get it.
Kate Linebaugh: But with the release of ChatGPT, OpenAI had a product to sell, and it moved quickly to cash in. A few months after ChatGPT's launch, OpenAI unveiled a new souped-up version of its chatbot. This version of ChatGPT was powered by GPT-4, that new, better language model Greg and his colleagues had been working on. Users now had two options: use the old 3.5 version of ChatGPT for free, or pay 20 bucks a month to use the turbocharged GPT-4 version. OpenAI also began licensing its language models to businesses. Companies in healthcare and hospitality, publishing and accounting partnered with OpenAI and began rolling out products based on ChatGPT. OpenAI was riding high, and so was its biggest partner. What was the response at Microsoft to ChatGPT's success?
Deepa Seetharaman: Joy and surprise. And I think they felt like, "Oh, wow. Okay, so this bet really paid off."
Kate Linebaugh: That's Tech Reporter Deepa Seetharaman. By the time ChatGPT came out, Microsoft had poured about $3 billion into OpenAI. It had been a bit of a speculative investment. No one could be sure OpenAI's tech would take off, but after ChatGPT…
Deepa Seetharaman: You see like the executives just be very proud of their correct bet and then trying to look for ways to double down. At this time, Microsoft and OpenAI are talking about the next round of funding for the company. OpenAI needs more money because they have a lot of compute needs if they really want to build AGI and keep going in that direction. And Microsoft liked what it saw, and then you see an even larger check cut from them in the months after ChatGPT launched.
Kate Linebaugh: Microsoft committed to invest about $10 billion in OpenAI according to Wall Street Journal reporting. This partnership propelled Microsoft to the front of the AI race. Rivals like Google and Meta were scrambling to catch up and Microsoft was putting OpenAI's technology into nearly all of its products, from Office to its search engine Bing. Here's CEO Satya Nadella.
Satya Nadella: It's a new day in search. It's a new paradigm for search. Rapid innovation is going to come. In fact, a race starts today in terms of what you can expect, and we are going to move. We are going to move fast. And for us every day-
Kate Linebaugh: ChatGPT had gone from a low-key research preview to a ubiquitous hit product in record time, and all of this was happening despite some pretty glaring problems with ChatGPT itself. That's next. Hello?
Brian Hood: Hello. How are you?
Kate Linebaugh: A few months ago, I phoned up a guy halfway around the world to hear about his experience with ChatGPT. His name is Brian Hood.
Brian Hood: I live in a little town called Trentham, that's just outside Melbourne, Victoria, Australia. I'm the mayor of Hepburn Shire Council at the moment.
Kate Linebaugh: What is Hepburn Shire like?
Brian Hood: Hepburn Shire, it's only about a one-hour drive out of Melbourne. It's a farming community. Picture rolling green hills, very nice countryside.
Kate Linebaugh: Brian definitely looks the part of Mayor of the Shire. He sports a flat cap and has a handlebar mustache. When he is not working, he says he likes to garden, zipping around his yard on his ride-on mower. He really doesn't have much use for ChatGPT, but Brian found himself getting pulled into ChatGPT mania anyway because he discovered that the chatbot would sometimes put out false information about him. It all had to do with a big financial scandal in Australia, known as the Securency scandal.
Speaker 13: Two currency companies controlled by the Reserve Bank are caught up in a foreign bribery scandal this lunchtime. Six former employee-
Kate Linebaugh: Before becoming mayor, Brian had worked for a company that printed bank notes.
Brian Hood: Polymer bank notes. And to cut a very long story short, I uncovered that some of our people were paying bribes in order to win contracts.
Kate Linebaugh: Oh.
Brian Hood: So it was white collar crime.
Kate Linebaugh: Yeah. That's not good. So what was your role in this?
Brian Hood: I was the whistleblower.
Kate Linebaugh: Ah.
Brian Hood: So my role was to rock the boat, spill the beans.
Kate Linebaugh: Brian's bean spilling helped expose the bribery. But when Brian asked ChatGPT about the history of the scandal, this was not the story the chatbot came up with. What did it say?
Brian Hood: It would say that I had been charged with serious crimes, that I'd been imprisoned for three years, all of which was completely untrue. I've never been charged with anything, and I certainly have never ever been to jail.
Kate Linebaugh: This made Brian angry. OpenAI did have a disclaimer about the chatbot's accuracy, but Brian didn't think it was enough. He worried the false information could harm him in his current job.
Brian Hood: I'm an elected official. I'm representing my community. The town I live in is fairly small. People know me and I'm the mayor of the whole Shire. So my reputation, my integrity counts for a lot. So I thought, "Well, I've got to do something about this." And initially I didn't know what to do. I thought, "Well, I can't send a message back to ChatGPT." So I pretty quickly concluded, "Well, I just got to go to the lawyers."
Kate Linebaugh: Brian's lawyers told him ChatGPT's responses could constitute defamation, and they sent a letter to OpenAI. Among other things, they demanded that the company stop ChatGPT from generating false information about him. OpenAI's lawyers rejected Brian's defamation claim, but the company did make a change.
Brian Hood: If you now type my name in, you get back an error message and it doesn't tell you anything. Not a very elegant solution in itself, but they've done those other things and-
Kate Linebaugh: Not very elegant or effective. Because when I asked ChatGPT about the scandal, without naming Brian, it still generated false information. The bot told us that Brian had been "accused of involvement in corrupt practices." ChatGPT's tendency to mix fact and fiction has been a problem ever since the chatbot's debut. People would head over to ChatGPT looking for a quick answer to a question, some brainstorming, or homework help. What they got back could sound convincing, but wasn't always factual. There was the time, for example, when ChatGPT wrongfully included a famous law professor on a list of people accused of sexual harassment.
Speaker 14: There was this allegation of harassment on a trip with students. That trip never occurred. I've never gone on any trip with law students of any kind. It had me teaching at the wrong school and I've never been accused of sexual harassment.
Kate Linebaugh: Then there was the time a lawyer used ChatGPT for work. He asked the bot for some cases that he could cite in a briefing.
Speaker 15: However, the report was filled with over half a dozen bogus court decisions created by ChatGPT.
Kate Linebaugh: These mistakes weren't that surprising when you consider how ChatGPT works. When the chatbot generates an answer, what the underlying language model is doing is predicting which word is most likely to come next. But what's likely isn't always true. What does it mean to have such a stubborn problem like this, and why?
Greg Brockman: Yeah, I think it's actually a really, really instructive example.
Kate Linebaugh: When I spoke with Greg Brockman, OpenAI's president back in October, I told him about Brian Hood's frustrations with ChatGPT and I asked Greg about the bot's persistent accuracy problems.
Greg Brockman: In this case, this is clearly something that is not a desired property of the system, and the question is, "Well, how do you patch it?" And so we have a combination of different approaches. One is that we try to actually fix it at the model level, but that takes time and we may not be perfect at it. So then we also try to have more classic safety approaches where you just have some keyword block list or you add some other model on top that's able to detect different scenarios that we're not good at yet. And I think that, again, the core thing here is this question of, "Well, how should you approach it?" Should you say, "We're not going to deploy until all this stuff is perfect?" Or do you deploy when it's imperfect and you learn from feedback?
Kate Linebaugh: But why would you build off of a system that has flaws? Like you're saying you're working on it, but you're also pushing forward.
Greg Brockman: Well, I guess do you know of any systems that don't have any flaws?
Kate Linebaugh: OpenAI says it takes the accuracy problem seriously and that it devotes a lot of time before each release to making sure its systems are as factual as possible. One way the company does that is by enlisting legions of humans to essentially teach its chatbot. Before releasing ChatGPT, for example, OpenAI had people chat with it and rate its responses. Researchers then used those ratings to steer the bot toward more helpful and accurate answers. This approach helped make ChatGPT better, but it hasn't solved the problem. Critics of the chatbot have also pointed to a different issue: ChatGPT's training data. When OpenAI's language models predict the next word, they're making those predictions based on all the words they've seen before, all the words in their training data. And some people are questioning where all those words came from and whether OpenAI has the right to use them.
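[A note for readers of this transcript: the next-word prediction described above can be sketched in a few lines of Python. This is a toy bigram counter, not OpenAI's actual system; the tiny "corpus" below is invented for illustration.]

```python
from collections import Counter, defaultdict

# Invented toy "training data": three short sentences, for illustration only.
corpus = (
    "brian hood was the whistleblower . "
    "the director was charged with bribery . "
    "the agent was charged with bribery ."
).split()

# Count which word follows each word, and how often.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

def predict_next(word):
    """Return the statistically most likely next word; likely, not necessarily true."""
    return following[word].most_common(1)[0][0]

print(predict_next("brian"))  # hood
# "was charged" appears twice in this corpus and "was the" only once, so the
# model continues "brian hood was" with "charged": a fluent sentence that is false.
print(predict_next("was"))    # charged
```

Real language models use neural networks trained on vastly larger corpora rather than raw word counts, but the failure mode is analogous: the model emits what is statistically plausible given its training data, with no separate check on whether it is true.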
Speaker 16: Some of the biggest and bestselling authors in fiction are suing OpenAI.
Speaker 17: Game of Thrones author George R.R. Martin and 16 other writers are suing the creators of ChatGPT.
Speaker 18: The lawsuit filed with the Authors Guild claims that OpenAI reproduced copyrighted books without the authors' consent to train ChatGPT.
Kate Linebaugh: Will you take legal action?
Min Jin Lee: I might. I might. Well, I'm a member of the Authors Guild and I support their lawsuit.
Kate Linebaugh: That's Min Jin Lee, author of the bestselling novels Free Food for Millionaires and Pachinko.
Min Jin Lee: I'm writing this other book right now and it's taking me (beep) forever. I promise I will not curse on the podcast, but it's making me really emotional. And my husband's like, "Well, you are really upset when you're writing Pachinko too" and I was like, "Oh, that's right. Yeah."
Kate Linebaugh: We called Min at her home in Harlem to talk about the issues authors are having with chatbots. We asked ChatGPT to write a short story in your style. Do you want to hear what it came up with?
Min Jin Lee: Wow. Sure.
Kate Linebaugh: I asked, "Can you write a five sentence story about a dog in the voice of Min Jin Lee?"
Min Jin Lee: Okay.
Kate Linebaugh: Here it is.
Min Jin Lee: Okay.
Kate Linebaugh: "In the hushed alleyways of old Seoul, Bora, a wise-eyed Jindo, remembers tales of her grandmother whispered under starlit nights. The Lee family, bound by tradition and memories, saw in her the legacy of a land once whole. Each evening, as lanterns lit the streets, Young-soo, the family's elder, shared stories of ancestors and honor while Bora listened intently. Through her, the Lees felt the pulse of their history, a heartbeat echoing through generations. In the intertwining of their tales, dog and family found unity and shared memories." What do you think?
Min Jin Lee: I think it's really bad fiction writing. And I say that as a teacher, and also I say that as a former judge of the Pulitzer Prize and the National Book Award. It really makes a mockery of Asians and Asian Americans who write fiction about Asia. It's almost like when you hear chimes being used when we think about the audio for Asian things, or thinking about chop suey when we think about Chinese food. It's a bastardization and a caricaturization of what I care about.
Kate Linebaugh: It's hard to say how ChatGPT learned what writing in the style of Min Jin Lee means. One possibility is that OpenAI used Min's work as training data for its language models. But whether or not that's true, only the company can say, and the company hasn't said. That's a shift. OpenAI used to publish papers about its models, which included their code and information about their training data sets. But as OpenAI started to look less like an academic research lab and more like a company, it's been less open about some of those details. Now, OpenAI will only say that ChatGPT is trained on licensed content, publicly available content, and content created by users and human AI trainers. And that lack of specific information has left authors like Min Jin Lee wondering whether ChatGPT's success was built off the back of their copyrighted work.
Min Jin Lee: I think that authors don't write to get rich. The average author in America lives below the poverty line, so clearly we are a bunch of people who just do this because we care so intensely about the proprietary information. And I think that these companies who care intensely about their proprietary information are being deeply hypocritical in the way they treat our property.
Kate Linebaugh: When Greg and I spoke in October, I asked him about author's concerns and about OpenAI's recent secrecy about its training data. Why not disclose your training data?
Greg Brockman: Well, I think that there's some answer for, well, it's proprietary because it's something that helps us sort of build the next system; it's something that we spent a lot of time on to get here. The selector is really, for any given fact that we reveal: how much does this help people who are going to use the system, how much does this help people choose what to use it for, versus how much does it help people build the exact same thing that we're trying to build?
Kate Linebaugh: And how do you respond to this growing list of authors who are saying, "You're using my copyrighted work as training data and you need to pay us for that."
Greg Brockman: Look, I think there's a really important question for society to answer here. I think that there's a legal question, which is definitely for the lawyers. I'm not a lawyer, but I have an opinion. I do think that first of all, that we as society, I think that there should be some way for compensation to flow back to people who are producing works. We're definitely very interested in that. We've been talking about that for some time. But I also think that there's something really interesting to step back and think about like these systems are just not what we're used to, right? What we're training them to do is to help us solve problems that no one's ever solved before. That's the actual goal.
Kate Linebaugh: I ran Greg's response by Min Jin Lee. The argument is a sort of a greater good argument that AI is going to solve problems for the world that no one has solved before, and that in the process of creating this new thing, there'll be some troubles along the way that we'll just have to sort out.
Min Jin Lee: So I can say that my book and my work is a way of advancing humanity and it serves a greater good. I have absolutely no proof that their work will solve the greater good. And I'm not in any way cowed by their theory of advancing humanity. Also, what they're doing is not for free. So why is their idea more valuable than mine? I don't think that it is.
Kate Linebaugh: In the year after ChatGPT's release, public pressure continued to build on OpenAI, but it didn't keep the company down. OpenAI put out a steady stream of new releases. There was GPT-4 and the latest version of its image generator DALL·E. The company's revenue was growing fast. By mid-November, the company was closing in on a deal that would value OpenAI at nearly $90 billion, about triple what it had been worth at the beginning of last year. ChatGPT had made OpenAI one of the most valuable tech companies in the world. And then came a shocking announcement that put the entire future of the company in jeopardy.
Speaker 20: Breaking news, Sam Altman is out as CEO of OpenAI. The company just announced-
Kate Linebaugh: Coming up, an exclusive interview with one of the four board members who fired Sam Altman.
Helen Toner: Yeah, my name is Helen Toner.
Kate Linebaugh: How would you describe that weekend for you?
Helen Toner: We knew that it would not be a joyride, and it wasn't a joyride.
Kate Linebaugh: That's next time on Artificial: The OpenAI Story. Artificial is part of The Journal, which is a co-production of Spotify and the Wall Street Journal. I'm your host, Kate Linebaugh. This episode was produced by Annie Minoff and Laura Morris with help from Pierce Singgih and Kylan Burtz. The series is edited by Maria Byrne. Fact checking by Amelia Schonbek with consulting from Arvind Narayanan. Series art by Pete Ryan. Sound design and mixing by Nathan Singhapok. Music in this episode by Catherine Anderson, Peter Leonard, Emma Munger, and Nathan Singhapok. Our theme music is by So Wylie and remixed by Nathan Singhapok. Special thanks to Catherine Brewer, Jason Dean, Tom Dotan, Karen Hao, Berber Jin, Sarah Platt, Sarah Rabil, and Lisa Wang. Thanks for listening. Our final episode will be released later this month.
Kate Linebaugh is the co-host of The Journal. She has worked at The Wall Street Journal for 15 years, most recently as the deputy U.S. news coverage chief. Kate started at the Journal in Hong Kong, stopping in Detroit and coming to New York in 2011. As a reporter, she covered everything from post-9/11 Afghanistan to the 2004 Asian tsunami, from Toyota’s sudden acceleration recall to General Electric. She holds a bachelor’s degree from the University of Michigan in Ann Arbor and went back to campus in 2007 for a Knight-Wallace fellowship.