AI, explained: The everyday person's guide to ChatGPT

Microsoft announced earlier this month that it will integrate ChatGPT into its product suite.
TikTok took nine months to get its 100 millionth user. Instagram took two years. 
ChatGPT, the chatbot released in late November 2022 by San Francisco’s OpenAI, reached the milestone in just 60 days. 
The viral growth came with a flurry of breathless journalism and Big Tech excitement. There was a $10 billion investment. The chatbot answered millions of queries, then passed law and business school exams. Microsoft said ChatGPT would transform its unpopular search engine Bing into a real Google competitor.
But the average person — even the relatively tech-savvy among us — has little clarity about how the technology actually works or where it’s headed. Companies whose products we use every day insist that artificial intelligence is the foundation of an ever-nearer future. So far, though, the concept can seem inscrutable. SFGATE tried out the new chatbot, took stock of the technology and asked AI experts to weigh in.
OpenAI CEO Sam Altman speaks during a keynote address announcing ChatGPT integration for Bing at Microsoft in Redmond, Wash., on Feb. 7.
Take any task you’ve completed today. Maybe you picked a friend out of a crowd, added some songs to a morning shower playlist or, like me, took a stroll to the coffee shop around the corner.
All these things take human intelligence. We have a goal, and we figure out the best way to reach that goal using the knowledge we have. But a computer could do those jobs too and often does. Instead of using thoughts and human experience, it uses collected data to complete a task. A computer considers traffic data and previously taken routes and maps a path to a coffee shop. Or it uses a database of photographs to select a face in a crowd, or it parses through listening habits to build a playlist.
Those are all tiny applications of artificial intelligence — a form of computing devoted to making machines that match or exceed our intellectual abilities. We’re used to it: Google Maps, Apple’s Face ID, Spotify playlists, Siri, spell-check, TikTok (or Twitter) “For You” pages. That sort of intelligent technology, up until now, has been intentionally obscured.  
AI exists already, and we use it — through the tech that undergirds our routines — every day. But those uses are narrow and often focused on areas where humans, with our meager processing power, are at a disadvantage. (There’s no way any human could take every single TikTok posted today and sort them into an interesting feed for millions of users, for instance.) 
But now, researchers have broken through into new territory. “What has suddenly happened is that we have AIs that can talk — effortlessly, confidently, unflappably, volubly,” said Selmer Bringsjord, the director of the AI and Reasoning Lab at Rensselaer Polytechnic Institute. 
Google’s headquarters are seen in Mountain View, Calif., on Oct. 28, 2021. The tech giant recently unveiled its chatbot Bard, to compete with OpenAI’s ChatGPT.
ChatGPT, like Google’s new chatbot Bard, doesn’t just analyze data or organize content. The “G” stands for “generative” — able to create something on its own. Users can type in basically any prompt on the platform, and the chatbot will spit out a coherent answer — usually. ChatGPT can edit text, mimic writing styles and brainstorm lists of ideas. It’s like an email exchange with a friend, if your friends are emotionless, untiring, deferential robots.
Like other types of AI, ChatGPT takes in data and uses algorithms to form an answer. In chatbots’ case, these “large language models” are trained on massive data sets of text to understand how people interact — what to say and when. The large language models that ChatGPT relies on use text from all across the internet and recorded human speech — including more objectionable content.
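To make the idea concrete, here is a toy sketch in Python (purely illustrative, and nothing like OpenAI's actual code). At their core, language models learn from example text which words tend to follow which, then generate new text by sampling from those patterns; real models do this over trillions of words with far more sophisticated math.

    # A minimal sketch, not OpenAI's actual method: language models are,
    # at their core, trained to predict the next word given the words so
    # far. This toy model learns word-to-word patterns from a tiny
    # "data set"; real models do the same over trillions of words.
    import random
    from collections import defaultdict

    corpus = (
        "the chatbot answered the question "
        "the chatbot wrote the email "
        "the user asked the chatbot"
    ).split()

    # Count which word tends to follow which.
    following = defaultdict(list)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word].append(next_word)

    # "Generate" text by repeatedly sampling a likely next word.
    word = "the"
    output = [word]
    for _ in range(6):
        if word not in following:
            break
        word = random.choice(following[word])
        output.append(word)

    print(" ".join(output))  # e.g. "the chatbot answered the email"

Run repeatedly, the little model produces different but plausible strings; scale the same trick up by many orders of magnitude, and you get something like ChatGPT's fluency.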
Bringsjord has a metaphor for these models: If someone who doesn’t know any Mandarin listens to two native speakers talk all day, they might pick up a few sounds or patterns.
“At the end of the day, if you’re pretty sharp, and some [Mandarin] speaker walks up to you and quickly says something, … you could give it a shot: You could sound out a sequence of sounds based on what you’ve heard earlier through the day, and your interlocutor might find it quite competent,” Bringsjord said.
But “pretty sharp” is an understatement — most people couldn’t pick up a new language even after listening to it for an entire week. Large language models, he said, “don’t just have a record of the sounds over one day of two people; they have a record of all the sounds, replete with recurring patterns, over centuries, involving billions of humans, so they’re pretty convincing.”
Another way of looking at ChatGPT is as a successful sampling from a huge, complicated model of the internet, said Nathan Lambert, a research scientist at the AI community platform Hugging Face. Lambert told SFGATE he suspects ChatGPT’s success in (largely) cutting out the offensive text helped feed its viral loop of popularity. (But more on this in a bit.)
As of this article’s publication, ChatGPT and Bard are in research phases — OpenAI and Google want feedback to make the chatbots work better. On Feb. 7, Microsoft and OpenAI announced that ChatGPT would integrate with the Redmond-based tech giant’s Bing search engine to create a “copilot for the web,” an AI-powered conversation alongside typical search results. 
But as the real-world use of this technology blazes forward, AI ethicists, journalists and even a company executive are asking for regulation and outside input.
The problem of providing inaccurate information may continue to hound OpenAI (and Google, for that matter). Quirks in the data sets — the internet didn’t ask to be scraped — have led to well-publicized issues with ChatGPT’s and Bard’s answers. Strange keywords have prompted bizarre responses, and the chatbots stumble over logic and math problems, yet they still answer in unflappable, confident tones.
The companies behind the platforms take pains to note that the chatbots can give inaccurate answers, but if wacky responses and incorrect calculations keep popping up, the tech is less likely to be useful for search, brainstorming, parsing documents or writing copy — not to mention more demanding use cases like providing thorough customer service to humans.
And there’s a human cost to these innovations. According to a Time investigation, OpenAI, in its attempt to build safeguards against more nefarious uses of its generative text, used outsourced Kenyan labor to make ChatGPT less toxic. Workers, often paid under $2 an hour, labeled tens of thousands of snippets of offensive text. Those labeled examples of violence, hate speech and abuse could be fed into a type of filter for ChatGPT to make sure the chatbot wouldn’t echo any toxic language from its vast data sets. 
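The basic mechanics of such a filter can be sketched in a few lines. This is a hypothetical illustration using the open-source scikit-learn library, not OpenAI's actual safety system: labeled examples teach a classifier to flag text that resembles the toxic samples.

    # A minimal sketch, assuming a scikit-learn-style pipeline; this is
    # not OpenAI's actual safety system. Labeled examples teach a
    # classifier to flag text, which is the basic idea behind the kind
    # of filter described above.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical labeled snippets: 1 = flag as toxic, 0 = allow.
    texts = ["you are worthless", "have a great day",
             "i will hurt you", "thanks for the help"]
    labels = [1, 0, 1, 0]

    # Turn text into word-frequency features, then fit a simple classifier.
    flagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
    flagger.fit(texts, labels)

    print(flagger.predict(["you are the worst"]))  # likely [1]: flagged

With only a handful of examples, a toy flagger like this is unreliable; OpenAI's version reportedly relied on tens of thousands of labeled snippets.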
For now, ChatGPT has captured the public’s attention more than any other chatbot or AI-based product because it’s fast and feels right, even when it isn’t. It brings the internet — diverse, wide-ranging, massive — into a simple back-and-forth conversation.
“The big step has already kind of happened to ChatGPT where it went from [where] researchers could use language models to everyone can now use the ChatGPT website,” Lambert said. “I think of that as an iPhone moment, and now it’ll just keep getting better every year.”
Stephen Council is the tech reporter at SFGATE. He has covered technology and business for The Information, The Wall Street Journal, CNBC and CalMatters, where his reporting won a San Francisco Press Club award. Signal: 628-204-5452 Email: stephen.council@sfgate.com
