ChatGPT plugs into the internet’s real-time data firehose—again

Hello and welcome to Eye on AI.
For today’s big story, we’re back to ChatGPT. OpenAI this past week announced its chatbot can now browse the internet in real time. And while the update didn’t even warrant a blog post from the company (the news was announced on LinkedIn and X/Twitter), it’s significant both for how ChatGPT functions and for the future of search.
The new capability—referred to as Browse with Bing—means ChatGPT is no longer limited to data from before September 2021, the cutoff of its original training data. It’s currently only available to Plus and Enterprise customers, though OpenAI says it will expand to all users “soon.”
This is actually OpenAI’s second go at rolling out live browsing for ChatGPT. The company released the capability in June before quickly pulling it after the feature was found to bypass paywalls and privacy settings. There have been other ways for ChatGPT users to tap the current knowledge of the web, such as third-party extensions or Bing Chat, which is powered by GPT-4. But the wide-scale integration of live browsing into ChatGPT marks a turning point for the tool as it continues to lead the growing parade of AI chatbots and assistants.
It also pushes forward the conversation about the role generative AI will play in search. Crucially, ChatGPT Browse with Bing will provide links citing where it found the information it surfaces, but there’s still reason to be skeptical about factual accuracy. Generative AI’s “hallucination” problem, wherein these tools blatantly make up information and present it as fact, continues to run wild. And while citing sources sounds like the surest way to ensure accuracy (it is how human writers back up their work, after all), we’ve already seen that generative AI tools aren’t too great at this either. Just last week, for example, we wrote about New York Times columnist Kevin Roose’s failed experiments using Google’s Bard for email, where the generative AI tool repeatedly misattributed where it got information, in addition to both misconstruing and making up information. And that was limited to what was in Roose’s own inbox, not even the whole web.
It’s one thing to put information into ChatGPT and ask it to summarize, create bullet points, or perform other tasks with the provided information. But when it comes to pulling in outside content, it’s clear users will still need to fact-check every piece of information and its source if they want to reach any level of certainty that what ChatGPT said is true. That sounds like a lot of work. And it raises the question: Is this truly a better way to search? Or if not now, will it be in the future?
These questions also apply to Bard, which can likewise access the web. But interestingly enough, they’re the same questions people have been increasingly asking about a totally different type of platform, TikTok. While the idea that TikTok is better for search than Google was initially framed as a Gen Z preference (and indeed one survey suggests that 51% of Gen Z women prefer the app over Google for search), all sorts of users, marketers, and even tech editors are now seriously exploring the possibility. And it’s easy to see why: Google search pages today are filled with so many ads and so much made-for-SEO content that getting the information you want requires navigating a seemingly endless thicket of extraneous matter.
TikTok, however, benefits from the clearly successful short-form video format, where any query is likely to be answered directly by a friendly face delivering the information with personality, possibly even with a hit song playing in the background. And ChatGPT offers a super simple interface and conversational experience, not to mention a host of additional capabilities.
But TikTok has the same problem as ChatGPT in that you have to circle back to Google to fact-check what you just learned. TikTok creators may deliver answers in more captivating ways, and there are reputable people sharing accurate information on the platform, but there’s also tons of misinformation. And in an interesting twist, TikTok recently confirmed it’s testing surfacing Google results in its search pages, showing how the platform is indeed leaning into search (and how Google might have a future plugging its search into other platforms).
All of this is to say that while platforms like TikTok, ChatGPT, and Bard have a ways to go in the accuracy and trust departments before they can dethrone classic Google, they’re clearly pushing the bounds of search. Google still holds over 90% of the search engine market (as Microsoft CEO Satya Nadella lamented in court on Monday), but with better interfaces starting to grab users’ attention, it’s becoming possible to imagine a world where Google as we know it becomes a plug-in or reference resource, rather than the global go-to for any and all inquiries.
And with that, here’s the rest of this week’s AI news.
But first…a reminder: Fortune is hosting an online event next month called “Capturing AI Benefits: How to Balance Risk and Opportunity.”
In this virtual conversation, part of Fortune Brainstorm AI, we will discuss the risks and potential harms of AI, centering the conversation on how leaders can mitigate the technology’s potential negative effects so they can confidently capture its benefits. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
Google adds tool for publishers to opt out of data scraping for AI training. The company announced Google-Extended, a control publishers can set in their robots.txt files if they want to prevent Google’s crawlers from offering up their content as training data for Google AI technologies. The company confirmed in July that it uses public web data to power AI services like Bard.
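Per Google’s announcement, Google-Extended isn’t a separate crawler but a user agent token honored through the existing robots.txt convention. A publisher opting its whole site out of AI training would add something like the following (the blanket Disallow is just an illustration; it can be scoped to specific paths):

```
# Opt this site's content out of use as training data for Google's AI models.
# This token does not change how Googlebot crawls and indexes pages for Search.
User-agent: Google-Extended
Disallow: /
```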
Meta admits Facebook and Instagram posts were used to train new AI assistant. That’s according to Reuters, which reports the company did not use private messages or private posts shared only with family and friends. "We've tried to exclude datasets that have a heavy preponderance of personal information," said Nick Clegg, Meta’s president of global affairs. Will Meta provide an opt-out next?
U.K. officials negotiate for access to examine the internal workings of AI models from OpenAI, Anthropic, and Google DeepMind. That’s according to the Financial Times. The companies agreed to open their models for government safety research purposes in early June, but the extent of the access was never agreed upon. As calls for transparency continue, U.K. officials hope to reach an agreement to examine the models before the country hosts the first global summit on AI safety in November.
France’s competition authority raids Nvidia’s French office in cloud antitrust inquiry. That’s according to the Wall Street Journal. The agency conducted the raid on suspicion that the chipmaker engaged in anticompetitive practices; the action is part of a broader inquiry into the cloud sector. It’s the first significant regulatory action against Nvidia, which is widely considered to be the only viable option for the type of chip needed to run the latest AI models.
AI bias research. Studies have repeatedly surfaced racial biases in computer vision technologies, and now a new paper from researchers at Sony AI and the University of Tokyo reveals additional ones. It also proposes a method for identifying and assessing such biases.
The paper suggests that while the Fitzpatrick skin type classification system for assessing skin color bias in computer vision is effective, it focuses only on the light-to-dark aspect of skin tone. The researchers argue it’s also necessary to account for skin hue—the range from red to yellow—and that doing so revealed additional skin-color biases in both computer vision datasets and models. Read the full paper here.
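To make those two axes concrete, here’s a minimal Python sketch (not the researchers’ code; the sample pixel values are invented for illustration) that converts an sRGB pixel to CIELAB and reads off perceptual lightness along with a red-to-yellow hue angle, one standard way to quantify the hue dimension the paper describes:

```python
import math

def srgb_to_linear(c: float) -> float:
    """Undo sRGB gamma; input channel in [0, 1]."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(r: int, g: int, b: int) -> tuple[float, float, float]:
    """Convert an 8-bit sRGB pixel to CIELAB (D65 white point)."""
    rl, gl, bl = (srgb_to_linear(v / 255.0) for v in (r, g, b))
    # Linear sRGB -> CIE XYZ (standard D65 matrix)
    x = 0.4124564 * rl + 0.3575761 * gl + 0.1804375 * bl
    y = 0.2126729 * rl + 0.7151522 * gl + 0.0721750 * bl
    z = 0.0193339 * rl + 0.1191920 * gl + 0.9503041 * bl
    # Normalize by the D65 reference white, then apply the Lab transfer function
    def f(t: float) -> float:
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def skin_descriptors(r: int, g: int, b: int) -> tuple[float, float]:
    """Return (perceptual lightness L*, hue angle in degrees) for one pixel."""
    L, a, b_star = rgb_to_lab(r, g, b)
    hue = math.degrees(math.atan2(b_star, a))  # ~0 deg = red, ~90 deg = yellow
    return L, hue

# Two invented skin-tone pixels with similar lightness but different hue:
print(skin_descriptors(233, 193, 133))  # yellower hue
print(skin_descriptors(233, 183, 163))  # redder hue
```

The two sample pixels land at nearly the same lightness but noticeably different hue angles, the kind of variation a light-to-dark scale alone would miss.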
Why ‘AI copilot’ startups are so hot with VCs right now —Anne Sraders
A one-year-old U.K. startup says it’s achieved a major breakthrough in AI safety by scoring high on a video game —Jeremy Kahn
Google finally gave teens access to generative AI in search. They were probably already using it anyway —Rachyl Jones
Harvard’s cybersecurity guru warns that AI is ‘uniquely suited to internet-era propaganda’ as elections face Russian, Chinese, Iranian disinformation campaigns —Bruce Schneier and The Conversation
How generative AI is placing CFOs at the forefront of company strategy —Sheryl Estrada
NSA starts AI security center with eye on China and Russia as general warns U.S. lead ‘should not be taken for granted’ —the Associated Press
A day in the life of a data labeler. Data labeling, the tedious task of annotating data with labels and context so machine learning models can learn from it, is getting the side hustle treatment on TikTok. In a recent viral video that combines the popular “Day in the Life” format with a 100-day challenge, a poster named Jackie invites viewers to follow along as she challenges herself to make an extra $100 a day for 100 days by doing data annotation gig work.
In the video, she starts by making a latte (as one does in “Day in the Life” videos) and heads to the office for her full-time job. She stops at the grocery store on her way home, checks Swagbucks for other online money-making opportunities, and then starts annotating. She labels data intermittently throughout her entire night in between making dinner and baking brownies—34 minutes of work for $15.58, another hour for $25, and so on until she finally reaches $104.75 after 4 hours and 9 minutes of annotating. The comments are full of people asking for advice on how they can get started doing the same.
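As a quick back-of-the-envelope check (using the onscreen figures as stated and ignoring taxes), those totals imply an effective rate of about $25 an hour:

```python
# Effective hourly rate implied by the video's totals
# (pre-tax, and assuming the onscreen figures are accurate)
earned = 104.75        # dollars earned annotating
hours = 4 + 9 / 60     # 4 hours 9 minutes of labeling
print(f"${earned / hours:.2f}/hour")  # prints $25.24/hour
```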
It’s a fascinating collision of the labor and AI issues boiling up in society at the moment. Here’s a young woman working all day at a nine-to-five, then spending almost the entirety of her nights earning just over $100 (less after taxes) and packaging it all up for internet consumption. The work she’s doing may seem simple and innocuous, but it’s actually fueling what will be the biggest labor transformation in human history (for better or for worse). And then there’s the fact that the vast majority of this type of labor has been performed by exploited workers recruited from impoverished countries by the world's richest companies and paid as little as $1.46 per hour for the same work (or in many cases, for annotating content that features explicit images of violence and abuse). Just this past week, a group of U.S. lawmakers sent a letter to Amazon, Google, OpenAI, and other tech giants demanding answers about the working conditions of data workers.
To be clear, none of this is a criticism of the poster. But it’s certainly some real brain food.
This is the online version of Eye on AI, a free newsletter delivered to inboxes on Tuesdays. Sign up here.