ChatGPT’s move to lure businesses into the A.I. waters gets mixed reactions

Hello and welcome to Eye on A.I.
“Embrace A.I. or get left behind” has been the message plastered everywhere since ChatGPT opened the floodgates on generative A.I. Now that OpenAI has released an enterprise tier for the tool, the opportunity, and the decision, to jump on the train has quickly become very real for businesses. As Eye on A.I. discovered in conversations with various companies, the Enterprise version has delivered the peace of mind some needed to integrate ChatGPT into their businesses, but enough privacy and security questions remain open that many businesses will continue to hold out.
Employees at design platform Canva have found success using ChatGPT Enterprise to learn new areas of the codebase, troubleshoot bugs, write and fix complex spreadsheet formulas, categorize and analyze free-form data, and extract themes from transcripts and user interviews, according to Danny Wu, Canva’s head of A.I. products.
Canva, which had already integrated OpenAI technology into features like Magic Write, was among a select group of companies invited to beta test ChatGPT Enterprise weeks before its public launch. “We’re definitely impressed by the quality of responses provided, with team members reporting that it’s a time saver and provides practical advice with nearly every use,” Wu said.
Hi Marley, a Boston-based cloud insurance company that has been wanting to tap ChatGPT, finally got its chance to use the tech thanks to the improved security features of the enterprise tier, according to chief product officer Jonathan Tushman.
He cited the ability to sandbox data and the assurance that fine-tuning of models will remain private to the company’s systems, which he said will be “a big unlock for our customer base and for us.”
“Currently, with ChatGPT, we cannot put sensitive data through their pipes. But with ChatGPT Enterprise, they are not using data from customer prompts to train OpenAI models,” he said. “This is the big thing and the thing we’ve been waiting for.”
This is likely exactly what OpenAI is hoping for with the enterprise tier (which is also a way to start bringing in significant revenue from ChatGPT). But a closer look shows the tiers may not be as different security-wise as they seem, according to The Register. For example, while OpenAI stressed in its announcement that it will not train its models on enterprise customer data, the company already says it does not use data submitted through the API, on any tier, to train or improve its models. Even non-API consumers toying around with ChatGPT can opt out of having their interactions used for training. Additionally, OpenAI says some of its staff can access conversations taking place within the enterprise tier, even though those conversations are encrypted.
In all conversations regarding ChatGPT and its new enterprise offering, security and data governance seem to be top of mind. And for some, the current state of ChatGPT Enterprise’s security and data management is still not sufficient.
Executives at upskilling platform Degreed, for example, said they don’t envision tapping ChatGPT Enterprise anytime soon, in part due to concerns about security and compliance. 
“Given the current A.I. regulation and OpenAI being under regulatory investigation, our clients have concerns about licensing OpenAI as a vendor [and] using its outputs directly in the platform,” said Fei Sha, VP of data science and engineering at Degreed.
This is of particular concern for the company’s EU-based customers, as A.I. systems used in education are categorized as “high-risk” under the EU’s forthcoming A.I. Act, which would subject Degreed to stricter requirements around transparency, data governance, and other guardrails.
Additionally, ChatGPT Enterprise just didn’t pass the company’s cost-benefit analysis. Licensing any new vendor would trigger a resource-intensive vetting process in which all of the company’s clients would have to give permission and sign new agreements. And while Degreed recognizes the potential benefits LLMs could provide for its customers, Sha said the company is “keeping its options open” for other emerging LLMs and that “A.I. technologies should not be implemented for their own sake.”
For the company to consider ChatGPT Enterprise, Janice Burns, chief transformation officer at Degreed, said it would need to see transparent and measurable privacy and security practices, actionable insights to combat misinformation, and meaningful efforts to address the biases embedded within the LLM, in addition to quality recommendations.
“The excitement of innovation is undeniable,” she said. “But tools like ChatGPT for Enterprise also require guardrails and transparency around privacy and user data to make the technology fit for enterprise adoption.”
Online jewelry retailer Angara is another company that, while excited about the possibilities of A.I. tools, will not be using ChatGPT Enterprise. 
Cofounder and CEO Ankur Daga said Angara has been using A.I. tools for over a year to help customers find the right product faster and has seen its website conversion rate increase 20% as a result. The company also has a lot more on its A.I. product roadmap, such as upcoming features that will surface recommendations to customers based on text descriptions or uploaded images.
But when the company ran trials of ChatGPT, it was not pleased with the results. While ChatGPT got it right 80% of the time, the other 20% “can cause us to lose our customers’ trust, which is catastrophic in the jewelry business,” Daga said.
And with that, here’s the rest of this week’s A.I. news. 
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
The U.S. Copyright Office asks for the public’s thoughts on A.I. and copyright issues. The agency opened a public comment period and published a notice outlining its current challenges with A.I. and four main issues it’s exploring, including the use of copyrighted works to train models and the copyrightability of material generated using A.I. systems. The public comment period will run through Oct. 18 and is part of a larger effort by the agency to address issues around ownership, infringement, and fair use in a world being quickly upended by A.I.
Zoom launches a generative A.I. digital assistant. Much like the A.I. meeting tools Google announced last week, Zoom says its AI Companion can create meeting summaries, video highlights, and even chat with users mid-meeting to catch them up on what they missed, among other tasks. Zoom also announced a suite of related features coming next spring, including a meeting prep feature wherein the AI Companion can offer background context on what will be discussed in an upcoming meeting by surfacing knowledge from across meetings, chats, documents, and more. The company said it tapped its own LLMs as well as ones from Meta, OpenAI, and Anthropic for the AI Companion.
X updates its privacy policy to allow the company to train A.I. models on user data. That’s according to StackDiary. X owner Elon Musk has previously criticized other companies for using Twitter data for A.I. training while laying the groundwork to do so himself, especially as he seeks to enter the A.I. market with another company he’s calling xAI. He commented on the policy changes in a post, writing, “just public data, not DMs or anything private.” Other notable changes state that the company will now collect biometric data as well as users’ job and education history.
Amazon is filling up with A.I.-generated mushroom foraging books that experts warn could kill someone. 404 Media uncovered a trove of likely ChatGPT-generated books about mushroom foraging on Amazon, mostly targeted at beginners. The listings contain no indication that the books were written by A.I. and even have what appear to be fake human authors. It goes without saying that a technology that frequently mixes things up probably isn’t suited to authoring guides explaining which mushrooms you can enjoy in pasta versus which can kill you.
A smelly Turing Test. For hearing and vision, we’ve long had maps that relate the physical properties of these senses (such as frequency and wavelength, respectively) to properties we can actually perceive, like pitch and color. But what about our sense of smell? 
A new paper published this past week in Science details how researchers used machine learning to crack the code for scent. Using a graph neural network, the researchers built a tool for predicting the odor profile of a molecule solely based on its structure. The tool outperformed the average human panelist on 53% of the molecules tested, including in tricky situations like distinguishing between molecules that appear incredibly similar but actually smell very different.
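To make the idea concrete, here’s a minimal sketch of how a graph neural network can map a molecule’s structure to odor predictions. To be clear, this is not the paper’s model: the atom features, the random weights, and the odor descriptors below are all invented for illustration, and a real system would learn its parameters from human-rated scent data.

```python
import numpy as np

# Toy sketch of graph-based odor prediction: a molecule is a graph whose
# nodes are atoms and whose edges are bonds. A few rounds of message
# passing mix each atom's feature vector with its neighbors'; a pooled
# "readout" vector is then mapped to scores over odor descriptors.
# NOTE: all features, weights, and descriptor names are invented for
# illustration; this is not the paper's architecture.

rng = np.random.default_rng(0)

# Ethanol's heavy-atom skeleton (C-C-O), hydrogens omitted.
atom_features = np.array([
    [1.0, 0.0],  # carbon
    [1.0, 0.0],  # carbon
    [0.0, 1.0],  # oxygen
])
adjacency = np.array([
    [0.0, 1.0, 0.0],
    [1.0, 0.0, 1.0],
    [0.0, 1.0, 0.0],
])

ODOR_DESCRIPTORS = ["fruity", "floral", "pungent", "earthy"]  # hypothetical

def message_passing(h, adj, w, rounds=2):
    """Average each node with its neighbors, then apply a linear map
    and a ReLU, for a fixed number of rounds."""
    degree = adj.sum(axis=1, keepdims=True) + 1.0  # +1 for the self-loop
    for _ in range(rounds):
        h = np.maximum(0.0, ((h + adj @ h) / degree) @ w)
    return h

def predict_odor(atom_feats, adj, hidden=8):
    # Random weights stand in for parameters a real model would learn.
    w_embed = rng.normal(size=(atom_feats.shape[1], hidden)) * 0.5
    w_mix = rng.normal(size=(hidden, hidden)) * 0.5
    w_out = rng.normal(size=(hidden, len(ODOR_DESCRIPTORS))) * 0.5

    h = atom_feats @ w_embed            # embed each atom
    h = message_passing(h, adj, w_mix)  # propagate bond structure
    graph_vec = h.mean(axis=0)          # pool to one molecule-level vector
    logits = graph_vec @ w_out
    return 1.0 / (1.0 + np.exp(-logits))  # per-descriptor probabilities

for name, p in zip(ODOR_DESCRIPTORS, predict_odor(atom_features, adjacency)):
    print(f"{name}: {p:.2f}")
```

The point of the sketch is just the shape of the pipeline: atoms in, message passing over bonds, one pooled vector out, and descriptor scores at the end.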
While researchers from various universities participated in the paper, it was driven largely by Osmo, a company working to bring technology to a point where “computers will generate smells like we generate images and sounds today.” The Osmo team originally started this work at Google Research before spinning out into a separate startup in 2022 with Google Ventures as a leading investor. 
Alex Wiltschko, who pioneered the digital olfaction group at Google Brain before founding Osmo, views the results as validation of a line of research he says he’s been “obsessed with” for most of his life and has been studying professionally since 2008.
“A core problem preventing us from digitizing scent is something that the other senses already have – a map,” he wrote in a blog post about the study.
“In short, the paper validates the idea that it’s now possible to apply machine learning to quantify, digitize, and engineer scent,” he said.
U.S. should use Nvidia’s powerful chips as a ‘chokepoint’ to force adoption of A.I. rules, DeepMind cofounder Mustafa Suleyman says —Nicholas Gordon
Indeed’s CEO wants to create ‘cyborg’ recruiters that play to the strengths of both humans and A.I. —Orianna Rosa Royle
ChatGPT creator OpenAI is reportedly earning $80 million a month—and its sales could be edging high enough to plug its $540 million loss from last year —Chloe Taylor
A.I.’s un-learning problem: Researchers say it’s virtually impossible to make an A.I. model ‘forget’ the things it learns from private user data —Stephen Pastis
A.I. and big market shifts are making PCs interesting again —David Meyer
In Meta we trust? Continuing on its “open-source” spree, Meta this past week unveiled FACET, a new A.I. benchmark for evaluating fairness in models. Made up of 32,000 images containing 50,000 people labeled by human annotators, it’s positioned as a way to evaluate biases in models across classification, detection, instance segmentation, and visual grounding tasks involving people.
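For a rough sense of what a benchmark like this enables, here’s a toy sketch of a disaggregated evaluation, the basic operation behind most fairness audits. Note that this is not Meta’s tooling and these are not FACET’s official metrics; the group labels and records are invented toy data.

```python
from collections import defaultdict

# Toy sketch of a disaggregated evaluation: compare a model's accuracy
# across annotator-labeled groups of people. NOTE: invented toy data,
# not Meta's tooling or FACET's official metrics.
records = [
    # (annotator-labeled group, model prediction was correct?)
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, correct in records:
    totals[group] += 1
    hits[group] += int(correct)

accuracy = {g: hits[g] / totals[g] for g in totals}
print(accuracy)  # per-group accuracy for each labeled group

# A large gap between groups is the signal a benchmark like FACET is
# designed to surface.
gap = max(accuracy.values()) - min(accuracy.values())
print(f"largest accuracy gap: {gap:.2f}")
```

FACET’s human annotations make this kind of per-group breakdown possible across detection and segmentation tasks as well, though real evaluations use task-appropriate metrics rather than raw accuracy.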
A.I. models absolutely should be examined widely, regularly, and with a microscope (after being designed with the utmost responsibility from the outset). The problem is that “Meta releases a dataset to probe computer vision models for biases” is the kind of headline all but guaranteed to inspire eye rolls.
Meta, to put it gently, doesn’t have the best record when it comes to trust or transparency. The company has spent the past several years up to its search bar in trust and safety issues related to its social platforms, and its track record with A.I. isn’t looking much better. 
“Late last year, Meta was forced to pull an AI demo after it wrote racist and inaccurate scientific literature. Reports have characterized the company’s AI ethics team as largely toothless and the anti-AI-bias tools it’s released as ‘completely insufficient.’ Meanwhile, academics have accused Meta of exacerbating socioeconomic inequalities in its ad-serving algorithms and of showing a bias against Black users in its automated moderation systems,” summarizes TechCrunch, which also uncovered “potentially problematic origins” and data issues with FACET. 
Meta clearly has the tech to be a leader in A.I.—PyTorch was truly disruptive as far as machine learning frameworks go, and Llama 2 has quickly become a go-to model for the industry. But as our lead story in Eye on A.I. this week showed, A.I. adoption is as much about trust as it is about the technology.
This is the online version of Eye on A.I., a free newsletter delivered to inboxes on Tuesdays. Sign up here.
