The not-so-subtle self-interest behind tech warnings not to ‘stifle innovation’

Hello and welcome to Eye on AI.
Almost a week after the Senate’s first AI Insight Forum, the discourse about AI regulation is running hotter than ever. While the session was conducted behind closed doors, we do know a little about what happened: Elon Musk warned AI could threaten civilization; Bill Gates argued it could help address world hunger; and when Senate Majority Leader Chuck Schumer asked if the government needs to regulate AI, all of the executives present raised their hands. There were also debates about how AI will affect jobs, how bad actors could abuse open-source AI systems, and whether there should be an independent agency dedicated to overseeing AI.
The goal of all of this, of course, is for the Senate to work through how it might want to regulate this fast-moving technology. And while all of the tech executives in the room may have raised their hands in favor of regulation, there’s since been a chorus of takes from industry leaders about how regulation would stifle innovation—and threaten the United States’ competitive position against China—making clear the industry would really prefer to keep running free.
As I write this, a top story on the popular tech news aggregator Techmeme is a blog post from investor and self-declared “short term AI optimist, long term AI doomer” Elad Gil, in which he argues the U.S. needs to let the technology advance and should not yet push to regulate AI. 
“I do think in the long run (ie decades) AI is an existential risk for people. That said, at this point regulating AI will only send it overseas and federate and fragment the cutting edge of it to outside US jurisdiction,” reads the blog post, where he also makes the popular argument that regulation would favor Big Tech incumbents. 
The chorus continued in my inbox. “Heavy handed regulations will choke our country’s budding leadership in the AI sector and could have a lasting and negative impact on our ability to compete with foreign industry that is accelerating R&D with the support of their own governments,” Muddu Sudhakar, CEO at AI company Aisera, emailed Eye on AI via a representative after the hearing. 
The innovation-over-all argument against regulation was perhaps most on display at the recent All-In Summit, where Benchmark general partner Bill Gurley gave a talk titled “2,851 Miles.” Citing 2,851 miles as the distance between Silicon Valley and Washington, D.C., he declared, “The reason Silicon Valley has been so successful is because it’s so fucking far away from Washington, D.C.,” receiving a roar of applause and a standing ovation.
He was immediately joined onstage by fellow VCs for a discussion, where they tore into the idea of regulating AI and joked that regulation would lead to the government conducting code reviews and forcing product managers to travel to Washington to get approval for new software features. Tech executives like DocuSign CEO Allan Thygesen and Applied Research Institute CEO David Roberts later lauded the talk on LinkedIn.
As always, it’s important to keep in mind that these VCs—much like the executives—have a vested interest in letting AI run wild. Benchmark bills itself as focused on AI startups, and many VCs have already made a ton of money in the space. But their innovation-over-all stance has also found some support in the Senate, which critics attribute to Big Tech’s lobbying against AI regulation (or, at the very least, its efforts to shape regulation so that it minimally affects—and perhaps even benefits—the incumbents’ positions).
In his opening remarks at the hearing, Texas Republican Sen. Ted Cruz railed against regulation, stating that “if we stifle innovation, we may enable adversaries like China to out-innovate us,” according to a press release. And Sen. Roger Marshall, a Kansas Republican, had a similar takeaway, telling Wired after the hearing, “The good news is, the United States is leading the way on this issue. I think as long as we stay on the front lines, like we have the military weapons advancement, like we have in satellite investments, we’re gonna be just fine.”
While there’s no doubt that AI has major implications for national security, it also has implications for every other aspect of society and human life. AI is not just the future—it’s a deeply impactful technology that’s been testing current laws and sowing real-world harms for years, from upending copyright law and workers’ rights to cementing discriminatory biases into everything from policing technology to how home loans are approved.
VCs and executives have long treated “innovation” as a stakeholder in its own right. Throughout the tech industry’s rise to dominance, they’ve positioned stifling innovation as the worst-case scenario. And now rising tensions with China are adding more fuel to their argument.
And with that, here’s the rest of this week’s AI news.
But first…a reminder: Fortune is hosting an online event next month called “Capturing AI Benefits: How to Balance Risk and Opportunity.”
In this virtual conversation, part of Fortune Brainstorm AI, we will discuss the risks and potential harms of AI, centering on how leaders can mitigate the technology’s potential negative effects so they can confidently capture its benefits. The event will take place on Oct. 5 at 11 a.m. ET. Register for the discussion here.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
Google unveils Bard integrations for Gmail, Drive, Maps, YouTube, and more. Called Bard Extensions, the offering enables Bard to find and show users relevant information from across their Google apps. As an example, Google described how this can help users plan a trip, with Bard able to grab dates that work for everyone from Gmail, provide flight and hotel information, show Google Maps directions to the airport, and suggest YouTube videos of things to do at the destination—all within one conversation. The company also announced it improved its “Google it” feature to allow users to easily double-check Bard’s answers.
Google is getting close to releasing its Gemini AI model. That’s according to The Information, which reported Google has given a small number of companies access to an early version of the conversational AI software. Gemini is Google’s big bet to compete with GPT-4 and is expected to be made available through the Google Cloud Vertex AI service. 
Amazon launches generative AI tools to help sellers write product descriptions. The company claims the tools will help sellers save time and offer customers more complete product information, but there’s reason to be skeptical considering generative AI’s tendency to “hallucinate,” or make things up. eBay also recently released a similar AI tool for writing product descriptions, according to TechCrunch.
Digimarc announces a product for watermarking original content, starting with images. Many have suggested watermarking AI-generated content as a strategy for distinguishing between artificially generated and human-made works. Digimarc today announced it’s taking the opposite approach with Digimarc Validate, a product for watermarking original content instead. Essentially, it will mark content with a machine-readable “©” that provides a clear signal of content ownership and authenticity—before any generative AI models have the chance to ingest it.
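For readers curious about the mechanics, the sketch below shows the general idea of a machine-readable mark hidden inside an image’s pixels. To be clear, this is a toy illustration written for this newsletter, not Digimarc’s actual technique, which is proprietary and built to survive cropping, compression, and other edits; every function name and file path here is hypothetical.

```python
# Toy illustration only (NOT Digimarc's method): hide a short, machine-readable
# ownership string in the least significant bits of an image's blue channel,
# then read it back. All names and paths below are hypothetical.
import numpy as np
from PIL import Image

def embed_mark(src_path: str, dst_path: str, mark: str = "(c) Example Owner") -> None:
    """Embed `mark` (length-prefixed UTF-8, up to 255 bytes) into the image."""
    img = np.array(Image.open(src_path).convert("RGB"))
    payload = mark.encode("utf-8")
    bits = np.unpackbits(np.frombuffer(bytes([len(payload)]) + payload, dtype=np.uint8))
    blue = img[..., 2].flatten()          # flatten() returns a copy we can edit
    if bits.size > blue.size:
        raise ValueError("image too small to hold the mark")
    blue[: bits.size] = (blue[: bits.size] & 0xFE) | bits   # overwrite the low bit
    img[..., 2] = blue.reshape(img.shape[:2])
    Image.fromarray(img).save(dst_path, format="PNG")       # lossless, so bits survive

def read_mark(path: str) -> str:
    """Recover the embedded string by reading the same low bits back."""
    blue = np.array(Image.open(path).convert("RGB"))[..., 2].flatten()
    length = int(np.packbits(blue[:8] & 1)[0])               # 1-byte length prefix
    return np.packbits(blue[8 : 8 + 8 * length] & 1).tobytes().decode("utf-8")

embed_mark("photo.png", "photo_marked.png")
print(read_mark("photo_marked.png"))      # -> (c) Example Owner
```

The hard part real products must solve is robustness: this toy mark would be destroyed by a single JPEG re-encode, which is why commercial watermarking schemes are considerably more sophisticated.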
Big yikes. Microsoft AI researchers accidentally exposed 38 terabytes of sensitive data while publishing a bucket of open-source training data on GitHub, according to research from cloud security startup Wiz that was shared with TechCrunch. The trove included private keys, passwords to Microsoft services, backups of two Microsoft employees’ personal computers, and more than 30,000 internal Microsoft Teams messages from hundreds of Microsoft employees. “No customer data was exposed,” according to Microsoft’s Security Response Center.
The exposure doesn’t bode well for open source at a moment when its role in AI is being fiercely debated, including as part of the Senate’s probe into potential AI regulation. Many security researchers have been sounding the alarm about the security risks of open-source AI projects. And while some believe open source is critical to democratizing the technology, others argue the seemingly “open” offerings from Big Tech are merely tactics to help these companies own the ecosystem and capture the industry.
Billionaire investor Ray Dalio says the AI transformation could create a 3-day workweek. We’re ‘going through a time warp’ —Chloe Taylor
Actor Stephen Fry says his voice was stolen from the Harry Potter audiobooks and replicated by AI—and warns this is just the beginning —Chloe Taylor
OpenAI realizes that engaging with Europe, rather than threatening it, is the way to get what it wants —David Meyer
Top AI institute chair and ex-Amazon exec thinks AI will disrupt employment as we know it—but it’ll make the world wealthier and more skilled —Prarthana Prakash
3 investors from Microsoft’s corporate VC arm M12 are striking out on their own with Touring Capital, a new AI-focused firm —Anne Sraders
Spies, scientists, defense officials, and tech founders can’t agree on how to keep AI under control: ‘We’re running at full speed toward a cliff’ —Chloe Taylor
Oops, AI did it again. A few weeks ago, we wrote about an embarrassing misstep from Microsoft-owned MSN, in which a travel guide for Ottawa, Canada, published on its site prominently featured the Ottawa Food Bank as a top tourist attraction—even recommending visitors go with an empty stomach. The article was called out as extremely insensitive, and now, just a few weeks later, the platform seems to have outdone itself.
MSN is in hot water again after publishing a seemingly AI-generated obituary for former NBA player Brandon Hunter, who passed away unexpectedly this past week. The headline: “Brandon Hunter useless at 42.”
The rest of the article is no better, reading like a nonsensical game of Mad Libs. It says he “handed away” after achieving “vital success as a ahead [sic] for the Bobcats” and “performed in 67 video games,” Futurism noted.
Another day, another example of AI totally failing at this type of use case. So how many times does this have to happen before executives admit it’s not working?
"We are working to ensure this type of content isn’t posted in [the] future," Jeff Jones, a senior director at Microsoft, told The Verge after last month’s incident, which we now know wasn’t the last time. And it wasn’t the first either; Futurism has been documenting the publication of wonky AI-generated stories on MSN’s platform since the company replaced its human writers with AI last year.
This is the online version of Eye on AI, a free newsletter delivered to inboxes on Tuesdays. Sign up here.