Apple’s AI debut: privacy nightmare or FUD? – SC Media

(Credit: Rafael Henrique – stock.adobe.com)
Apple debuted both its Apple Intelligence on-device AI system and its partnership with OpenAI to bring ChatGPT to Siri during the 2024 Worldwide Developers Conference (WWDC) on Monday.
The announcements were met with a range of reactions: running from excitement over the new functionalities to concern over the implications of on-device and OS-level AI implementation.
“Apple has no clue what’s actually going on once they hand your data over to OpenAI,” Elon Musk said in a post on X, calling the ChatGPT integration an “unacceptable security violation” and threatening to ban Apple devices at his own companies, including include X, Tesla, SpaceX and xAI.
But is Apple’s AI plan really as dangerous to organizations’ security as Musk’s comments would suggest?

Security and AI experts who shared their thoughts with SC Media had a mix of reactions, but overall expressed optimism regarding Apple’s new Private Cloud Compute system.
Apple promises it will keep data processed through Apple Intelligence safe and inaccessible to anyone other than the user. Even Apple itself will not have access to data passing through the private cloud, the company said in a blog post.
Still, Apple’s partnership with OpenAI has raised eyebrows.
Although Apple says its partner will not store the prompts and IP addresses of those who access its ChatGPT services through iOS 18, iPadOS 18 and macOS Sequoia integrations, OpenAI’s track record when it comes to privacy and security makes such a widespread deployment worrisome for many.
“Musk’s outburst at the news of Apple partnering with OpenAI may be an overreaction, but it’s symptomatic of some recent high-profile gaffes around AI and data privacy,” Alastair Paterson, CEO and co-founder of Harmonic Security, said in a blog post and statement to SC Media. “In recent months, there’s already been public outcry about data privacy issues surrounding Microsoft Recall, Slack, and DocuSign.”
Apple’s promise to protect data processed by Apple Intelligence through its new Private Cloud Computer (PCC) system, which has received positive attention for its zero trust approach, enforcing encryption in transit of user data and measures to decrease the likelihood of a cyberattack compromising identifiable information.
While some Apple Intelligence processing will occur locally on the user’s device, requests that require more computational resources will go to the PCC, where any data received will supposedly be used solely for the task requested and never stored, collected or used for training, according to Apple.
Data traveling to PCC nodes is encrypted, preventing man-in-the-middle attacks, and there is no privileged runtime access, meaning even Apple staff cannot bypass privacy measures to access the data. Additionally, the PCC infrastructure, which is based on Apple’s own silicon, is designed to prevent data from being tied to a specific user in the event of a direct attack on the PCC.
“In typical Apple fashion, they have vertically integrated everything and taken ownership of its security from top to bottom. It’s impressive and head of AWS, Microsoft, and Google cloud offerings for LLMs thus far, even if it is just in support of Apple Intelligence features,” said GenAI expert Kevin Surace, CTO of Appvance and chairperson at Token, in a statement to SC Media. “Apple has set the bar for absolute privacy and security of generative AI interactions.”
Michael Covington, vice president of portfolio strategy at mobile security company Jamf, also expressed praise for the “trust boundary” Apple provides for users of Apple Intelligence and agreed that the company’s approach is a first for GenAI implementations.
“For an initial release, the breadth of Apple Intelligence is impressive. And the benefits it will deliver to the overall Apple experience is already clear,” Convington said. “We are likely seeing a new standard emerge for how privacy controls and security safeguards can be achieved with integrated AI.”
Due to the current limitations of Apple’s foundational models, it has partnered with OpenAI to integrate the latest ChatGPT models at the OS level into the function of Apple’s Siri virtual assistant and into its developer tools.
According to Apple, users will be prompted before a request is made to ChatGPT, allowing them to approve or reject sending the request to the third-party AI.
Despite Apple’s assurances that OpenAI has agreed not to store the requests of users who are not logged in to an OpenAI account and that user IP addresses will be obscured, those who are already wary of OpenAI’s data privacy practices and security posture will remain weary.
“Thus far, Apple has fallen short of providing clear guidance on what data will ultimately be sent to OpenAI and how privacy will be protected. This should give individuals and corporations pause,” said Paterson, who noted that privacy measures may differ for users who log in to their own OpenAI accounts for access to more advanced features.
The unmonitored use of ChatGPT by employees has long been a concern, especially given the tendency for employees to submit sensitive company information to the chatbot.
One study published last month by Cyberhaven found that inputs of corporate data into AI tools by employees increased by 485% between March 2023 and March 2024, with more than a quarter being classified as sensitive data.
With Apple having an estimated 52% US smartphone market share as of Q1 2024 and ChatGPT now being implemented at the OS-level rather than behind OpenAI’s own app, it is reasonable to believe employee use of the AI tool could increase upon the release of iOS 18.
“How safe is OpenAI? They do provide various levels of private operation, but no one really knows how safe, secure, and non-sharing it actually is,” said Surace.
“While Apple has published an extensive security white paper, OpenAI has a short ChatGPT Enterprise privacy note, which certainly isn’t convincing Elon Musk it’s safe.”
Widespread implementation of LLMs directly on smart devices also increases the attack surface for a variety of LLM-specific threats such as prompt injection and training data poisoning. ChatGPT specifically has dealt with multiple cyberattacks, vulnerabilities and the proliferation of “jailbreaks” in the short time since its release.
Given Apple’s large market share and influence in both the personal and enterprise smart device and computation spaces, its first step into the GenAI craze is likely to be a turning point in adoption of the technology.
“We’re in the midst of an AI arms race, or an AI game of chicken,” said CyberRisk Alliance Chief Technologist and Senior Director of Programs Dustin Sachs.
“It’s going to take one of the big players blinking or backing off or realizing what they’re doing may not be in the customer’s best interest before we start to see any action in favor of customer privacy.”

Mend.io has introduced MendAI, an application security tool that identifies code generated by AI models and has expanded its software composition analysis tool to provide detailed AI model versioning and update information, DevOps reports.

In Part 2 of this series on emerging threats, Critical Start’s Callie Guenther explores a six-point strategy for responding to threats posed by AI.

Meta has deferred training its large language models using publicly available Facebook and Instagram content without the explicit consent of its adult users across the European Union after an Irish Data Protection Commission request, reports The Hacker News.

By clicking the Subscribe button below, you agree to SC Media Terms and Conditions and Privacy Policy.

Copyright © 2024 CyberRisk Alliance, LLC All Rights Reserved. This material may not be published, broadcast, rewritten or redistributed in any form without prior authorization.
Your use of this website constitutes acceptance of CyberRisk Alliance Privacy Policy and Terms & Conditions.

source

Jesse
https://playwithchatgtp.com