The Ridiculously Easy Way To Remove ChatGPT’s Image Watermarks

Taylor Swift has been the victim of fake AI images and videos
OpenAI is adding watermarks to images created by its DALL·E 3 model in ChatGPT, but they’re ridiculously easy to remove. So easy, in fact, that ChatGPT itself will show you how to do it.
The watermark announcement comes amid renewed controversy over AI-generated “deepfake” images. Only last week, X (formerly Twitter) was forced to temporarily prevent searches for Taylor Swift, after the service was flooded with explicit AI-generated images of the artist.
OpenAI announced that it is adding the watermarks to image metadata—hidden data that travels with each image file—rather than as visible overlays, like those you often see on images from photo libraries such as Getty.
The company said it was adding the watermarks to “indicate the image was generated through our API or ChatGPT.”
“We believe that adopting these methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information,” the company added in a blog post announcing the new feature, which will start to appear in images generated on mobile devices from next week.
However, as the company concedes in its own blog post, it’s very simple to circumvent such a system.
Images generated in ChatGPT will soon have metadata added using the C2PA (Coalition for Content Provenance and Authenticity) system, an open standard used by many media organizations and camera manufacturers to embed provenance data within images.
There’s no immediate visual clue that an image is AI-generated, but images can be dragged into services such as Content Credentials Verify to have their provenance checked.
Here, for example, I dragged an image I created with ChatGPT into Content Credentials Verify, and it revealed the image was generated by the AI service. Even though the new metadata system is yet to be rolled out, ChatGPT-generated images already contain a metadata link back to ChatGPT, allowing them to be identified by such services.
AI-generated images can be verified on services such as Content Credentials Verify
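For the curious, you can also inspect this metadata yourself. Here’s a minimal Python sketch using the Pillow imaging library (the filename is hypothetical, and the exact keys OpenAI embeds may differ) that dumps whatever metadata an image file carries:

    # pip install Pillow
    from PIL import Image

    def dump_metadata(path: str) -> None:
        """Print any metadata embedded in an image file."""
        img = Image.open(path)

        # Format-specific metadata (e.g. PNG text chunks) lands in img.info
        for key, value in img.info.items():
            print(f"info[{key!r}] = {value!r}")

        # EXIF data, common in JPEGs, is exposed separately
        for tag, value in img.getexif().items():
            print(f"exif[{tag}] = {value!r}")

    dump_metadata("chatgpt_image.png")  # hypothetical filename

Full C2PA manifests are cryptographically signed structures, so dedicated tools such as the C2PA project’s open-source c2patool are better suited to validating them, but even a raw dump like this will show whether any provenance data is present at all.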
However, simply taking a screenshot of the same image is enough to strip the identifying metadata, leaving services such as Content Credentials Verify unable to tell whether an image is AI-generated.
Screenshots remove the image metadata
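To see why, consider what a screenshot actually does: it captures the rendered pixels and writes them to a brand-new file, so nothing embedded in the original survives. A short sketch (again using Pillow, with hypothetical filenames) reproduces the same effect without ever touching the screen:

    from PIL import Image

    # Copy only the pixel data into a fresh image, as a screenshot would,
    # then save it: the visible picture survives, the metadata does not.
    src = Image.open("chatgpt_image.png").convert("RGB")  # hypothetical filename
    clean = Image.new(src.mode, src.size)
    clean.putdata(list(src.getdata()))
    clean.save("no_metadata.png")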
And even if you don’t want to use the screenshot method, there are other simple means to remove the metadata, as ChatGPT itself explains:
ChatGPT reveals how to get around its own watermarks
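ChatGPT’s suggestions point at standard, widely available tools rather than anything exotic. As one illustration (not OpenAI’s exact wording), the open-source ExifTool utility can strip every metadata tag in a single pass; here it’s driven from Python, with a hypothetical filename:

    import subprocess

    # "-all=" clears every metadata tag ExifTool can write; by default it
    # keeps a backup of the original as chatgpt_image.png_original.
    subprocess.run(["exiftool", "-all=", "chatgpt_image.png"], check=True)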
In OpenAI’s defense, and as ChatGPT itself explains, there are legitimate privacy and security reasons for removing image metadata. Whistleblowers or reporters sending images from war zones, for example, may want to strip data that could betray their precise location. Parents, likewise, may not want location data embedded in the photos of their children that they share.
Nevertheless, it’s a trivial task to remove the data that identifies an image as AI-generated. “Metadata like C2PA is not a silver bullet to address issues of provenance,” OpenAI admits in its blog post.
“It can easily be removed either accidentally or intentionally. For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it. Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API.”
Despite the ease with which the system can be circumvented, the company maintains that these provenance methods, and encouraging users to recognize the signals they provide, are key to increasing the trustworthiness of digital information.
The C2PA metadata is only being added to images; other types of content generated by the AI service, such as text and audio, won’t carry it.
OpenAI’s attempts to improve the detection of AI images come amid growing fears about AI’s ability to wreak havoc in a number of scenarios.
Schools are already using a variety of other (imperfect) methods to detect whether content has been written by students or by AI, including looking for telltale keywords.
With elections scheduled for many Western democracies in 2024, there are increasing fears that faked AI-generated images and videos could interfere with the campaigns.
Taylor Swift was once again involved in a deepfake controversy this week, when a video appearing to show the musician holding a flag promoting Donald Trump went viral on social media. The video was proven to be fake.
AI-generated deepfakes are also being used to commit crimes. This week, it was reported that fraudsters used a deepfake video to convince a finance worker that they were speaking with their chief financial officer on a video call, resulting in the theft of $25 million.
