Getty Images Built a 'Socially Responsible' AI Tool That Rewards … – Slashdot





“I think a world that doesn’t reward investment in intellectual property is a pretty sad world”
And an even sadder world is one where rights holders use the law to limit creativity and extend copyright far beyond its original intent.
The summary of their promise, at least, is so vague that it doesn’t actually mean anything.
Getty’s AI tool “doesn’t know what the Pope is,”
Wait, what? There are no Getty images of the Pope? Really? [google.is]
When asked how Getty creators will be paid for AI training data, Peters said that there currently isn’t a tool for Getty to assess which artist deserves credit every time an AI image is generated.
Because that’s not how AI works…
Lastly, why would anyone pay Getty to use this tool when there are ample free alternatives? Even in a world where copyright law was modified to ban training on copyrighted data
Were the two guys The Oblivion Boys?

Because that’s not how AI works…
You clearly don’t understand AI technology. If you did you would know that there’s an internal representation of the training dataset in a high-dimensional space, through a complicated nonlinear transformation of the images. The credit-assignment problem is just a matter of lifting the function representing the original author labels from the discrete dataset into the internal representation, followed by an extension to the full internal state space.
The main difficulty is that for reasons of laziness/ex
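A back-of-the-envelope sketch of that "lifting" idea (tongue-in-cheek as it may be): embed every training image, then split credit for a generated image among the authors of its nearest neighbors in embedding space. Everything below — the embeddings, the author list, the weighting — is a made-up stand-in, not anything a real attribution system does:

```python
# Toy credit assignment via nearest neighbors in an embedding space.
# Embeddings are random stand-ins; a real system would use a trained model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: 5 images, each with an author label and a
# 16-dimensional embedding.
authors = ["alice", "bob", "alice", "carol", "bob"]
train_emb = rng.normal(size=(5, 16))

def credit(generated_emb, train_emb, authors, k=3):
    """Split credit among the authors of the k nearest training embeddings."""
    # Cosine similarity between the generated embedding and every training one.
    a = generated_emb / np.linalg.norm(generated_emb)
    b = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    sims = b @ a
    top = np.argsort(sims)[-k:]
    # Shift so every weight is positive, then normalize to shares summing to 1.
    weights = sims[top] - sims[top].min() + 1e-9
    weights /= weights.sum()
    shares = {}
    for idx, w in zip(top, weights):
        shares[authors[idx]] = shares.get(authors[idx], 0.0) + float(w)
    return shares

gen = rng.normal(size=16)
shares = credit(gen, train_emb, authors)
print(shares)  # shares across the nearest authors, summing to 1
```

Of course, nearest-neighbor similarity in latent space is exactly the kind of proxy the parent comment is mocking: it assigns credit, but not necessarily to whoever actually influenced the output.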
If you did you would know that there’s an internal representation of the training dataset in a high dimensional space through a complicated nonlinear transformation of the images
Yeah, nah. There is zero “representation of images”. They’re denoising algorithms, fed static noise and trying to nudge it into a coherent image. The model learns how to take a dot product of a textual latent and a noised-up image latent to create a diffusion gradient to denoise the image latent. You could include the author in the training text
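The denoising step described above can be caricatured in a few lines. The "noise predictor" below is an oracle stand-in, not a trained network, and the text latent only marks where conditioning would enter:

```python
# Minimal sketch of one denoising diffusion step, with an oracle in place
# of a trained noise-prediction network. Nothing here is a real model.
import numpy as np

rng = np.random.default_rng(1)

clean = rng.normal(size=8)     # pretend this is an image latent
alpha = 0.9                    # noise schedule for a single step

# Forward process: mix the clean latent with Gaussian noise.
noise = rng.normal(size=8)
noisy = np.sqrt(alpha) * clean + np.sqrt(1 - alpha) * noise

text_latent = rng.normal(size=8)  # pretend text embedding (conditioning)

def predict_noise(x, text_latent):
    # Oracle stand-in: a real network would estimate the noise from x and
    # the text conditioning; here we compute it exactly, for illustration.
    return (x - np.sqrt(alpha) * clean) / np.sqrt(1 - alpha)

# Reverse step: subtract the predicted noise and rescale.
eps_hat = predict_noise(noisy, text_latent)
denoised = (noisy - np.sqrt(1 - alpha) * eps_hat) / np.sqrt(alpha)
print(np.allclose(denoised, clean))  # True -- the oracle recovers the latent
```

The point the comment makes survives the caricature: what the network stores is a denoising function, not a retrievable archive of training images.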
Indeed, current networks generally include the author in the training text. That’s why you can add something like “By [name]” and get something in that person’s style. But that person’s name is just one or more tokens: an array of a couple hundred floating-point values. The dot product of any given token with any other token represents its semantic distance. So if you have two authors with similar styles, they’ll have a low semantic distance.
Here’s the thing though: you can have a netw
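The semantic-distance point is easy to demonstrate with toy vectors. The embeddings below are hand-made stand-ins, not real learned token embeddings:

```python
# Cosine similarity between hypothetical "style vectors" for author names.
# Values are hand-picked for illustration, not taken from any real model.
import numpy as np

emb = {
    "monet":    np.array([0.9, 0.1, 0.4]),  # hypothetical style vector
    "renoir":   np.array([0.8, 0.2, 0.5]),  # close to monet (similar style)
    "mondrian": np.array([0.1, 0.9, 0.0]),  # far from both
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = orthogonal."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["monet"], emb["renoir"]))    # high: similar "styles"
print(cosine(emb["monet"], emb["mondrian"]))  # low: dissimilar
```

Two authors with similar styles end up near each other in this space, which is exactly why per-author attribution from the model's internals is slippery.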

FYI: styles aren’t copyrightable. You can be mad about this fact, but it’s a core element of how copyright works. Copyright is based on works, not styles.

FYI: styles aren’t copyrightable. You can be mad about this fact, but it’s a core element of how copyright works. Copyright is based on works, not styles.
Tell that to Robin Thicke. [utexas.edu]
Given that the concept now has legal precedent in music, it’s not a stretch to assume it’s also applicable to images. Just wait for the right lawsuit to come along.
He lost that lawsuit not because “his style” was similar to Gaye’s style, but because a specific work of his was too close to one of Gaye’s specific works [youtube.com].
Excepting character copyrights**, copyright is based on *works*
(even character copyrights, which are very narrowly delineated, are technically also based on works, just with a broader applicability)
Of course they ruled the work resembled Gaye’s. The question is, in what respect do these works resemble each other?
They do not share the same melody.
They do not share the same chord progressions.
They do not share the same lyrics.
Very specifically, the ruling cited the songs share the same “feel”, which in the vernacular of musicians would very much be understood to refer to the style, as in a “blues feel” or a “jazz-waltz” feel.
In other words, that’s exactly what the ruling stated: that Thicke had appropriated Gaye’s style.
It’s valid because copyright law also depends on some form of closeness metric, measured by provenance and perturbation level, evaluated by a judge. The internal mechanism used by a human artist or an ML algorithm to produce that perturbation or even synthesise an image
I think you’re right, but that doesn’t mean the GP is wrong. Clearly the information is there in a highly processed form, or there would be no way to retrieve it. And clever prompts have retrieved pretty exact replicas of some of the info fed in. The question is something like “how distributed can the information be, and with how much loss, and still be subject to copyright?”. Consider, for example, the abstraction called “Mickey Mouse”. Let’s only consider black-and-white versions for simplicity. There
Last but not least, you are advocating free alternatives that are only free based on massive copyright infringement. We will see eventually if the US courts side with the content producers or the content consumers in this game.
Or how about we do a turnabout – instead of generating images, the AI generates CODE?
If that AI was trained using GPL code, if all the image AI proponents get their way, the code I get out of the AI will no longer be GPL.
Would that be fair? Perhaps it was trained on lots of GPL and F/
If only large corporations have access to that AI, it’s a problem. If everyone has access, then the problem the GPL was created to deal with has been solved.
The problem is centralized control. BSD was the first response, but that didn’t incentivize releasing the code sufficiently, so development was quite slow. The GPL was a second response, and was more successful, because it gave people more incentive to release the code openly. If you can release the code ad hoc without copyright limitations, then th
What’s “so special” about this is that Getty is one of the largest single copyright holders in the world, and they know the licensing status of every piece of media in their collection, so any AI trained on those images is guaranteed liability-free for their clients.
the monetization schema is bullshit – it looks like the TikTok model, where there’s a giant pool of money split between all the creators every year, with the size of that pool determined by “business growth” (i.e. in a way that prioritizes the business
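For a sense of how that pool model plays out, here is the arithmetic with made-up numbers. The pool size and per-creator usage counts below are hypothetical:

```python
# Pro-rata pool split: a fixed yearly pot divided by each creator's share
# of total "generation credit". All numbers are hypothetical.
pool = 1_000_000.00  # hypothetical yearly pool, in dollars

# Hypothetical usage counts (e.g. generations attributed to each creator).
usage = {"alice": 120_000, "bob": 30_000, "long_tail": 4_850_000}

total = sum(usage.values())
payout = {name: pool * n / total for name, n in usage.items()}

for name, amount in payout.items():
    print(f"{name}: ${amount:,.2f}")
```

Note how a large long tail of creators dilutes everyone's share, which is the dynamic behind the Spotify comparison elsewhere in the thread.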
any AI trained on those images is guaranteed liability-free for their clients.
1. That’s not how copyright works. Automated processing to create new services is granted an exception under copyright law, which is why 99% of Google’s business isn’t illegal.
2. Getty owns a minuscule fraction of the image space that exists on the internet. Pretending that Getty owns most images is just nonsense.
If I can’t have Balenciaga Pope, then what’s the point of an image generator? I don’t want your product if it’s just a nerfed image gen in a collared shirt. Go back to innovating on web crawlers and underhanded threats of lawsuits to any random domain owner on the web.
I bet the artists will all be thrilled to be paid that mad Spotify-level money. I predict most everyone gets less than a dollar a month.
keeps getting sued by photographers [insideimaging.com.au] for selling their photos without a license?
That Getty?