US Copyright Office Wants To Hear What People Think About AI and … – Slashdot
[A student taking art classes] pillages [their] content from anywhere they can get it, so before you get to whether what [a person] produces is copyrightable, you have to get over the fact that [every single person who has ever produced art learned techniques and styles by looking at prior works, and by that same logic] is breaking copyright on an industrial scale.
One difference is that recency matters for most interesting AI purposes, and lots of content old enough to be in the public domain either isn’t available in digitized form or is in a form unsuitable for ingestion (e.g. images from scanning microfiche). Thus, the quality of AI’s results is likely to suffer enormously if you exclude recently created material, whereas an artist’s output is likely to improve by not imitating anything from the last 120 years or so.
No, that’s not a difference at all; that’s just a lie. You think people only consume information from public domain works?
I’m pretty sure you completely misunderstood my post, which was quite tongue-in-cheek.
What I was saying (sarcastically) is that it is easy for an artist to be inspired only by public domain works; the artist merely has to dismiss all modern art as ugly and not pay attention to it.
An AI can’t avoid being trained on recent content (which, by definition, is almost guaranteed to not be in the public domain) if you want it to be useful, precisely because people don’t exclusively consume information from public domain works.
Huh? AI can absolutely be trained on only public domain works.
Depends on what you’re using it for, but unless you at least include Creative Commons, the AI won’t have any notion of such modern inventions as the television (late 1927), the cellular phone, etc. That starts to be problematic.
Copyright is Constitutional and it says right there it’s For the People.
Putting machines on an equal footing to people and then going after people to benefit the machines would be pure insanity.
Most tools to date have not been creative, so the focus has been solely on the artist, not the tool.
Putting machines on an equal footing to people and then going after people to benefit the machines would be pure insanity.
Some fraction of people are the ones benefiting. The machine is just an automation tool.
Most tools to date have not been creative, so the focus has been solely on the artist, not the tool.
Isn’t this really just another fight against automation? We could modify copyright to try to prevent it, but I think the current laws support the “tools”.
It only becomes industrial in scale when anyone can use an AI, for free or at very low cost, to produce thousands of works an hour.
When it required a skilled human, who wanted to be paid, to produce work in the style of some more famous artist, there was a limit on how often it happened. That human would have potential lawsuits in mind too, so they would design their work to be different enough to avoid copyright claims.
If I write a movie script based on one movie, maybe even two or three, I’m going to have lawyers all over me. What if it’s elements of 50? 300? AIs are trained on everything their owners can suck down, scrambled in a database that barely anyone understands, and spit back out in who-knows-what kind of order.
So if a human writer is inspired by elements from, say, 30-40 movies, will ALL of those producers be able to tell? Would they ALL have a right to sue me even if I didn’t rip off lines, verbatim?
The difference is, if you write a movie script after having watched 30 other movies, do the authors of those 30 movies have a right to declare your script a derivative work?
I think it’s rare that an artist on their own produces art without any exposure to art from someone else; as the saying goes, good artists copy, great artists steal. Every writer has read books before, every painter has seen a painting before, every musician has heard music before (except maybe Beethoven). And every artist is inspired by other artists.
So rather, I think the issues that should be addressed are:
1) the scale at which AI can train itself on others’ art which vastly eclipses any human’s ability to do so
2) the inhumanity of AI art misses art’s purpose in society. We seem to be OK with an artist taking inspiration or ideas from another if they remix them in new ways, because we can then engage in meaningful conversations with other humans and explore our humanity in multiple forms of expression. AI, of course, has no expression and no engagement, and thus nothing meaningful to provide the human experience
3) based on 2, the only remaining purpose for AI art is to enrich other humans, since art has value; but the scale of AI means it enriches some at the expense of the human experience, which is where art derives its value in the first place
So there may be a place for AI art, but whatever regulation should be made should understand, value, and protect the human experience, and let AI handle the rest.
That’s what we as humans do: take all our exposures and regurgitate them as our “unique” creation. No different.
For us it’s a copyright violation if we don’t modify the original enough to be its own work. For AI, that test has already been passed and you can’t recognize millions of bits of originals in a cumulative resulting work.
The wrong question is being asked. The question should be: when there is copyright infringement (work obviously unchanged from the original), how is the originator compensated effectively?
Get back to 28 years for copyrights, and let the AI have everything before, say, 1995.
Honestly, copyright terms are so long, nobody is really going to take them seriously.
So the only way a human can produce non-infringing art is if they have been carefully prevented from seeing any art starting from infancy?
The problem is not that AI is like humans in taking inspiration from other people’s content. The problem with AI copyright is that:
1) AI could shit out 1 billion copyrighted works, which will result in people accidentally “copying” their works.
2) The purpose of US copyright is to advance the progress of science and the useful arts, and it is likely giving AI copyright will do the opposite.
3) AIs aren’t the ones asking for copyright protection, nor will they get the money, nor are they motivated by money.
A suitable compromise might be that AI works get 1 year of copyright protection upon paying a $100 registration fee per work, signed by the human who takes responsibility for any issues with the content.
how AI models should use copyrighted data in training;
– Any way they want. If it’s not copyright infringement for a person to look at a piece of existing art and learn from the styles (which is just, you know, what learning is) and techniques, then it’s not for a computer either.
whether AI-generated material can be copyrighted even without a human involved;
– Nope. And typing in a prompt isn’t enough creative involvement to justify a copyright either. A human should have to touch the actual work.
and how copyright liability would work with AI.
why would a machine be subjected to similar laws? this makes literally no sense given the scale, impact and economically exploitative models
laws should be in favor of humans, not dystopia and megacorps
why would a machine be subjected to similar laws?
Because the humans running them are subject to them. They’re being run by humans, for humans. There are already rulings that ingesting books for statistical data is acceptable, like how OpenAI uses that data for ChatGPT. That isn’t to say that will ultimately be how the law comes down on the topic, just how it looks to be leaning so far.
laws should be in favor of humans, not dystopia and megacorps
I fail to see how OP’s opinion wouldn’t be in favor of humans. The question is, which humans and what are the effects.
and how copyright liability would work with AI.
– It shouldn’t. If an image is fully generated by an algorithm, then it is a new work.
I’m not so certain of that. As we quickly move down the rabbit hole of human-computer interaction: if you ask a computer algorithm to “create a photorealistic copy” of some artwork, and it then produces a copy of that artwork, would that copy not violate the copyright of the original (under the hypothetical that the original is still copyrighted)? Even though an algorithm generated that “new copy”, it only exists because the original did in the first place, and thus it is in violation of that original work.
In thi
people need to own the right to their own likeness and they can’t just give that up with an EULA.
I agree, though there’s some nuance. You shouldn’t need permission to make a documentary about Trump or Obama, nor even to make a film in which they’re a character.
But to train a model to replicate a 30-year-old Schwarzenegger and have him star in your action film, you should definitely need permission.
Though what if instead of Schwarzenegger it’s Bela Lugosi? Does it make a difference if the star has been dead for almost 70 years? Unlike copyright, at some point their image should fall into the public domain.
“Their own likeness” is an impossible thing to define, so you de facto lose your argument. Two people can look alike: twins, for example, or celebrity look-alikes. It’s an impossible hill to climb.
See the story on Scientology and the DMCA [slashdot.org] down below for one example.
…pay up to the IP Mafia, your brain made an illegal copy.
And your chicks for free.
It’s here
https://www.copyright.gov/poli… [copyright.gov]
If you want to lazy-mode it, this may be a useful chatgpt prompt:
write an argumentative comment that supports the ideal that AI trained on copyrighted material is infringing upon copyright law
Problem solved
I think for the betterment of humanity, the entire concept of information being copyrighted and locked up needs to go away. People can still make money with things made from knowledge, but the knowledge itself should always be free. Nowhere is this more evident than in medicine. Public money goes into universities to fund research, but then universities and individuals still get to tie up the knowledge legally. Yes, it would be severely disruptive of markets, but for the long-term betterment of humanity, all of it should be set free.
It was already past its time before AI, but now AI can easily make derivatives “inspired” by but not identical to the others. And in the time it takes a human to produce one work, an AI will do a million, driving the price to near zero. Resistance is futile…
People should ask the opinion of the most famous unknown composer of our time, Kevin MacLeod of https://incompetech.com/ [incompetech.com], who is basically a human version of what AI will soon do to music.
“Give me a song with this mood, like this composer… done, here are 1 million permutations.”
Meta’s MusicGen-large. Their AI team is crushing it these days.
It was already past its time before AI, but now AI can easily make derivatives “inspired” by but not identical to the others. And in the time it takes a human to produce one work, an AI will do a million, driving the price to near zero.
All of which end up being rather terrible, because AI doesn’t understand any of the rules involved, and doesn’t follow them. And it turns out that when you just try to imitate something without having at least some understanding of the structure of what you’re creating, you create something that makes anyone who knows anything about music absolutely cringe.
I’ve heard a lot of AI-created music, all the way back to some of the music that David Cope created back in the 1980s and 1990s. The best actually soun
Legal stuff surrounding both independently is a steaming pile of shit. Why do you think combining them would produce anything better?