OpenAI Disputes Authors' Claims That Every ChatGPT Response is … – Slashdot

Please create an account to participate in the Slashdot moderation system




The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.

misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use)

misconceive the scope of copyright, failing to take into account the limitations and exceptions (including fair use)
Fair use is a defense, but you have to have otherwise violated the copyright to claim it. It is not a sound legal argument to say that fair use is outside the scope of copyright.
Lawyers are expected to always file a motion to dismiss… that they included this argument shows how weak their case is. It seems pretty obvious that they copied all these authors works, without permission. The bot can recite whole sections…

Fair use is a defense, but you have to have otherwise violated the copyright to claim it. It is not a sound legal argument to say that fair use is outside the scope of copyright.

Fair use is a defense, but you have to have otherwise violated the copyright to claim it. It is not a sound legal argument to say that fair use is outside the scope of copyright.
It is fair to say that if something is an exception to a rule/law, it’s outside of the scope of it. That isn’t a statement as to the validity of their claim, only one of how they worded it.

Lawyers are expected to always file a motion to dismiss… that they included this argument shows how weak their case is.

Lawyers are expected to always file a motion to dismiss… that they included this argument shows how weak their case is.
It does nothing of the sort. You just appear to have an incomplete understanding.

It seems pretty obvious that they copied all these authors works, without permission.

It seems pretty obvious that they copied all these authors works, without permission.
They go on to show there is legal precedence for being able to use existing works, assuming they’re acquired legally, to derive data from the works. “OpenAI reminded the court that “while an author may register a copyright in her book, the ‘s
Does a person who’s good at memorizing books need permission to recite sections of said book from memory?
According to Big Brother live feed, just a few seconds is enough to get them to block the feed.
“Stop singing”, and reciting other copyrighted stuff gets the slam.
“It seems pretty obvious that they copied all these authors works, without permission. The bot can recite whole sections…”
Ok.. I’ve studied transformers like GPT. Can you explain to me how they “copy” authors’ works?
I think (and I have NOT studied transformers or GPT) that how that happens is the probabilities collapse to 100%. What I mean by that is like when I put in a super specific search phrase into Google and it comes back with ONE, and only ONE answer. (Happened to me about 6 months ago and was amazing at the time).
GPT can only match on what it finds – if it’s a subject no one anywhere writes about, and there is only one work for GPT to draw from that fits, then it only has one sequence of words to “choose fro

The bot can recite whole sections…

The bot can recite whole sections…
Your ability to recite a whole section is not a copyright infringement. I can remember exactly how a Muse song sounds and can even sing the lyrics. My ability to do so is not copyright infringement.
ACTUALLY DOING SO, would be.
Also I’m suing you because your brain just copied this post into your memory without my permission you hypocrite!

Your ability to recite a whole section is not a copyright infringement.

Your ability to recite a whole section is not a copyright infringement.
This is the sort of stupid non-argument arguments people make on slashdot.
No, the ability to recite whole sections is not copyright infringement. Actually doing it is. These bots don’t have agency, and they don’t have any code that blocks them from reciting those sections. So they do recite them.
Why reply if you’re gonna say something completely obtuse?
Itâ(TM)s not about the amount used, itâ(TM)s about the amount copied. ChatGPT does not hold a copy of any of these works.

So should humans be constantly accused of copyright violation for expounding on things, based on a huge combination and re-mixing of copyrighted (and uncopyrighted) works/information experiences?

So should humans be constantly accused of copyright violation for expounding on things, based on a huge combination and re-mixing of copyrighted (and uncopyrighted) works/information experiences?
Don’t give the author’s guild any ideas. This is, after all, the same group that sued over kindle’s text to speech.
Since the first cave man painted the first picture of a bison on the wall of his cave with black soot from his fire, all art has been informed by other art. That’s how it works.
The question here is, derivative or transformative.
Lawyer on both sides know this. All else is waving flags at the potential jury pool.
So should humans be constantly accused of copyright violation for expounding on things…?
They often are.
If not, then why should the similarly functioning ChatGPT be accused of such violation?
Because ChatGPT can always tell you where they got the idea from, and can also be made to forget something if the original author wishes it. It would be unjust to try to force a human to do the same just because ChatGPT can do it.

So should humans be constantly accused of copyright violation for expounding on things…?

They often are.

So should humans be constantly accused of copyright violation for expounding on things…?
They often are.
This just shows how backward the system is. Copyright is not a natural thing, it is a legal thing so someone can extort money from someone else. It is well and good if it is done for a limited time, but the current system is broken. Copyright should end after say 10 years after registration and certainly should end upon death of the author.
Regardless of copyright, LLM is not copying anything as the GP stated. It creates new text which is similar to the text it was trained on. If I register copyrights on p
LLMs like ChatGPT store only a statistical abstraction of all of the sequences of words (from many billions of sources) that have been read into them in the neural-net training.
That is incorrect. ChatGPT can also summarize the content of almost any book written before 2021 [medium.com].

At a minimum I think if one can get the LLM to reproduce a copyrighted work, then the author can receive damages.

At a minimum I think if one can get the LLM to reproduce a copyrighted work, then the author can receive damages.
If you ask an LLM for the lyrics to a popular theme song the response provided is a conveyance of fact not a performance.
It’s no different than a person memorizing the same theme song and reciting it when asked.

This means that the company that makes the LLM must also design it to not violate copyright with its output. Otherwise copyright is dead because every copyrighted work will just be imported into an LLM and then everyone can just buy access to the LLM and use the appropriate prompt, like “please recite the work … by …”. So every copyrighted work will be sold one time, to the company that makes the dominant LLM. The LLM just becomes a copyright washing machine.

This means that the company that makes the LLM must also design it to not violate copyright with its output. Otherwise copyright is dead because every copyrighted work will just be imported into an LLM and then everyone can just buy access to the LLM and use the appropriate prompt, like “please recite the work … by …”. So every copyrighted work will be sold one time, to the company that makes the dominant LLM. The LLM just becomes a copyright washing machine.
LLMs don’t work this way. They may have better memories than some of us but none of them are that good.
To give you an idea try asking an LLM something very specific but not widely known. Ask it for example to tell you the callsigns of a random cruise ship. If you dump the context window between pr

But that only proves further that it is derivitive, because it can’t tell you about anything that it hasn’t sampled. What has it sampled that isn’t someone else’s work?

But that only proves further that it is derivitive, because it can’t tell you about anything that it hasn’t sampled. What has it sampled that isn’t someone else’s work?
LLMs are not a search index. What makes the technology useful is that generally applicable principals are being learned during training that are subsequently generally applicable within and across domains in response to prompting. The ability to apply knowledge is what sets AI apart from a search thru a database.
If you ask it to write you a joke or story a thousand times and dump the context window between each attempt you will get a thousand jokes and stories from the same exact prompt. Perhaps some by
If there’s money involved, yes, humans are subject to copyright violation for expounding on things.
…learn by studying the work of others
Using human generated work to train AI is fair use
I would never want to read a book created by AI. Only people can make creative art
Unfortunately, those who control entertainment, hate creative work and prefer sequels, reboots, remakes, spinoffs, etc. Much of what they produce might as well be created by AI
Hopefully, people will get bored and demand original, creative work by human artists

Only people can make creative art

Only people can make creative art
If that is true, then human judges of creativity should be able to easily distinguish between human art and AI art.
Guess what? They can’t.
They’ll be even less able in the future.

Guess what? They can’t.

Guess what? They can’t.
I’m not sure what retarded judges you’re talking to, but largely AI generated art is dead easy to spot. In any case the topic has nothing to do with the quality of the output and everything to do with creative expression. There’s a reason e.g. midjourney images are easy to spot, they all *look the same*. They lack any kind of creativity, and above all they require creative input in order to generate any useful outcome at all.

I’m not sure what retarded judges you’re talking to, but largely AI generated art is dead easy to spot.
In any case the topic has nothing to do with the quality of the output and everything to do with creative expression. There’s a reason e.g. midjourney images are easy to spot, they all *look the same*. They lack any kind of creativity, and above all they require creative input in order to generate any useful outcome at all.

I’m not sure what retarded judges you’re talking to, but largely AI generated art is dead easy to spot.
In any case the topic has nothing to do with the quality of the output and everything to do with creative expression. There’s a reason e.g. midjourney images are easy to spot, they all *look the same*. They lack any kind of creativity, and above all they require creative input in order to generate any useful outcome at all.
Literally every point made above is completely backwards.
Midjourney images “look the same” because of product management decisions made by Midjourney’s product engineering and marketing teams to offer a consistent, solid user experience that can reliably deliver art that meets or exceeds the needs and expectations of the product’s target audience, at a price they would be willing to pay.
Try installing Stable Diffusion locally on your computer, and you can tailor the “AI art style” to whatever you see fit. No need to make inaccurate generalizations about AI.
Care to tell us where you get the numbers ?
Studies ?

All the claims seem to have fallen flat on their face

All the claims seem to have fallen flat on their face
If you believe that, you’re not paying attention.
LLMs make mistakes. Sometimes hilarious mistakes. But more often than not, they are correct, and the error rate will fall rapidly with improved training and faster hardware.
For ChatAI type AI, it has been the “next bubble” for quite some time. There are grande, massively overstated claims, and when you look, rather simplistic demos on the skill level of a beginner. There are massive, massive problems that have been demonstrated, like model poisoning, unfixable damage to models by recursion (https://arxiv.org/pdf/2305.17493v2.pdf), the impossibility to make LLMs safe (https://arxiv.org/pdf/2209.15259.pdf), ChatAI getting dumber in most/all other areas when you try to fix proble
Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.
Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.
I could claim the opposite. There is no creativity in deductive models since all the true statements are already determined by the axioms. If you claim that the creativity is finding interesting true statements, given the complexity, then the creativity is in
That is stupid. “Creativity” != “creating original information”. https://en.wikipedia.org/wiki/… [wikipedia.org]
Incidentally, you just stated that either creativity is impossible or limited to sentient beings _and_ that sentience is an extra-physical phenomenon. Are you sure you wanted to do that?

Creativity requires experiencing the real world.

Creativity requires experiencing the real world.
Would you agree someone who has never been able to move, see, feel, smell but can listen can’t be creative because they’ve never experienced any of the things described to them in the real world?
Is star trek creative? Afterall nobody has ever been in a starship before or gone to any strange new worlds. Without experiencing a real starship or other planets, without any relevant experience how can sci-fi be creative? If the answer is some form of extrapolation and application of learned experience the foll
That is stupid. “Creativity” != “creating original information”
So you don’t think the LLMs create original information? Deduction sure doesn’t do that. In Shannon’s information theory, randomness plays an essential role, so if you actually defined things, you might be stuck with probability/statistics.
Also, most LLMs provably generate original text. The output range of these LLMs is enormous. They can’t help but generate original text. Of course, a monkey can easily generate original “text”, so th

Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.

Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.
It can write bedtime stories.
PROMPT: Please write a bedtime story about a grump who shits on everything he doesn't understand.

Once upon a time, there was a grumpy old man named Mr. Grumps. He lived in a small town with his cat and dog, but he wasn't very happy. You see, Mr. Grumps didn't like anything new or different. If something didn't fit into his idea of how things should be, he would get angry and start complaining.

Mr. Grumps was especially grumpy about technology. He thought it was a

Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.

It can write bedtime stories.

Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.

Also obviously, the claim is true and can be mathematically proven. Statistical models _cannot_ be original. That is fundamentally impossible. Only deductive AI models can theoretically be original and they drown in complexity before they get there.
It can write bedtime stories.
It can do a lot of simplistic things (with low reliability), because simplistic things can be easily and statistically derived from its training data. Most people forget how _much_ training data went into these systems. These systems however cannot go beyond that training data and always fall short of what the training data would have allowed something with real intelligence to do with it. Statistical derivation of things is always incredibly shallow. There is not even one real deduction step in there.

It can do a lot of simplistic things (with low reliability), because simplistic things can be easily and statistically derived from its training data.

It can do a lot of simplistic things (with low reliability), because simplistic things can be easily and statistically derived from its training data.
Well the machine did manage to create an original bedtime story despite your claim it can’t be original.
Come to think of it previously you admitted to never even having tried GPT-4.
You previously claimed “ChatGPT cannot even do a simple addition of two arbitrary numbers, the model is simply incapable of doing something like that. ” which didn’t age well when you were instantly proven wrong.
Before that you said “The only impressive thing about LLMs is the language interface, not the utterly dumb “reasoning

Well the machine did manage to create an original bedtime story despite your claim it can’t be original.

Well the machine did manage to create an original bedtime story despite your claim it can’t be original.
Your evaluation is flawed. This story is not original, but deeply derivative. All this shows is your lack of insight.

Your evaluation is flawed. This story is not original, but deeply derivative. All this shows is your lack of insight.

Your evaluation is flawed. This story is not original, but deeply derivative. All this shows is your lack of insight.
What if anything is the objective basis for this claim? If it was deeply derivative and not original what was it deeply derived from? What objective criteria do you believe must be met for something to be considered original that was not met?
Can you for example point to an original bedtime story and contrast that with the machine generated story showing how and why the definition applies to the “original” yet why the machine generated story falls short?
Is there an objective falsifiable means of discrimina
Wow, Robin Thicke should have picked you as a lawyer for his copyright case as clearly you know better than his lawyers, the judges, and the experts. Making a full copy of a work is not needed for copyright infringement. The fact is that if it can be shown that the training included a work, and that it can be triggered to output something very similar, that’s a lawsuit that can go either way.
So If I borrow a book from a friend, or read an article in the book store, or download a PDF of a book and read it. Them am I liable for what I read from that if I tell someone else, or summarize that? If I watch a TV show, say on Youtube. Then that is copyrighted material in my brain if I go an use it?
We need laws that clarify it is ok for AI to be trained on publicly accessible and/or purchased content. If you make it publicly accessible I should have the right to train my AI on it. When you put information out into the world you can’t expect a cut based on what someone does with that info. If I read a book on aerodynamics and build & sell airplanes I don’t owe the author of that book any money.
We should be able to recreate anything they found.
Reviews and comments for books were certainly ingested and they can be lengthy and contain considerable plot detail and summarization. So I would expect the LLMs to be aware of these works and be able to summarize them.
I’d like to see how they “know” a certain text was used. The devil is in the details (not the summaries).
I don’t have any of these nor read them. I might try looking for Stephen King details, he has also claimed this.
Would be interesting if one of those authors ever tried to copyright an “original” blues song.
Yes… I know. “All works are derivative” is annoying click-bait. The statement is both completely true and entirely useless.
But it would be easy to prove false given that ChatGPT is unable to understand even the most basic concepts. People need to realise that fundamentally repeating something, mixing something, and understanding the context of something is not the same thing.
I don’t see how it can be a derivative work. Derivative works are copyrighted works based on something else. The OpenAI responses were created by an AI which cannot be the author of a copyrighted work so they can’t be derivative works.
Not really, The legal issue is whether the output of the chatbot is derivative – which requires permission from the original copyright holder to distribute – or transformative – which does not.
Google’s book scanning project face the exact same claims in multiple lawsuits, and won on the basis that a full text searchable index , is a new kind of thing, and thus, transformative.
This is really the only legal argument that matters. If it’s transformative, it doesn’t matter who wrote the works used to train, or how they’re used in train, only that it’s transformative.
Both sides have coherent arguments in this, but Authors Guild, Inc. v. Google, Inc. is going to be pretty relevant.
Not really, The legal issue is whether the output of the chatbot is derivative – which requires permission from the original copyright holder to distribute – or transformative – which does not.
Google’s book scanning project face the exact same claims in multiple lawsuits, and won on the basis that a full text searchable index , is a new kind of thing, and thus, transformative.
This is really the only legal argument that matters. If it’s transformative, it doesn’t matter who wrote the works used to train, or
I have no idea how this will turn out when it gets to SCOTUS, only that it will, and that the Google case will figure prominently is at least one side’s arguments.
And after that, Congress may well alter the law to suit the desires of whoever owns them that week.
Ban Gogh didn’t invent the basic techniques, nor technology, of painting pictures. Improved, undoubtedly, but he started with what others had done before, just like all artists.

Not really, The legal issue is whether the output of the chatbot is derivative – which requires permission from the original copyright holder to distribute – or transformative – which does not.

Google’s book scanning project face the exact same claims in multiple lawsuits, and won on the basis that a full text searchable index , is a new kind of thing, and thus, transformative.

This is really the only legal argument that matters. If it’s transformative, it doesn’t matter who wrote the works used to train, or how they’re used in train, only that it’s transformative.

Both sides have coherent arguments in this, but Authors Guild, Inc. v. Google, Inc. is going to be pretty relevant.

Not really, The legal issue is whether the output of the chatbot is derivative – which requires permission from the original copyright holder to distribute – or transformative – which does not.
Google’s book scanning project face the exact same claims in multiple lawsuits, and won on the basis that a full text searchable index , is a new kind of thing, and thus, transformative.
This is really the only legal argument that matters. If it’s transformative, it doesn’t matter who wrote the works used to train, or how they’re used in train, only that it’s transformative.
Both sides have coherent arguments in this, but Authors Guild, Inc. v. Google, Inc. is going to be pretty relevant.
Would a raw statistical analysis of the text be derivative?

There is a difference between being influenced by someone and copying someone. If you copy a large number of people and make a mix, it is still a copy. This has already been litigated when humans were doing the combining (for example, DJs doing sampling).

In the example above, a musician, who had no external influences would also have no training and would probably not create anything we would recognize as music. There is a difference between making something in the style of someone and copying their work.

There is a difference between being influenced by someone and copying someone. If you copy a large number of people and make a mix, it is still a copy. This has already been litigated when humans were doing the combining (for example, DJs doing sampling).
In the example above, a musician, who had no external influences would also have no training and would probably not create anything we would recognize as music. There is a difference between making something in the style of someone and copying their work.
What a LLM produces when you give it a prompt isn’t a *reproduction* of *samples* of the works used to train it.
If I say to ChatGPT: “Review Star Trek, The Next Generation in the style of Gordon Ramsay”, the result is neither a derivation of the works of Gene Roddenberry nor of Gordon Ramsay. It isn’t a sampling of them, it doesn’t even contain samplings. Its based on statistics, no different to the predictive corrections on your phone keyboard.

Oh, bloody hell! What on Earth did they think they were doing with Star Trek: The Next Generation? It’s like they took a classic recipe and decided to jazz it up with all sorts of strange ingredients. Let’s break it down, shall we?

First off, the characters. You’ve got Captain Jean-Luc Picard, who’s meant to be the leader of the pack, but half the time he’s dishing out speeches that are drier than an overcooked turkey. And what’s with that android, Data? It’s like they grabbed a bucket of emotionless nonsense and poured it all over him. No seasoning, no depth, just blandness.

And the plots, my goodness! They’re all over the place, like a dog’s breakfast. One episode, they’re dealing with time travel, the next they’re stuck in a holodeck fantasy. It’s like they couldn’t decide if they were making a space opera or a soap opera. Pick a lane, for heaven’s sake!

Now, let’s talk about the special effects. Some of those space battles look like they were cooked up by a toddler with a box of crayons. I’ve seen better effects on a flipping cereal box. And don’t get me started on those aliens. Half the time, they look like they raided a Halloween store for their costumes.

But you know what really takes the cake? The inconsistency. One episode, they’re churning out Michelin-star quality storytelling, and the next, they’re serving up a half-baked mess that leaves you scratching your head. It’s like they had a team of chefs with wildly different skills and couldn’t figure out how to make a cohesive menu.

Look, I’m all for trying new things, pushing boundaries, and experimenting in the kitchen. But when you mess with a classic recipe like Star Trek, you better make sure you’re adding the right ingredients in the right proportions. Unfortunately, The Next Generation feels like a recipe that’s gone way off course. It’s a bit like finding a soggy bottom on your soufflé – disappointing and just not up to par.

Oh, bloody hell! What on Earth did they think they were doing with Star Trek: The Next Generation? It’s like they took a classic recipe and decided to jazz it up with all sorts of strange ingredients. Let’s break it down, shall we?
First off, the characters. You’ve got Captain Jean-Luc Picard, who’s meant to be the leader of the pack, but half the time he’s dishing out speeches that are drier than an overcooked turkey. And what’s with that android, Data? It’s like they grabbed a bucket of emotionless nonsense and poured it all over him. No seasoning, no depth, just blandness.
And the plots, my goodness! They’re all over the place, like a dog’s breakfast. One episode, they’re dealing with time travel, the next they’re stuck in a holodeck fantasy. It’s like they couldn’t decide if they were making a space opera or a soap opera. Pick a lane, for heaven’s sake!
Now, let’s talk about the special effects. Some of those space battles look like they were cooked up by a toddler with a box of crayons. I’ve seen better effects on a flipping cereal box. And don’t get me started on those aliens. Half the time, they look like they raided a Halloween store for their costumes.
But you know what really takes the cake? The inconsistency. One episode, they’re churning out Michelin-star quality storytelling, and the next, they’re serving up a half-baked mess that leaves you scratching your head. It’s like they had a team of chefs with wildly different skills and couldn’t figure out how to make a cohesive menu.
Look, I’m all for trying new things, pushing boundaries, and experimenting in the kitchen. But when you mess with a classic recipe like Star Trek, you better make sure you’re adding the right ingredients in the right proportions. Unfortunately, The Next Generation feels like a recipe that’s gone way off course. It’s a bit like finding a soggy bottom on your soufflé – disappointing and just not up to par.

If this goes through and accepted as an argument, copyright as a concept is dead.

If this goes through and accepted as an argument, copyright as a concept is dead.
Funny, it’s been a decade since Authors Guild, Inc. v. Google, Inc. [wikipedia.org] was decided over Google scanning millions of books, using the same defense, transformative vs derivative, and, despite the same predictions of hellfire raining down from the heavens, dogs and cats living together, and, yes, the death of copyright as a concept, despite all that, Google won at trial, and the publishing industry and copyright seem to be doing just fine.
You’re missing the point entirely. That was irrelevant to this, because there the argument was “is this novel enough?”
Here, the question is much more fundamental: “are you allowed to learn from material without paying the copyright holder?” Because AI learns from material. It doesn’t actually copy it. And that’s the argument, that the output of the learning process falls under the copyright, because you shouldn’t be allowed to learn from material and then generate your own creative output based on that lear

You’re missing the point entirely.

You’re missing the point entirely.
That is the only point.

That was irrelevant to this, because there the argument was “is this novel enough?”

Here, the question is much more fundamental: “are you allowed to learn from material without paying the copyright holder?”

That was irrelevant to this, because there the argument was “is this novel enough?”
Here, the question is much more fundamental: “are you allowed to learn from material without paying the copyright holder?”
What part of Title 17 prohibits it? And, by necessity, prohibits library patrons from learning from material they haven’t paid the copyright holder for, or students who haven’t paid the copyright holders for the textbooks they learn from?
(And there have been lawsuits over libraries loaning out books. That was settled over a century ago.)
Somebody paid for the material used in training, at some point. After that, the right of first sale comes into play.
The only legal issue is whether Ch
I would like to remind you that ban on abortion came from “privacy” protections. Your laser focus on “what’s in the text” misses the fact that judges are the ones who get to interpret the text, and they can choose to interpret it in very creative and clearly incorrect ways.
AI can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works, and even the basic orders given to the AI by its developers.
The copyright holders argue that if someone makes an AI they are responsible for its output, and if it copies big sections of their work then that’s copyright infringement. The same as if a human reproduced copyrighted work from memory and sold it as a service, they would be responsible.
The other issue is if AI learning is the same as
>AI can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works…
Wacom tablet can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works…
Paint brush can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works…
[Musical instrument] can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works…
You
The difference is a paintbrush is not being sold as a tool to paint your artwork for you.
The legal argument is that AI is a service provided by the developer, not a tool. The company offers a writing service, just done by AI instead of by human authors. Therefore the same copyright rules that limit how human authors can use the source material applies, to the company providing the service.
But Wacom tablets are. And so are electronic keyboards. And logic of a paintbrush is the same. It paints for you, instead of having to do it with your fingers.
The legal argument is that AI is just another tool that requires human input and generates output based on said input. You’re far too used to strawmanning arguments, and it shows even here.

AI can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works, and even the basic orders given to the AI by its developers.

AI can be induced to reproduce portions of texts it has read. People have used that to reproduce copyrighted works, and even the basic orders given to the AI by its developers.
Instructions are a different thing they typically come from context window rather than the model itself. Often you can simply ask the AI to repeat what was said earlier from a brand new session and it’ll reveal what was injected by the vendor into the context. I hear some of the vendor instructions to models are fairly insulting and derogatory towards humans.
As I argue above here: https://slashdot.org/comments…. [slashdot.org]
It is actually worse than that. I’m not sure if copyright holders suing understand it or not, but they’re de jure targeting “ability to learn from copyrighted content” and attempting to establish a precedent that you must have a permission from copyright holder to learn from any copyrighted material and produce any output based on it.
This would effectively shatter the concept of copyright as it exists, and reverse the intended purpose of copyright when
The only real question is whether ChatGPT’s owners have broken the law by illegally copying works without permission (namely, by downloading the works onto some storage devices, temporarily or not).
Wacom tablet is not an artist. It’s a thing. It’s not even a living thing. It has no rights, and whether its “work” derives from those (“who?”) that came before is irrelevant.
Painter’s brush is not an artist. It’s a thing. It’s not even a living thing. It has no rights, and whether its “work” derives from those (“who?”) that came before is irrelevant.
[musical instrument] is not an artist. It’s a thing. It’s not even a living thing. It has no rights, and whether its “work” derives from those (“who?”) that ca
That’s not how legal systems based on precedent work. Not even a little bit.
There may be more comments in this discussion. Without JavaScript enabled, you might want to turn on Classic Discussion System in your preferences instead.
Hundreds of Thousands Trafficked To Work as Online Scammers in Southeast Asia, Says UN Report
Leaked Microsoft Memo Tells Managers Not To Use Budget Cuts as Explainer for Lack of Pay Rises
Consultants are mystical people who ask a company for a number and then give it back to them.

source

Jesse
https://playwithchatgtp.com