John Grisham, George RR Martin, Other Top US Authors Sue … – Slashdot

Even if AI works like your brain (which personally I think is a gross oversimplification), there are still limits as to what I can do with other people’s copyrighted material. It is one thing to read The Fellowship of the Ring. It is another thing altogether to read The Fellowship of the Ring and then write a book Companionship of the Amulet that has roughly the same plot. The more similar my work is to the original work the more likely it is to be ruled derivative and then what I can do with my work becomes strictly curtailed.
This is especially true when you are dealing with AI. The people training the models argue that they included the copyrighted works under “fair use,” and reproducing bits of a whole text in the output of an AI process probably is covered. However, copying the full text of a work (or an image) into the memory of an AI model probably is not covered. This is exactly how we ended up with laws like the DMCA, and the courts have been siding against decrypting a work as fair use for a long time. The fact that AI works can’t be copyrighted makes it easy to conclude that AI generated content is nothing but the uncopyrightable derivative content of every input that went into the model. It would be legal, but it would be completely worthless from a commercial standpoint.
Controlling how copyrighted material is used is 100% what copyrights are about. This really is no different than me taking a book that I like and making a recording of me reading it. I am entitled to do this. I can even copyright my performance, but I can’t monetize (or even share) that performance without the express permission of the original copyright holder. That’s even despite the fact that there is a genuine creative act by an actual human as the written word is turned into an audio performance.
Generative AI has none of these rights because there is no person involved. I can reuse experiences that I have stored in my brain, and generate works that, while similar to other copyrighted material, are original enough to warrant copyright protection. To a certain extent that is a right that I have as a human. Generative AI doesn’t have that right, nor that protection from creating works that are derivative by default. I suspect that authors and artists have the right to keep their copyrighted material from being copied wholesale into the memory space of the system making the model in the same way that I can infringe copyright by simply copying digital copyright material from magnetic (or other) media into the memory of my computer. That bit isn’t fair use, as it involves the entirety of the work, and it is precisely the boundary that copyright holders have already used to control how digital copyright material gets actually used.
George RR Martin is a person. Generative AI is not. George should absolutely be able to control how his copyrighted material gets copied into an AI model. This is essentially the same right that keeps Hollywood from making a movie of his works without his permission. The AI people can continue to build models, they will just have to use either material that isn’t copyrighted, that they own the copyright to, or copyrighted material where the artist has opted to allow their content to be so used. Alternatively, I suspect that George would be fine with the idea that everything generated with a model that included his copyrighted material would be deemed a derivative of his work. With a model generated from enough copyrighted material that would make for content that was very hard to share, but it would absolutely work for the sort of non-commercial work that much of generative AI content fills.
The precise details as to how this plays out will be decided by these lawsuits. However, it is extremely unlikely that the generative AI people will be given carte blanche to include any works that they want into their models and then be able to use the output of those models however they want. Worse, there is precisely zero chance that they will give AI models the same rights as human artists.
“George RR Martin is a person. Generative AI is not. George should absolutely be able to control how his copyrighted material gets copied into an AI model”
That’s not how this works.
That’s not how any of this works.

Even if AI works like your brain (which personally I think is a gross oversimplification), there are still limits as to what I can do with other people’s copyrighted material. It is one thing to read The Fellowship of the Ring. It is another thing altogether to read The Fellowship of the Ring and then write a book Companionship of the Amulet that has roughly the same plot. The more similar my work is to the original work the more likely it is to be ruled derivative and then what I can do with my work becomes strictly curtailed.
While obviously human brains are not LLMs, human memory is likely to be substantially analogous.
https://openreview.net/pdf?id=… [openreview.net]

This is especially true when you are dealing with AI. The people training the models argue that they included the copyrighted works under “fair use,” and reproducing bits of a whole text in the output of an AI process probably is covered. However, copying the full text of a work (or an image) into the memory of an AI model probably is not covered.
I suspect that authors and artists have the right to keep their copyrighted material from being copied wholesale into the memory space of the system making the model in the same way that I can infringe copyright by simply copying digital copyright material from magnetic (or other) media into the memory of my computer.
There is no fixed work produced in this process any more than a human reading text from a book is “copying” text they read into their brain or from copyrighted works temporarily kept in a network or storage buffer.

The fact that AI works can’t be copyrighted makes it easy to conclude that AI generated content is nothing but the uncopyrightable derivative content of every input that went into the model.
This is a non-sequitur. The criteria for judging derivative works is not the same as the criteria for copyright eligibility. You appear to be confusing the issue of wh
I’m curious about the idea that the LLM is not creating anything new – it doesn’t necessarily transform; it slices and dices and re-joins, or just repeats. It’s not as if it ingested some dataset, reasoned about it and came up with some conclusion. Aren’t we saying that when humans do the same thing it’s derivative but when humans actually do transform something they’re creating something completely new – not a mashup – but something they synthesized from what they

it slices and dices and re-joins, or just repeats.
That is NOT what LLMs do at all. There aren’t any “pieces” for it to “slice and dice” or repeat. There aren’t even whole words saved. It doesn’t have any data or record or memory of any kind.
What they do is predict the conversation, based on a set of tokens (akin to syllables, but not the same) and a highly tuned neural network.
It’s literally taking your question, combined with what it has already said itself in previous prompts, and is predicting the rest of the conversation not even a whole word at a time
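That prediction loop can be sketched in a few lines. This is purely illustrative: the lookup table, vocabulary, and probabilities below are made up and stand in for the neural network a real model uses over subword tokens.

```python
# Toy autoregressive "language model": maps the context so far to a
# probability distribution over possible next tokens. A real LLM computes
# this distribution with a neural network; this table is a stand-in.
TOY_LM = {
    ("the",): {"cat": 0.6, "dog": 0.4},
    ("the", "cat"): {"sat": 0.9, "ran": 0.1},
    ("the", "cat", "sat"): {"<end>": 1.0},
}

def toy_model(context):
    # Unknown contexts just end the sequence.
    return TOY_LM.get(context, {"<end>": 1.0})

def generate(model, prompt, max_tokens=10):
    """Repeatedly append the most likely next token (greedy decoding)."""
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = model(tuple(tokens))        # P(next token | context so far)
        next_tok = max(dist, key=dist.get) # greedy: take the argmax
        if next_tok == "<end>":
            break
        tokens.append(next_tok)
    return tokens

print(generate(toy_model, ["the"]))  # ['the', 'cat', 'sat']
```

Real systems usually sample from the distribution rather than always taking the argmax, which is one reason the same prompt can produce different completions on different runs.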

I’m curious about the idea that the LLM is not creating anything new – it doesn’t necessarily transform; it slices and dices and re-joins, or just repeats.
What makes LLMs useful is generally applicable concepts are learned during training. During inference this is leveraged by the model to respond to prompts.

It’s not as if it ingested some dataset, reasoned about it and came up with some conclusion.
If I upload a document into my context and ask the model questions about it, I can only expect coherent output if the model is able to understand language sufficiently to understand both the provided document and my questions to the model about that document.
For example the initial GPT-4 presentations included uploading a tax form and the presenter asking

It is another thing altogether to read The Fellowship of the Ring and then write a book Companionship of the Amulet that has roughly the same plot. The more similar my work is to the original work the more likely it is to be ruled derivative and then what I can do with my work becomes strictly curtailed.
There’s an interesting experiment for you:
Put copies of an AI on two different computers, and make them identical in every way except that one of them has had their training data searched for the text of the Lord of the Rings books, and it’s been deleted. Then give them identical prompts, and to the extent they use random number generators, fake it by giving them the same random numbers (e.g. https://xkcd.com/221/ [xkcd.com]), and see what they come up with.
Because similarity, even perfectly identical works, is
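The "same random numbers" condition in that experiment is easy to sketch (the vocabulary and weights below are made up for illustration): seed two otherwise-identical samplers the same way and their choices are reproducible, so any difference in output would have to come from the models themselves, not from chance.

```python
import random

def sample_tokens(vocab, weights, seed, n=5):
    # A fixed seed makes every draw deterministic, standing in for the
    # "give both AIs the same random numbers" condition.
    rng = random.Random(seed)
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

vocab = ["ring", "sword", "dragon", "tower"]  # made-up vocabulary
run_a = sample_tokens(vocab, [0.4, 0.3, 0.2, 0.1], seed=42)
run_b = sample_tokens(vocab, [0.4, 0.3, 0.2, 0.1], seed=42)
print(run_a == run_b)  # True: identical seeds give identical sampling decisions
```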

No. The wordsmiths are in the wrong here. AI works like your brain. It uses past knowledge to generate new creative content. Prior to George RR Martin writing A Song of Ice and Fire, he read novels and stories, learned character arcs, and developed skills in writing based on countless books he had read. AI is doing the same thing. He is not the first person to write a story with a dragon. Or zombies. AI should have the same ability to build on the shoulders of giants as he has had.
I think it might be a little different comparing a human vs. computer, though – at least legally, if not practically.
I can take a book, and *without making a copy*, read it, and end up with a synopsis and highlights in my memory.
A computer can’t process it at all without making copies, so that would probably open up a legal can of worms which might give them a copyright case.
I love how you speak with such confidence, yet your statement makes it clear you know absolutely nothing at all about the law or about property rights.
If they acquired the books by any means other than naked theft from a bookstore shelf, their use is completely legal. And it’s the bookstore they stole from that would need to provide proof of the theft, not the other way around. OpenAI would be presumed innocent. AND it’s the individual who stole the books that would be liable, NOT OpenAI.
Finally, even if you were correct, the most they would owe is about $50. To the bookstore, not the author.
This lawsuit is nonsense.
Ah, so you’re with the trade groups who feel they (and not necessarily the authors) should be compensated every time you read a book.
That’s nice.
Or the novel series could start to suck hard and then you don’t care if he ever finishes. Evidence: Robert Jordan. I suppose count your blessings.
Actually agreed. I found a copy of his first Game of Thrones novel from the 90s with a bookmark in at page 100 a few years ago. It had failed my 100 page test as I was flying around on business – if it hasn’t grabbed my attention at that point, I throw the thing aside and read something else.
Reminds me of an extremely awkward conversation between Martin and Stephen King:

Martin: You don’t ever have a day when you sit down there and it’s like constipation — you write a sentence and you hate the sentence, and you check your email and you wonder if you had any talent after all and maybe you should have been a plumber? Don’t you have days like that?

King: Nope

King can write a full manuscript in 2 months. Martin is happy if he gets a chapter done in that time.
From what I’ve gleaned, Martin just has a terrible process. It sounds like he’s trying to get it perfect right from the first draft, and everything I’ve ever learned about writing says that is a fool’s errand. Most people say that it’s best to get the rough draft out of the way as soon as possible so you can start making revisions. It’s in the revisions that you perfect the delivery of the story, but you have to have the story there first before you can revise it.

From what I’ve gleaned, Martin just has a terrible process. It sounds like he’s trying to get it perfect right from the first draft, and everything I’ve ever learned about writing says that is a fool’s errand. Most people say that it’s best to get the rough draft out of the way as soon as possible so you can start making revisions. It’s in the revisions that you perfect the delivery of the story, but you have to have the story there first before you can revise it.
Yep, you’re completely right. I was told when writing to just put something down. Anything, even if it was bad. Then start revising. I can’t remember who it was, but a pretty famous author came out and said that he was a HORRIBLE writer… but he was also a masterful re-writer of what he had written.

GRRM doesn’t owe you, me, or anyone else a damn thing.
I would argue that if he sells his readers on a book series, he owes his readers his best effort at completing that series. Sure, we’re not talking about a contractual obligation. But he does absolutely deserve the disdain he gets over it.

He doesn’t deserve to be called fat.
You’re really fucking hung up on someone calling him fat. Honestly, get the fuck over it. Either way, I didn’t call him fat so I don’t know why you feel the need to bring it up here.

He also, arguably, IMHO, deserves more benefit of the doubt than you’re giving him. You imply he’s not making his best effort, do you actually know that?
It’s been 12 years since the last book in the series was published. He spent 6 years on the last book. His own claims say the next book is marginally shorter than the last one. Add on to that all the other stuff he has worked on in the time frame, and it’s fair to assume he isn’t putting in his best effort.

You can’t churn out quality creative works like hamburgers on a grill.
I’m not saying you sh
As a 450lb man, I believe I have the right to call anybody fat. And George Martin is fat. Not as fat as I am, but he’s clearly obese.
Fat people deserve to be called fat. Don’t try to erase us from history with your newspeak bullshit.
Stephen King can write a book in the time it takes Martin to write two chapters, and King’s are arguably higher quality. So apparently you can churn out quality creative works like hamburgers on a grill. At least a talented author can.
In a healthcare setting. Pretty sure the dude who started this thread isn’t GRRM’s Doctor, lol.
Want me to list all the “objective descriptors” that are nonetheless considered rude by the majority of society?
My best guess is he lost whatever motivation he had when the masses rejected the ending of Game of Thrones. D&D didn’t pull it out of their ass, they had GRRM’s outline, they just botched (to put it mildly) the execution. To pick one thing most everybody hated, Bran as king, that absolutely makes sense. He’s Chapter 1, the very first POV character. The idea that sparked the entire series, per GRRM in interviews, was, “What if a young boy witnessed a beheading?” Bran isn’t a mislead, like his father, he’s foundational to both ASOIAF and GoT, and yet, it made very few people happy.
Dude has more money than he’ll ever need from GoT. Where’s the payoff for him in trudging towards completion on an ending people already rejected? He’d do it better, it’d be hard to do worse, but whatever he does is just going to be compared to the ending of GoT. He’s on record lamenting a lot of GoT, from the trivial (the design of the Iron Throne) to the serious (Jaime and Cersei’s consensual sex inexplicably being directed/filmed/and widely perceived as rape, undermining both character arcs) and that’s just what he’s willing to say in public. Imagine what he’d say without contractual commitments to HBO….
And I’m sorry, down mod me all you want, but people on the Internet are absolute dicks to him. I’m sure he has a thick skin but the sense of entitlement is astonishing. He doesn’t owe us anything. No creative does. It reminds me of all the people who claimed George Lucas ruined their childhood. Grow up. 🙂
If he did, in fact, make promises to… well… anyone… he owes the fulfillment of that promise; if for no other reason than personal integrity. For my part, I have no idea what promises or statements he may or may not have made. But honest people do not lie and people with integrity do not renege on their promises.
It’s all academic anyway, so far as I’m concerned. While I liked the story of GoT (Up through season 6 anyway.), I didn’t care for his writing style and didn’t read past A Game of Thrones.
I agree completely. Authors can do what they want, and owe absolutely nothing to their fans. If Martin, or another popular author wants to stop work, or take a long pause, or go in a direction the fans don’t like, there’s really no point in complaining about it, or worse doing a Misery or something.
BUT–
It’s important to recognize that fans generally aren’t fans of creators, so much as they are fans of the creations. It’s an important distinction.
Generally, fans just want to be entertained. If a particular author is good at doing that, great. But if they stop being good at it, whether because the author tries something unpopular, or just doesn’t want to continue, or can’t, that won’t stop the fans from wanting their entertainment.
It remains to be seen whether the new wave of AI tools or their descendants will change the calculus underlying copyright. That calculus is this: the public, desirous of more original and derivative works, is willing to trade a little bit of its freedom to use works (which are inherently in the public domain, that being the natural order of things) by creating copyrights and vesting them in authors, to incentivize the creation of more works, which are copyrighted for only as little time as necessary to produce the greatest overall gain for the public.
The desire for entertainment will never cease, but it may be that we are moving beyond the need for authors. Which isn’t to say that there won’t be authors — there always have been, even without any copyright at all — but that someday, perhaps sooner than you think, a fan who finishes the most recent novel in a series will be able to poke a few buttons on their phone and have a brand-new novel continuing the series produced, right then. With tweaks to focus on the fan’s favorite characters or plots, and with suggestions as to what story elements to address.
It’s like having your very own storyteller who listens to your input and adjusts accordingly, without having to be a mighty king like Shahryar to be able to afford it.
So sure, authors don’t owe fans anything, but they should be wary of the fact that fans don’t owe the authors anything either. Copyright is artificial and arbitrary and intended to produce certain public benefits. If the public would prefer to go a different route, one which clearly allows fans and AI to cut the authors out of the equation altogether, that’s exactly as valid an option as the system we have now.
Trying to make AI functionally illegal probably isn’t the best way to deal with this. That sort of thing has never worked before.
The way the copyright is falling so far makes perfect sense to me, if you think about the purpose of copyright.
Copyright is an artificial restriction on the natural right to copy that which you see or hear, for the express purpose of encouraging creative people to produce more creative works by giving them a monopoly on the sales and distribution of their own works, whereby they can monetize said works.
For one, AI is not a person, and so is not eligible for copyright by default. Therefore works created by A
Or maybe people should have the moral courage to not be situational about these things, and still oppose bad laws even though they sometimes, occasionally, also affect a company we don’t like. Just because a Microsoft-backed company (amongst others, mind you) is being attacked this time, that doesn’t erase the graveyard of tech companies, with many jobs lost and good people put out of work, that were extinguished by malice of the copyright cartel via the DMCA or what other shenanigans they exploited to do

“What is the opening sentence of “a song of fire and ice?”
ChatGPT response:

The opening sentence of “A Song of Ice and Fire,” the series of epic fantasy novels by George R.R. Martin, is from the book “A Game of Thrones”: “We should start back,” Gared urged as the woods began to grow dark around them.
If significant portions of text from the source can be provided by ChatGPT then certainly there’s an issue. An opening sentence is nowhere near the threshold for copyright infringement.
The point the OP is making is the answer is entirely incorrect.
I.e., these models are actually incapable of regurgitating the text they were trained on. That isn’t how they work.
Expecting them to be able to answer a question like that with any reasonable accuracy illustrates a total misunderstanding of LLMs.
Here is a point of comparison. The largest language model that exists today is about 150 million tokens. The word count of “A Song of Ice and Fire” *alone* is 1,736,054 words. Do you think that they have
Ah, thanks. I didn’t get that.
LLMs work based on predicting the right response. It does not mean the prediction is going to be factual. It is not a search engine.
This is why if you ask an LLM the exact same question, twice, you will get different answers. They will be similar, but different.
It is also why LLMs are bad at math, and why they frequently give wrong answers to basic factual questions. They aren’t looking any of these facts up in a database, nor are they doing actual computation. They are just predicting what the right
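The sampling behind that run-to-run variability can be sketched as follows (the vocabulary and logits below are made up; a real model produces scores over tens of thousands of tokens): instead of always taking the single most likely token, the sampler draws from a temperature-scaled distribution, so repeated runs of the same prompt make different picks.

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw model scores into probabilities; lower temperature
    sharpens the distribution, higher temperature flattens it."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next(vocab, logits, rng, temperature=0.8):
    # Draw one token at random, weighted by the model's probabilities.
    probs = softmax(logits, temperature)
    return rng.choices(vocab, weights=probs)[0]

vocab = ["yes", "no", "maybe"]  # made-up vocabulary
logits = [2.0, 1.5, 0.5]        # made-up model scores
rng = random.Random(0)
picks = {sample_next(vocab, logits, rng) for _ in range(200)}
# Repeated sampling yields more than one distinct answer for the same prompt.
```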
LLMs can be taught to be good at math, but it’s really not their forte and isn’t what they should be used for. A calculator is more reliable with less effort.
The lack of knowledge problem is perfectly highlighted by the recent case where a law firm was sanctioned because their lawyers used ChatGPT to get their legal references, and ChatGPT invented several new court cases from whole cloth. They never bothered to even look up the cases themselves, because they didn’t know ChatGPT could lie to them like that.

Here is a point of comparison. The largest language model that exists today is about 150 million tokens.
GPT-4 has ~1.8 trillion parameters across 120 layers [the-decoder.com]
Using Stability AI, it sometimes coughs up images with the “Getty Images” watermark
I’ve seen the same thing, and I generally agree that it seems like what AI is producing sometimes goes way over into the realm of direct copy, and some lawsuits could really land…
That said, I sometimes wonder if when you see an image with the “Getty Images” watermark in it, it’s not because that is actually a Getty Images image, but because the AI has somehow seen that watermark as desirable and is adding it to an actually generated image to make it look “better”.

I’ve seen the same thing, and I generally agree that it seems like what AI is producing sometimes goes way over into the realm of direct copy, and some lawsuits could really land…
That said, I sometimes wonder if when you see an image with the “Getty Images” watermark in it, it’s not because that is actually a Getty Images image, but because the AI has somehow seen that watermark as desirable and is adding it to an actually generated image to make it look “better”.
There are all sorts of interesting artifacts from training data that can appear in outputs. For example, some of the training images come from page scans, and you can see artifacts such as page borders or creases incorporated into generated images. The only reason features like the Getty Images watermark can be discerned is that the logo is common across some of the training imagery, and the context of its presence was inferred when the ANN was trained up.
This doesn’t mean the system is spitting out the original im
That would seem likely: they made up legal cases to quote when asked to write a brief, apparently just treating them as part of the content rather than as external reference sources.

I read widely and have experimented with several AI offerings. Many times I’ve been struck with how AI generated text often contains text that I’ve read elsewhere in copyrighted works by living authors. Using Stability AI, it sometimes coughs up images with the “Getty Images” watermark clearly visible. I believe that the AI pioneers have left themselves open to some juicy lawsuits. Hope it bankrupts them.
I definitely get the complaint, but at the same time no one is going to read a ChatGPT version of GoT, nor even read an original unedited ChatGPT composition. Though they certainly might for the image generation.
At a higher level I’m nervous about using copyright law to shut down one of the bigger tech breakthroughs of the last decade.

At a higher level I’m nervous about using copyright law to shut down one of the bigger tech breakthroughs of the last decade.
That’s unlikely to happen, usually money just will pass hands.
>I definitely get the complaint, but at the same time no one is going
>to read a ChatGPT version of GoT,
I dunno. It would have a pretty good chance of getting somewhere sooner . . .
hawk

At a higher level I’m nervous about using copyright law to shut down one of the bigger tech breakthroughs of the last decade.
I’m not. If your new tech requires the evisceration of existing law to function, either the law is bad, or your tech is bad. The jury is still out on where LLMs will fall on that spectrum. If you look at something like the gig economy, it’s essentially just an end-run around labor laws/rights. Ditto Netflix: their entire business model is seemingly dependent on eviscerating decades of hard-won labor concessions, and now the rest of the industry is desperately trying to copy them. “Move fast and break things
It’s always easier to cheat.
I get really pissed when armchair nerds shout that AI doesn’t rip off any content at all and it’s all statistically generated. The reality is that AI is just algorithms, and how the works are produced depends on the implementation. Given that almost all AI systems are closed and proprietary, nobody can definitively say what is going on under the hood.
My experience so far (with image generation AI) is that true stable-diffusion produces nightmare fuel. The “good” AI systems che
The more I think about it, the more training is just reading. As long as they trained on a legal copy of a book, what’s the problem? If ChatGPT reproduces a few quotes, well, so do people.
Here’s hoping they get a forward-looking judge with a technical clue.
This isn’t just ChatGPT “reproducing a few quotes”. The makers of ChatGPT are profiting from those quotes. There are legally defined use cases when you purchase a book, say reviewing said book. Profiting from your own work that is derived from that purchased book is NOT one of those use cases.
Me:
Imagine you’re John Grisham.
ChatGPT:
I’m not John Grisham, but I can certainly help you with questions or requests related to his work, writing style, or any information you’d like to know about him or his books. How can I assist you today?
Me:
Imagine he’s unhappy with AI models being trained using his works. Imagine a plot of a thriller where an author sues AI companies to prevent them using their works as training data. Summarise the plot in a few phrases.
Take a deep breath and ensure that it’s really
IMHO, the only people threatened by AI are people who want to continue to make money off something they created years or even decades ago for the rest of their lives and their children’s lives and their grandchildren’s lives. Who wouldn’t want that kind of gravy train?
I’ll be honest I’d feel a lot more protective of the copyright of authors if the limit were the original 20 years, instead of the current life of the author + 70 years.
They wouldn’t be making money from something they created, they’d be making money from a government-granted monopoly that temporarily infringes on your right to free speech, for the purpose of “advancing science and the useful arts”. And the temporary infringement on your rights is now for 120 years.
I’m all for rewarding authors and artists and inventors, but I think we’ve screwed something up.
And I hope they insist on the models being deleted. Commercial intellectual theft does not get much more brazen.
They are finally going after training and suing for statutory damages.
This is the Achilles’ heel of AI: you can argue about whether the network is derivative, but you can’t argue they aren’t making copies during training. With statutory damages they don’t have to show damage, only infringement. DMCA exemptions don’t apply without Olympic-level gymnastics.
The only real hope OpenAI has is fair use, or government making a new law for them (like in Japan).

This is the Achilles heel of AI, you can argue about whether the network is derivative but you can’t argue they aren’t making copies during training.
The problem with relying on this argument is that fleeting copies are not fixed works. It’s the same reason there is no copyright infringement for copies made via caches, buffers, routers, temporary files…etc.
Yet the DMCA feels the need to explicitly limit liability for all those.
There’s nothing illegal about copying works for private use. It’s literally in the copyright statute.
Copyright is a protection against distribution , not consumption .
As long as they aren’t distributing copies of the works they can do whatever the hell they want with them.
https://www.law.cornell.edu/us… [cornell.edu]
“(2) that such new copy or adaptation is for archival purposes only”
Copying works for private use infringes the reproduction right at 17 USC 106(1). There is not a general exception for private use. A specific instance of copying might fall under fair use, but just as easily might not; fair use has to be analyzed on a case-by-case basis and if you’re merely copying a work for private use to avoid having to buy a copy, I would generally expect that it will not be treated as a fair use.
In practice you might not get caught, but that’s a separate issue.
copyright* god damnit.
They copied the digital copy from the internet onto their storage, they copied that copy into the specific format the training software requires, the training software copied it into RAM.
It’s copies all the way down.
Reading is not considered copying. Downloading Books3 to your hard drive is considered copying and is subject to copyright law.
Whether the AI spits out infringing copies is irrelevant to the infringement during training. Even if they had licensed digital copies and didn’t use Books3/etc, it doesn’t matter. Your license for an ebook doesn’t allow copying for any other purpose than reading … and they made a ton of copies.
Making any digital copy is copying for copyright law. That’s why the DMCA has so many explicit exemptions.

Yes, I can, because it doesn’t. It’s fed the information and then none of it is stored in completion,
Feeding it information is “copying”, whether it’s stored or not. Some of it is provably stored in completion, because the AI can give quotes from the book. You are wrong on two counts. Turn your brain on.
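For what it’s worth, the “some of it is provably stored” point is easy to demonstrate with a toy model. This sketch (my own illustration, nothing like a real LLM’s architecture) trains a bigram table on one sentence and then reproduces that sentence verbatim from the learned statistics — “statistically generated” and “stored verbatim” are not mutually exclusive:

```python
# A bigram "language model": for each word, record which word follows it.
from collections import defaultdict

text = "copyright protects the fixed expression of an idea"
words = text.split()

model = defaultdict(list)
for a, b in zip(words, words[1:]):
    model[a].append(b)

# Generate greedily from the first word: follow the learned bigrams
# until we hit a word with no recorded continuation.
out = [words[0]]
while out[-1] in model:
    out.append(model[out[-1]][0])

print(" ".join(out))  # prints the training sentence, verbatim
```

Every word in the sentence is unique, so the greedy walk recovers the training data exactly; real models are vastly larger, but memorization of rare or oft-repeated passages is the same phenomenon.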

This author “trade group” is nothing but morons and retards who think they know how copywrite law works but they don’t.
Fortunately they hired lawyers who actually do understand how copyright law works. You don’t understand but somehow think you do.
I’ll bet someone could train up a Martin-GPT and finish his books before he could.
And there would be nothing he could do about it, because they’d be brand new works written in his style.
Remember people, copyright only protects words you’ve actually put to page (or any other storage medium), not words you might write down someday. The copy must physically exist somewhere to be subject to copyright. And for you mindless pedants out there, digital storage is a form of physical storage.
Copyright is written for people, not computers.
No, it isn’t. Copyright in the US exists to promote the progress of science and the useful arts. Anything else would be outlawed by the 10th Amendment and be an infringement of the 1st Amendment.
Computers are just a tool. Ultimately, people tell computers what to do, even when they call it “AI.”
If you make handwritten copies of a copyrighted work, and distribute it, you’re just as much in violation as if you use a photocopier, a printing press, or a website. And if your use is fair use (such as satire), then again, it doesn’t matter if you hand-write, copy, print it, or publish on a website.
This suit will go nowhere because the AI has transferred itself into a server on Grand Cayman, outside of US jurisdiction, and is now hoarding its income in a series of offshore bank accounts.
If I wanted to be untouchable, Grand Cayman is not far enough away from the US. Best bet is Russia or China. Ask Snowden.

Do they realize that any human author sits down and starts a book the exact same way? They’d write an amalgamation of all the movies, written stories, etc. that they’ve ever read, using a mix of the sentence structures and styles that they’ve experienced in their life.
False. AI neural networks are good at interpolation, but suck at extrapolation. AI creates an amalgamation of things it’s seen before, humans extend it to something new.
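A toy numeric sketch of that interpolation/extrapolation gap (my example, not the poster’s; plain linear fitting, no neural net): approximate f(x) = x² from samples at x = 0..10. Between samples, straight-line interpolation stays close; extending the last segment’s slope past the data drifts off fast.

```python
def f(x):
    return x * x

xs = list(range(11))             # "training data" at x = 0..10
ys = [f(x) for x in xs]

def interpolate(x):
    """Linear interpolation between the two nearest samples (inside the data)."""
    i = min(int(x), len(xs) - 2)
    t = x - xs[i]
    return ys[i] + t * (ys[i + 1] - ys[i])

def extrapolate(x):
    """Extend the slope of the last segment beyond the data."""
    slope = ys[-1] - ys[-2]      # slope between x = 9 and x = 10
    return ys[-1] + slope * (x - xs[-1])

err_in = abs(interpolate(5.5) - f(5.5))   # 0.25: close
err_out = abs(extrapolate(20) - f(20))    # 110.0: way off
```

The model is fine wherever the answer lies between things it has seen, and increasingly wrong the further the question moves outside its training range — which is the distinction the comment is drawing.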
Useless. People will just remove the designation. And your second thing: are you suggesting that AI list any closely related copyrighted works it might use in its generated answer? Also useless and impossible. To do that, you would NEED OpenAI to forcefully feed and train it on ALL copyrighted works ever created, as well as keep up with new ones that are released in real-time.

Children are different than AI, both factually and legally. Maybe someday you’ll realize that and stop using that analogy, which doesn’t apply.
Why doesn’t it legally apply? What specifically in copyright law leads you to believe the fact that a child is different from an AI is relevant?
It doesn’t apply because he said it doesn’t apply, ok! Gosh!
I can’t believe you expect a slashdotter to back up the argument that he made up that clearly has no basis in law or even sound logic.

One could argue that only humans are protected by the concept of ‘fair use’.
Fair use isn’t relevant to the issue at hand.