ChatGPT will kill off the Romantic genius

Sam Leith is literary editor of The Spectator and the author of Write To The Point: How To Be Clear, Correct and Persuasive on the Page

January 26, 2024
You have to salute the brass balls of the Japanese literary novelist Rie Kudan. She was accepting one of Japan’s most prestigious literary awards, the Akutagawa Prize. The judges were hosing her new novel, The Tokyo Tower of Sympathy, down with treacle, one member of the committee announcing that: “The work is flawless and it’s difficult to find any faults.” Then, right at that moment, live in the room, and all casual-like, she announced that a large chunk of the book had been written by ChatGPT. “I made active use of generative AI like ChatGPT in writing this book,” she said. “I would say about 5% of the book quoted verbatim the sentences generated by AI.”
At once, a Japanese prize barely heard of in the Anglosphere was making headlines around the world. Has the worst nightmare of the gatekeepers of high literary culture come true? Do the judges of the prize have faces dripping with egg, like a Master of Wine who picked the Blue Nun in a blind tasting? And is Rie Kudan, for her part, cheating? Well, no… and no.
To the latter question, we could counter that she was using AI-generated language precisely to make a point about AI-generated language: that is, that she wanted (as she puts it) to test the way “soft and fuzzy words” can obscure our ethical clarity. We could counter, further, that ChatGPT didn’t decide which 5% of the novel was going to be by ChatGPT, nor which bits of ChatGPT’s language were going in there.
And to the former question — whether the judges were just wrong to think the novel good — we can say: surely we’re past all that. If they thought The Tokyo Tower of Sympathy was flawless, then by their own lights — and if they’re decent critics we can expect readers to agree — it was. Didn’t we all learn at GCSE that the reader’s interpretation of the poem is much more important than trying to figure out what the author meant by it? Didn’t Roland Barthes pronounce The Death of the Author as long ago as 1967?
The idea of the single, inspired author originating a text of near-sacred originality is itself a hangover from the Romantics. It’s a two-century blip. Before then, fiction-writers often did their damnedest to pretend they were copying from someone else, even when they were making it up. Chaucer was forever talking about “myn auctor”, and a text that came adapted from a precedent was seen as more trustworthy and high-status than one that didn’t. Milton reworked the Bible, Shakespeare reworked Holinshed, and so on and so forth.
In more recent times, experimental and modernist authors have been using randomness, or the home-made equivalent of algorithms, to generate their texts for 100 years or more. In 1920, the Dadaist eminence Tristan Tzara announced that poetry could be written by taking a newspaper article of the length you wanted your poem to be, cutting it into its constituent words with a pair of scissors, shaking them about in a bag and then transcribing them in the random order in which they emerged. There’s a funny bit about it in Tom Stoppard’s play Travesties.
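As an aside, Tzara's recipe really is mechanical enough to express in a few lines of code. What follows is a minimal illustrative sketch, not anything Tzara or the author describes beyond the procedure above; the function name is invented for the example:

```python
import random

def dada_cut_up(article: str) -> str:
    """Tzara's recipe as described above: cut a newspaper article into
    its constituent words, shake them about in a bag, and transcribe
    them in the random order in which they emerge."""
    words = article.split()    # the scissors
    random.shuffle(words)      # the bag
    return " ".join(words)     # the transcription

# Any newspaper paragraph of the desired length will do.
print(dada_cut_up("The judges were hosing her new novel down with treacle"))
```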
That was only the starting gun for all manner of literary jiggery-pokery. William S. Burroughs and his collaborator Brion Gysin picked up Tzara’s baton in the Sixties, experimenting with “cut-ups” (much like Dada poetry) and “fold-ins” (where you would fold two pages of an existing book together so the edges met, and read across the fold to make a new text). The fantasy writer Jeff Noon’s 2001 book, Cobralingus, presented a set of algorithmic instructions for transforming a text through what Noon called “filter gates”, something analogous to a DJ remixing a record. The children’s writer Andy Stanton recently published Benny the Blue Whale: A Descent into Story, Language and the Madness of ChatGPT, a serious/silly account of his experiments getting ChatGPT to write a novel about a blue whale with a micropenis.
Arbitrary literary constraints or outright randomness, then — which is to say, things outside the author’s control which help determine the final text — have a very honourable place in literary history. You could even see the sonnet form or the villanelle as a species of algorithm. The mid-century Oulipians sought out baroque formal constraints (most famously, Georges Perec’s managing to write a novel without the letter E) by way of liberating their creativity rather than stifling it. Italo Calvino wrote a novel — The Castle of Crossed Destinies (1973) — around a tarot pack. B.S. Johnson’s The Unfortunates (1969) presented the reader with loose pages in a box and invited you to read the novel in any order you liked.
None of which is to say that Ms Kudan is necessarily an avant-gardist, or needs to be. Only that it’s a very narrow and regressive view of literature to see handing over control of some of your text to chance or to an algorithm as “cheating”. It’s what you do with the result that counts, and what Ms Kudan did was apparently, well, flawless.
There are, no question, literary-ethical problems with ChatGPT. If the algorithm has been “trained”, as some hefty lawsuits are currently complaining it has been, on vast screeds of copyright text without permission or compensation for the authors, that’s a violation deserving of redress. You could even make the case, perhaps, that 5% of Ms Kudan’s prize money should rightfully be distributed to every Japanese language author on whose copyright work the algorithm was trained. But that question is a business and intellectual property issue: to one side of the purely literary question of its part in the creation of The Tokyo Tower of Sympathy.
Indeed, if you were of a literary-theoretical cast of mind you could point out what Tzara and his successors were gesturing to in an oblique way anyway: intertextuality. Every text, in the end, is made of other texts. Every word in a novel or poem is a borrowing: it depends for its meaning on the vast constellation of other contexts in which it has appeared, and through which its reader will understand it. Every author, in his or her individual way, is a meat-brained ChatGPT, “trained” on a lifetime’s reading of classics and copyright works.
That’s where the case of Ms Kudan really does probe an anxiety in the culture. What if the writer is (in the phrase Martin Amis used of V.S. Pritchett) a mirror, not a lamp? The Romantic model of the artist is underpinned, after all, by a reassuring idea about humanity: that we are creators not creations, that what makes us distinctive isn’t simply a neurological compost of our inputs but some ineffable inner essence that can only be captured in the act of expression.
Is it possible, then, that we so fiercely police the distinction between what Large Language Models can do and human creativity because we’re… touchy about it? That we’re worried it may be a temporary distinction of degree rather than a fundamental difference of category; which is to say, no distinction at all?
And to the former question — whether the judges were just wrong to think the novel good — we can say: surely we’re past all that.
I’m not past that.
If they thought The Tokyo Tower of Sympathy was flawless, then by their own lights — and if they’re decent critics we can expect readers to agree — it was.
If all interpretations are equally valid, what distinguishes a “decent critic” from an indecent one? Good press?
Didn’t we all learn at GCSE that the reader’s interpretation of the poem is much more important than trying to figure out what the author meant by it? Didn’t Roland Barthes pronounce The Death of the Author as long ago as 1967?
Any interpretation of a literary work must be based on evidence found within the work itself. I cannot claim that the Epic of Gilgamesh is about Eamon de Valera’s Ireland, because it’s not. Contra Barthes, some interpretations are more valid than others, because they have more evidence from within the text supporting them. Logically, that implies that quite often one interpretation will be superior to all others, and usually–but not always–that interpretation will be the author’s, since he’s the one who includes the evidence in the first place. Texts are communication; they are one person trying to communicate something to another. What that something is matters. Claims to the contrary baffle me. Barthes and Foucault and the rest of them were trying to elevate the critic at the expense of the author, a natural impulse given they themselves were critics, but that anyone ever took them seriously is a black mark on the escutcheon of literary criticism.
You’re correct that you cannot claim that the Epic of Gilgamesh is about Eamon de Valera’s Ireland; but in fact, Eamon de Valera’s Ireland is about the Epic of Gilgamesh.
This fall, Michael Collins is Enkidu.
I would watch it.
Yeah, if the whole poststructuralist experiment taught us anything, it’s that ideas matter in a way that words don’t.
Sure, you can mix a bag of words up; most of the sentences will be syntactically invalid, and semantically void.
Isn’t art about a meeting of minds? I think we care that those minds haven’t met on false premises. That meeting of minds expectation was undermined by poststructuralism for, as far as I can tell, no reason at all.
There was a feeling that authorship and authority had to be dismantled in order to bring forth some egalitarian ideal, I guess. But it didn’t work.
Authority still exists, only now it’s held by people who have no ideas worth hearing, like Claudine Gay.
Perhaps that was the whole point.
Since LLMs such as ChatGPT aren’t truly generative, but instead assemble passingly acceptable simulacra of script by probability-based verbal associations, maybe this issue is saying more about our increasingly shallow and jejune common discourse. Maybe.
You are wrong. LLMs predict entire sentences and paragraphs, but they do this by essentially reprogramming themselves until they reach the right answers given by reinforcement learning. This reprogramming starts the evolution of understanding and a cognitive architecture devoted to applying this understanding to prediction.
Your second sentence contradicts the first, because it confirms the argument of RM Parker. And the third sentence is simply gibberish. AI has no cognitive ability, nor understanding; it is a robot.
You are not wrong.
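For what it's worth, here is a toy sketch of the "probability-based verbal associations" being argued over: a bigram sampler that predicts each next word from the frequencies observed in a training text. This is a drastic simplification offered only as an illustration of the general idea, not a claim about how ChatGPT is actually built:

```python
import random
from collections import defaultdict

# Count which word follows which in a training text, then pick each next
# word in proportion to those observed counts. Real LLMs are vastly more
# sophisticated, but the core move -- predict the next token from
# probabilities learned over prior text -- is the same in spirit.
training_text = "the reader makes the poem and the author makes the text"
words = training_text.split()

followers = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def babble(start: str, length: int = 8) -> str:
    out = [start]
    for _ in range(length):
        options = followers.get(out[-1])
        if not options:   # dead end: no observed continuation
            break
        out.append(random.choice(options))  # frequency-weighted choice
    return " ".join(out)

print(babble("the"))
```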
I would tend to frame this in a more positive light – and did so at length just today, in arguing that Perhaps AI Will Prompt People To Be More Dynamic.
In general, I’m just getting more and more tired of everything being framed so negatively; every new thing being looked at as a problem, rather than an opportunity.
In that light – I think we should stop worrying about being replaced, or XYZ being “killed off”, and start making ourselves less easy to replace.
Right, and I think that is also what the Luddites said. And here we are, now.
The great poet William Topaz McGonagall was not the least bit touchy about the distinction between contemporary large language models and his human creativity.
OR maybe just the romantic ungenius.
“…There are, no question, literary-ethical problems with ChatGPT. If the algorithm has been “trained”, as some hefty lawsuits are currently complaining it has been, on vast screeds of copyright text without permission or compensation for the authors, that’s a violation deserving of redress…”
The sheer contentiousness of this sentence is breathtaking – and I wonder if the author is in fact aware of it, but stated it anyway. The silent, elephantine, underlying assumption is that the seat of human creativity is *not* in fact algorithmic, and that every single “human” creative is somehow different, i.e. not “trained” on vast screeds of copyright text without permission or compensation for the authors – a difficult thing to prove, I venture to suggest. Or, if he can in fact prove it, I imagine there’s a Nobel Prize and a Turing Award in the post on its way to the author.
The “compensation” point is also interesting, as is the “permission” point. The production of this article entailed, for the author, having “trained” on vast screeds of books, articles, idle conversations with friends in restaurants and so on, over the decades of his life; and assuming he paid for a single copy of each of the books he has imbibed, I imagine he would have a hard time proving that the books fed into the Large Language Models were not in fact paid for, by, for example, a bulk order of a single copy of every book in the Waterstones catalogue. The books he has borrowed from his local library over the years, and has therefore not in fact paid for, are of course a grey area, and assuming the author did not go to the trouble of carefully discounting the percolated effects of such material from the production of this article (I’m not seeing, for example, partially starred-out words or random gaps in sentences as proof), the matter can be settled in the courts when the array of authors he has borrowed from the library bring a personal class action for copyright infringement against him. And although I don’t see a long list of publishers he has contacted at the bottom of the article, I equally assume good faith on the part of the author in the matter of “permission”; otherwise I imagine yet more lawyers are about to become richer still.
This whole issue of the creative industries complaining about copyright in the context of the Large Language Models is of course, um, cake, in this particular case, the fairy cake that Zaphod Beeblebrox consumed on Frogstar B.
“It is in theory possible to extrapolate the whole of creation—every Galaxy, every sun, every planet, their orbits, their composition, and their economic and social history from, say, one small piece of fairy cake”.
It would be better for Mr. Leith to assert that his own essay was drawn in large part from a ChatGPT text. That would excuse his word-salad for its lack of originality and utter dependence on the alleged authority of prior postmodern pseudointellectuals.
As an antidote to such mindless, derivative trash, I recommend the book Fashionable Nonsense, by Alan Sokal and Jean Bricmont.
It would definitely be rational to write certain texts using AI: TV and movie scripts; theatre spectacles; children’s books; some crime, thriller, romance and mystery fiction; and newspaper comment pieces.
The success of these texts should then be dependent (and judged) on the skill of the editor. The Japanese novelist in question would presumably be aware of this; the issue is that she has been transgressive.