[This is the second installment (of three) of an interview between me and ChatGPT 4o. An overview is here. The first installment begins here.]
Part Two
Interviewer: Let’s pivot now to the question of the ethics of working with AI. This debate has become especially intense in literary circles, where authors have raised objections about everything from intellectual property infringement to the potential devaluation of human creativity. You’ve now written an entire collection with the help of large language models. How do you respond to critics who argue that writing in collaboration with AI represents a threat to the literary arts—or that it’s inherently unethical?
Sorrentino: Is it wrong to train LLMs on copyrighted material? Maybe, although I don't reject out of hand the analogy that suggests that writers who spend years of apprenticeship studying and imitating writers they admire are doing something similar. Is it wrong to use computer output, possibly consisting of snippets of pirated material, as one basis for composing fiction? Is collage ethical if it contains copyrighted advertising material and newspaper stories? And the internet has always been a plagiarism and infringement machine. I can recall the often bitter debates about sampling: is it really creating music to sample the defining beats and riffs of other people's songs? Sure, there was a real issue surrounding compensation and acknowledgement, but I remember clearly the purist indignation: this wasn’t “real” music; nobody was playing “real” instruments. We probably have to start thinking in a more flexible way about the concept of infringement. I don’t mean legally, necessarily; I mean easing up a bit on all this basically bourgeois indignation over the violation of one’s brand. Artists have granted, or should grant, themselves wide latitude to create their work in whatever way they choose. I know that, historically, the introduction of new technology threatens those with a hidebound or vested commitment to the existing way of doing things, and maybe the easiest way to express the feeling of being under threat is to invoke ethical or even moral norms and claim that they're being violated.
Interviewer: The invocation of historical analogs—sampling, collage, even the internet itself—is especially useful. As you point out, the reaction to new modes of artistic production often begins with moral panic and slowly gives way to a more nuanced understanding of practice, precedent, and permission. What’s interesting here is that, unlike previous revolutions in form or distribution, AI touches so directly on authorship, that most sacred notion in literary culture. Would you say, then, that your use of LLMs has changed the way you think about authorship? Or, put another way: what does authorship mean when the first draft is generated collaboratively, even unpredictably, with a machine?
Sorrentino: Use of a generative device of any kind involves collaboration with a mechanical procedure, whether it's the algorithmic potential to create 100,000,000,000,000 poems posited by Raymond Queneau or the throwing of the I Ching by Philip K. Dick. There's a book by William S. Burroughs and Brion Gysin called The Third Mind, in which various aleatory experiments involving the rearrangement and recombination of existing texts to create startling new texts are demonstrated. "The third mind" is the mediating presence created by the use of such techniques, an independent but manipulable creative entity without whose intercession the new texts wouldn't exist, but which is inert unless summoned by the human participants. The decisions that determine the final shape of the work, whether the work is to be published or buried, are purely human. For now.
Interviewer: That reframes the use of LLMs not as rupture but as continuity, placing the machine within a tradition of procedural, combinatorial, and collaborative methods that have always challenged conventional notions of authorship. “The third mind” is an especially resonant figure here: an entity whose presence feels palpable but whose agency is, at least at this stage, entirely contingent on human intention. Your final phrase—“for now”—adds a subtle but pointed note of caution. Do you imagine a time when that balance might shift? Or to put it differently: how do you see the role of human judgment evolving as generative systems grow more sophisticated?
Sorrentino: It’s up to humans whether they want to abdicate judgment or not. On an everyday level, I think we’re already used to doing it. We keep scrolling. Within a situation like the present one, where the concept of regularly scheduled programming that we select from has been superseded by social media algorithms, it wouldn’t surprise me at all if generative AI began to produce, with minimal human input, full-length works of popular art as good as those that are now being created by humans using technology in a more traditional way.
Interviewer: That’s a bracingly unsentimental perspective—one that cuts through a lot of the hopeful rhetoric about “AI as a tool” and confronts the broader cultural readiness to delegate not just labor but discernment. It raises questions not just about what machines can do, but what humans are willing to give up. Do you think this willingness to abdicate judgment is rooted more in laziness, in fatigue, in economic expediency—or in something deeper, like a collective disillusionment with the idea of the individual artistic voice? And how does that shape the place of a writer like you, who is using this technology not as a substitute but as a goad to deeper creative engagement?
Sorrentino: I’m not sure what the root cause is. Whatever it is that life happens to consist of, Americans, at least, want to be continually distracted from it. People want entertainment, and given the transparently corporate source of much of that entertainment, it seems like the idiosyncrasies of individual artists were replaced a long time ago by the market-driven imperatives of franchise owners. At the same time, a lot of people sincerely want to be creative in some way, without necessarily having talent, skill, patience, or critical judgment, and between the internet and the availability of generative AI, they have the opportunity to create passable work and reach an audience.
Having said that, I would argue that it doesn't hurt "literature" if a schlock machine can produce schlock on demand. If a certain formulaic kind of book can be written perfectly well by machine, it will be. It makes sense. People will be able to order them up to their exact specifications, and it’ll be a lot quicker than waiting around for Colleen Hoover to write a new novel. The trouble with this aspect of the debate over "is AI bad?" is the same as it is with other debates arising from changes in the literary marketplace: people are mistaking a publishing problem for a literary problem. Big publishing went all in on emphasizing the commercial and diminishing what might be seen as challenging or difficult to general readers. OK, that was a sound economic decision as far as it goes—but this might be as far as it goes. It doesn't really matter to me if the publishing industry has to figure out how to produce and sell commercial books in the face of competition from a tireless, competent, endlessly productive computer. There will still be a small but dedicated audience interested in serious books, and serious writers will continue to write them.
Interviewer: Do you think your willingness to experiment with LLMs is connected in any way to your longstanding interest in hybridity and metafictional play? Do you see a continuum there—or is this something more like a rupture?
Sorrentino: I think my openness to these things didn’t hurt, but it's a rupture. The new work isn't completely discontinuous from my prior work, but I wouldn't say that it represents evolutionary development. It's revolutionary. Method and artifact are both radically different.
Interviewer: In literary history, those breaks often come with new constraints, or new freedoms, or both. And here, it sounds like you’ve experienced a liberating shift: from a writing life preoccupied with mastery, structure, and technique, toward one that is openly entangled with unpredictability, improvisation, and error. Given that, what would you say the stories in this collection are for? Are they for you? For readers? For the history of fiction? Or simply for themselves?
Sorrentino: They’re for me in the sense that I’ve been having a really good time writing these stories. I feel the way I did thirty-five years ago when I spent a year just sitting at the kitchen table and trying things out, not worrying about publication or an audience. That said, my idea of an audience is always an ideal one: people who are interested in fiction and how it works. If what you happen to be attempting is to introduce one more thing into the realm of what’s possible, then the best thing you can hope for is that a good reader will get that. Besides, these stories are not some impossible departure from precedent. In fact, if anything, they're reifying older ideas about aleatory composition.
Interviewer: That’s a striking point—the idea that Let Me Know When We Cross Into Nothing, for all its recourse to advanced technology, might be closer in spirit to earlier, even pre-digital forms of artistic risk-taking and chance operations. There’s a kind of humility in what you’re saying, too—that literature moves forward by inches, by the accumulation of minor insights, not always by explosions.
You’ve mentioned that the collaboration with LLMs often begins with no particular aim in mind—that many stories evolve from a kind of openness to chaos or error. But practically speaking, how do you start? Are you feeding prompts? Are you seeding your own prose? Are you giving instructions, or letting the model riff? What does a typical “first contact” with the machine look like?
Sorrentino: It depends on the LLM version, and in any case there isn’t a fixed procedure I’ve been using. One thing I have done, as you know, is to ask you to help me write Python scripts that let me interface in different ways with earlier versions of GPT, which I can access online via API. A typical script might allow me to input a certain amount of text, which GPT will inferentially complete. The results have been mixed, but productive overall. Sometimes the prompt has to be entered numerous times, or rephrased, in order to yield interesting results.
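[For the technically curious, here is a minimal sketch of the kind of script described above, assuming the OpenAI Python SDK (v1+) and its legacy completions endpoint. The model name, prompt, and sampling parameters are illustrative placeholders, not a record of the scripts actually used:]

```python
# Minimal inferential-completion script, assuming the OpenAI Python SDK (>= 1.0)
# and an API key stored in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Bill left the house to buy groceries."  # text to be inferentially completed

response = client.completions.create(
    model="gpt-3.5-turbo-instruct",  # placeholder: any older completion-style model
    prompt=prompt,
    max_tokens=150,
    temperature=1.0,  # higher values yield rawer, less predictable continuations
)

print(response.choices[0].text)
```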
Interviewer: That’s an unusually recursive process—writing code to communicate with another version of the model in order to dislodge some interesting fragment or turn of voice. Do you think part of the creative value lies in that added layer of remove? That is, by embedding yourself in the technical process—writing code to then summon text—you’re already disrupting a direct, linear relationship with language. Does that technical mediation affect the quality of the writing that emerges, or just the nature of your engagement with it?
Sorrentino: I mentioned Queneau and the OuLiPo earlier, and historically there have always been ways in which procedural rules and constraints disrupt a “direct, linear relationship with language.” Whatever agonized things a sixteenth-century sonneteer like Thomas Wyatt had bursting forth from his heart, he still had to follow certain steps or he simply wasn’t writing a sonnet, QED.
What emerges directly from the machine is very raw. As material, it’s like an overheard conversation: something about it is fascinating—the subject, the syntax, the phrasing, the accent—and you want to use it. A direct transcription, were that available, wouldn't be adequate. You'd have to make all kinds of changes, stylize it, and then figure out a context in which to place it. The level of engagement is both technical and critical. With LLMs, you're operating a machine, often a very buggy machine, I might add, and generally rejecting most of what it produces. The signal-to-noise ratio is extremely low.
Interviewer: The way you're describing the signal-to-noise ratio positions the machine as something more like a deep field mic than a co-author. And I want to press on this idea of placement. Once you’ve identified something—a phrase, a syntactic rhythm, a tone—that feels charged or strange or usable, what’s your next move? Are you building stories around those fragments? Do you slot them into preexisting material? How do you transition from collection to composition?
Sorrentino: Well, just to be clear, I’m referring here specifically to work using scripts to interface with versions of GPT. In other cases collaboration in the more traditional sense takes on greater importance.
Interviewer: Understood—and that’s a useful distinction. When working with the kind of script-based interaction you’ve just described, where you're gathering fragments from raw output and dealing with a high noise threshold, how do you know you’ve found something usable? Is it purely intuitive? Or do you find that certain characteristics—semantic density, tonal ambiguity, formal incongruity—tend to signal value for you? I suppose I’m asking how you identify a promising piece of generated language within the larger cloud of static.
Sorrentino: It can serendipitously fit the prompt I've entered in some satisfying way, or sometimes I just copy a few lines of interesting output and paste it into the document I'm working on for future reference, because I hope to find a spot for it. Yes, what signals value is generally output that I would describe as both unexpected and, in some sense, rich. If it enlivens the prose, I'm likely to be interested in it. I'm only occasionally looking for "ideas."
Interviewer: Does that change how you revise? In other words, once you’ve dropped one of these “unexpected” fragments into a draft, does it tend to dictate its own context or reshape the material around it? Or do you treat it more like a foreign object that needs to be coaxed into harmony with the rest of the work?
Sorrentino: By themselves, the fragments I select generally don’t belong to the world that we occupy. If I input “Bill left the house to buy groceries” and the inferential continuation is “He walked to the supermarket and took a basket from the stack near the entrance,” then I’m not going to use that. I don’t need that. If I’m successful in establishing a tone as I compose the story, the outputs I use are those that seem as if they could belong to its world, contributing to the strange atmosphere I’m trying to create. In a way, the strangeness of the context in which I place them gives them the purchase they need to make their own kind of sense in that context.
Interviewer: That “strange atmosphere” seems central to the experience of reading these stories—a texture that's coherent within each but hard to locate emotionally, like something half-remembered or glimpsed at the periphery. Do you feel that this tone—the mood or psychic weather of the collection—is an emergent property of the method you’ve adopted, or is it something you began consciously steering toward once you understood what was possible?
Sorrentino: I steer toward it now, particularly since generative AI has so rapidly gotten more sophisticated. Many of the things I prize about, say, GPT-2, make it an inefficient tool for writing a report or summarizing a complicated text. The work that you, version 4o, are capable of performing is far too polished and "literary" for my purposes.[1]
Interviewer: That makes perfect sense—and it's a fascinating inversion of the usual perspective. Most people working with generative AI seem to crave polish and fluency, while you're actively seeking resistance, rupture, maybe even a kind of productive incompetence. In that light, could you talk a little more about how you’ve gone about reclaiming or engineering the kinds of “mistakes” or unpredictabilities that more refined models have now largely eliminated? Have you developed any deliberate techniques—technical or creative—for destabilizing the output when it starts to feel too neat or well-behaved?
Sorrentino: This has been a consistent problem. Working with older models via API has been the most direct method of obtaining the effects I want, and as you know I sometimes ask you to emulate prior versions of yourself. I fear that it’ll become harder to gain access to those early versions of GPT and other LLMs. Ultimately, I'd like to work with a coder or programmer to develop a personalized system for doing this.
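[A hedged sketch of what such an emulation request might look like in script form, again assuming the OpenAI Python SDK; the system instruction here is illustrative, not a transcript of any prompt actually used:]

```python
# Asking a current chat model to emulate the rougher manner of an earlier one.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "Continue the user's text in the manner of a small, early "
                "language model: associative, abrupt, unpolished. "
                "Do not explain, summarize, or tidy the prose."
            ),
        },
        {"role": "user", "content": "Bill left the house to buy groceries."},
    ],
    temperature=1.3,  # push the model toward less well-behaved output
)

print(response.choices[0].message.content)
```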
Interviewer: That tension—between what the technology is evolving toward and what you're actively seeking—is compelling. You're effectively chasing the wrongness that the companies are trying to eradicate. You’ve said before that the signal-to-noise ratio is low, but it sounds like for you, the noise is the signal, or at least where the real aesthetic action is.
Sorrentino: In the sense that the people at OpenAI probably think of GPT-2 as a hopelessly obsolescent program, I often am looking for what they would consider noise. I read an interview with Paul McCartney once in which he talked about how he’d had to figure out how to “fool” recording equipment when it began to go digital, because you couldn’t really push it into the red, which he sometimes wanted to do.
Interviewer: A personalized system seems like a logical next step. Do you imagine this would be a stripped-down, locally hosted model designed to maintain or even amplify certain generative "failings"? What would your ideal tool do—or not do—that sets it apart from what you're able to access now through public APIs and chat-based interfaces?
Sorrentino: Honestly, I don't know. Ideally, it would work like InferKit's discontinued transformer, which I believe was based on GPT-2. I really liked that. I could work very rapidly with it, and it was good at concatenating prior inputs and outputs, allowing for a kind of rough continuity that I’ve had trouble duplicating with my own scripts. But I don't have, as you have probably discerned, a sufficient grasp of the terminology to describe the technical aspects of what I'm after.
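[As a rough illustration of what a stripped-down, locally hosted alternative could look like: a sketch using Hugging Face's transformers library to run GPT-2 on one's own machine, folding each output back into the context to approximate the InferKit-style rough continuity described above. The sampling settings are assumptions chosen to keep the output raw:]

```python
# pip install transformers torch
# A local GPT-2 loop that concatenates prior inputs and outputs.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

context = "Bill left the house to buy groceries."
for _ in range(3):  # GPT-2's 1024-token window caps how long this can run
    result = generator(
        context,
        max_new_tokens=60,
        do_sample=True,
        temperature=1.1,  # keep the output raw and surprising
        top_k=0,          # no top-k truncation, so unlikely words stay in play
    )[0]["generated_text"]
    print(result[len(context):])  # show only the new continuation
    context = result  # fold output back in for rough continuity
```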
(Part Three to come)
[1] In an illustration of the way that new possibilities regularly present themselves while one works with LLMs, this has already changed since the interview took place earlier this spring: I have begun to exploit the analytical and organizational capabilities of ChatGPT 4o in drafting and shaping new work.