Before the release of ChatGPT, the first thing a writer would ask when I said I’m an editor was, “Who’s your favorite writer?” My answer told the writer whether I’d be a good fit as their freelance editor. Since the release of ChatGPT in 2022, however, everyone I tell I’m an editor asks the exact same thing: “What do you think about AI?”

Having studied AI at a full gallop (like many others), I’ve realized I need a clear response so I’m not out of step with the industry or with my personal ethics.

The laws surrounding the use of AI will take the American legal system years to work out, a process likely to be accelerated by a series of lawsuits. Because of this, publishers like Amazon and Taylor & Francis are rushing to protect themselves from legal exposure, principally over copyright, privacy, and intellectual property. To ensure our writers transition smoothly to publication, and to shield them from legal threats, editors need to address the use of AI in our business relationships.

First, some definitions.

OpenAI’s ChatGPT, colloquially called Artificial Intelligence (AI), is built on one of several Large Language Models (LLMs), a class that also includes Google’s PaLM and the models behind Microsoft’s Copilot. OpenAI’s market value has risen at a blistering pace to somewhere between $80 and $100 billion. That valuation is, in large part, a measure of how far investors believe LLMs can go in replacing flesh-and-blood writers.

LLMs are commonly used for language generation: college essays, novels, and journalism such as sports writing. You may already have read an AI-generated article without knowing it.

Writer Maggie Harrison Dupré, in her article “Sports Illustrated Published Articles by Fake, AI-Generated Writers,” flags an insidious problem right in the subtitle: “We asked them about it—and they deleted everything.” Readers of Sports Illustrated felt duped by the venerable publication, and rightly so. Dupré weaves together questions of legality, labor relations, authenticity, and public reputation; in the fallout, Sports Illustrated’s owner, the media conglomerate Arena Group, fired its CEO.

In light of this, writers, editors, publishers and their lawyers, and even software engineers are responding to AI. Copyeditors often use software such as PerfectIt to ensure consistency. PerfectIt creator Daniel Heuman recently published the announcement “Why We’re Not Adding AI to PerfectIt,” in which he distinguishes among four kinds of AI use: Collaborative, Corrective, Extractive, and Generative.

As tools for writing and editing, the first two of these uses pose the least legal threat. Many in marketing already use Collaborative AI for “drafting assistance,” even if that’s something as simple as next-word prediction in Jasper or Copy.ai. Many also use Corrective AI, such as Grammarly, to copyedit their own work. AI might likewise improve the efficiency of editors using consistency software like PerfectIt. So why won’t PerfectIt incorporate AI?

The answer to that is intellectual property and privacy.

If I use Corrective AI to copyedit, I transmit a writer’s manuscript over the internet to an AI, which transmits it back with edits. Unless the company states otherwise, that writing can then be used to train the AI, which is precisely why high-profile authors are suing. In such a case, I’ve not only shared the writing with a third party, violating the writer’s privacy, but also handed the AI company intellectual property to use as training data.

Deploying Extractive AI, which could feasibly analyze a narrative for a developmental edit, represents the same violation of my writer’s trust. “Tell me if there’s a meet-cute in this romance novel,” I might say, submitting a writer’s manuscript to the AI. I can’t see it any other way: asking Extractive AI to search for narrative elements violates privacy and shares intellectual property.

My editing contracts already include a section limiting my ability to share a writer’s manuscript; permission is required. That section now explicitly states that the editor will not share the writer’s manuscript with a third party by using AI.

For Generative AI, the problem is the need for a responsible party.

A wise French philosopher once argued that the author is an invention of the legal system. A flesh-and-blood writer must be responsible if things like guilt and punishment are to work, so the thinking goes. Even before Henry Miller or Allen Ginsberg, publishers had a long tradition of printing for a public audience and, therefore, fighting individuals and the government in court over obscenity, privacy, libel, and copyright. With this history in mind, publishers and their lawyers are already changing contracts and taking public positions limiting the use of Generative AI for market writing.

Publishing giant Hachette, for example, has taken a position against this type of “machine creativity.” This same culture doesn’t hold, however, in the creation of what we might call institutional writing.

Institutional writing is writing that is not sold to the reader. America’s corporations and institutions produce massive volumes of internal and external publications, the kind of communication that rarely ends up in court: memos, case studies, announcements, and websites. This type of writing uses an institutional voice, words written without a specific writer. Absent the pressure of the legal system to designate a flesh-and-blood writer who might be censured, fined, fired, or jailed, the use of Generative AI appears far more acceptable for institutional writing (despite dangers, such as revealing company secrets).

Here goes: As an editor, I assure my writers I will not use AI in the editing of their manuscripts. Reciprocally, my writers assure me, the editor, that they have not used Generative AI to produce the language of their market writing. At minimum, writers must disclose any use of Generative AI, for all kinds of writing, as recommended by the Authors Guild and the Modern Language Association. With these contractual elements in place, both writer and editor will avoid most legal problems.

Not that I’m a lawyer, but let’s set this enormous legal mess aside as beside the point. Let me tell you the real reason I’m changing my contracts and updating my business.

It is a gut reaction, one I feel any time someone proposes using AI to write copy, whether for market or institutional writing. When I watch a video demonstrating the sausage-making process of using AI to write an article, I feel the floor dropping out from underneath me.

If the writer doesn’t care to write, then why should the reader care to read?

My pleasure in editing comes precisely from the care put into the writing of a manuscript. Writing a novel or memoir takes dedication; completion alone is admirable. Some writers recognize their novel as their singular passion, one into which they will pour decades of work. When I developmentally edit a family biography written as a memorial to a deceased parent, an intimate bond erupts from the editing like sparks from an anvil. This connection is a miracle of human making.

The pleasure of working in the literary arts is collaborating with literary artists. Why would I give that up to polish machine-generated pablum? The legal concerns are enough to keep my editing from crossing any lines, but it is only my connection with flesh-and-blood writers and their very own words that will keep me editing for life.

So, who’s your favorite writer?

 …

This article was originally published in Freelancer by the Editorial Freelancers Association in April 2024 and is republished here with permission.