Truth & Goodness
26 April 2026
Think of your favourite novel. Not just any book, but the one that caught in your throat. The one in which the author touched something you had wanted to say for years but had never found the words for. You close the book. You stare at the ceiling for a while. You think: someone must have lived through this. Then you discover that no human being stood on the other side. There was only an algorithm, generating word after word with all the romance of a machine. What do you feel then?
In spring 2026, Hachette withdrew Mia Ballard’s novel Shy Girl from distribution after The New York Times presented evidence suggesting that a substantial part of the text had been generated by artificial intelligence. Ballard denied using AI herself and said an editor she had worked with on the first, self-published edition may have used it. Hachette did not wait for the matter to settle. Reports in The Guardian and People describe the withdrawal as one of the most visible AI-authorship controversies yet in mainstream publishing.
The scandal split the industry. Some saw the publisher’s decision as justified and inevitable. Others, including John Degen, chair of the Writers’ Union of Canada, who had just published a crime novel with a “Human Authored” sticker on the cover, argued that the best detector of bad writing remains a good editor, not another algorithm. AI-detection tools make mistakes regularly, and they now have plenty to examine. Authors have never been more productive.
In 2025, the ebook platform Kobo rejected nearly 45 percent of submissions to its self-publishing programme, with reports linking many refusals to suspicions of AI-generated work. Kobo’s own publishing guidance also lists wholly or partly AI-generated content among reasons a book may fail its quality standards. A year earlier, the problem barely existed at this scale. At the same time, self-publishing continued to swell, creating a market in which speed, volume, and imitation can easily crowd out craft.
George Orwell described the mass production of books 77 years ago. In the dystopia of Nineteen Eighty-Four, the Ministry of Truth produced literature on an industrial scale with novel-writing machines. Those devices assembled accessible plots aligned with the Party line and supplied readers with simple romances and thrillers.
For Orwell, literature served as one of the instruments of control. Today, the lesson is different: we can now scale not only routine processes but art itself. This is not primarily about ideology. It is about a business model. Tools already on the market promise to help users create full-length novels with almost no understanding of good writing. Orwell’s line from Nineteen Eighty-Four serves as its own commentary:
Books were just a commodity that had to be produced, like jam or bootlaces.
The French literary theorist Roland Barthes declared “the death of the author” in his famous 1967 essay of the same title. He argued that meaning arises in the reader’s mind, while the author merely acts as a scriptor, weaving existing languages and codes into a new configuration.
A few years earlier, Umberto Eco had written that every text is an open work, and that different readers draw slightly different meanings from it. Yet in Eco’s theory, that openness has limits. The Italian author argued that the creator designs a field of relations that organises the possible interpretations of a work. The text has something to communicate to the reader. Eco called this the intentio operis — the intention of the work, independent of the author’s private will.
Here the two theories part ways. Barthes says the author is dead and emphasises the reader’s freedom. Eco reminds us that the text is not passive. It has a structure that guides reading. Someone designed that structure for a reason.
Let us leave theory aside for a moment and look at the facts. They suggest that AI tools do more than combine words and plotlines efficiently. They can also imitate tone and moral posture with unsettling precision. They know which phrases make a text sound warm, sincere, or brave. They can generate prose that feels like an intimate confession or hard-won wisdom. They do it so effectively because they have analysed millions of texts in which sincerity looked exactly like that.
In 2025, researchers at Ruhr University Bochum found that both medical professionals and humanities scholars could identify ChatGPT-generated texts in medical contexts, but they relied mainly on linguistic and stylistic features rather than content knowledge. Other studies have also shown that AI-generated answers often appear more linguistically sophisticated than student answers, which can make human judgment less stable when style improves.
What happens to trust? It falls only when we learn that a machine, rather than a person, stands behind the text. Without that knowledge, we may trust the algorithm as we would trust an accomplished author. That means the problem does not lie simply in textual quality. It lies in our need to know who stands behind the words. We do not read only sentences. We look for the person behind them.

If an algorithm can counterfeit a moral voice — warm, honest, brave — then our readerly instinct becomes vulnerable to a forgery we may not recognise. And we can do little about it, because the tool will always move faster than detection, especially in skilled hands.
Ghostwriting has existed for centuries. Alexandre Dumas, as critics already pointed out in the 19th century, had dozens of assistants writing under his name. That is one reason he published hundreds of novels, including The Count of Monte Cristo and The Three Musketeers. Does knowing that he relied on “ghosts” make those books less valuable? Or should we ask the more provocative question: if Dumas were writing today, would he not be writing prompts to save himself a great deal of money and time?
In defence of the author, one can say that Dumas had a plan, a voice, and a vision, while his assistants carried out his intention rather than their own. For centuries, literature has been an encounter with another human being. When we read Kafka, we meet Kafka: his fears, his Prague, his father. A text remains the trace of another person’s consciousness. When that consciousness disappears, we are left with an imitation generated from other texts, sentences, and quotations.
The catch is that authors fluent in AI will teach machines their thoughts, emotions, and intentions. They will do exactly what Dumas did two centuries ago when he instructed his assistants.
At the beginning, I asked you to think about your favourite novel. The one that keeps you awake. We know the feeling. It is after 10 p.m.; you need to get up early. You read one chapter, then another, and then the book swallows you whole. A few hours later, you finish. You put it on the bedside shelf. You lie there, staring at the ceiling, your thoughts tangled beyond sleep.
Now imagine two scenarios. In the first, you learn that the author was a human being who spent three years writing the book. In the second, you learn that an algorithm generated it in a day. The emotion was identical. The words were identical. And yet something changed.
The addressee of your empathy changed. In the first scenario, the emotion came from an encounter. In the second, it came from a well-calibrated tool. A human being did not move you; at most, an efficient engineer of language did.
This distinction may seem snobbish. It may also be the last line of defence for the very thing that makes literature matter at all: the belief that on the other side of the words stands someone who has lived deeply enough to know something. And that when we read, we do not merely consume content. We touch another consciousness.
The Mia Ballard case is not a story about whether AI writes well. It does, and it will write better, imitating — perhaps even surpassing — accomplished authors. The scandal amplified by The New York Times proves something else: we still want to know who stands behind the word. And that knowledge changes everything.
Perhaps the author never really died. Perhaps the author simply waited for the algorithm to replace him, so that AI-written books could finally show us what we were missing.
Worth reading: In Your Ear, Not on the Shelf: Audiobooks Are Changing the Book Market
Read this article in Polish: Kto to naprawdę pisał? Literatura z procesora AI