AI Added Sentences and Entire Paragraphs. Now Wikipedia Has a Problem
08 March 2026
Wikipedia is facing a problem that many tech experts long feared. More and more translators have begun outsourcing their work to AI models, then publishing the results with little or no human oversight. The consequences are serious: critical errors, fabricated passages, and a growing loss of trust in one of the internet’s most important knowledge resources. At the center of the debate is Wikipedia’s AI problem, and a difficult question about who is still responsible for the truth online.
The plan seemed simple: make translation fast, easy, and efficient. Instead, the strategy backfired. The reliability of Wikipedia articles plummeted once large language models took over the service’s massive translation workload. Tools designed to assist human editors ended up damaging the platform’s integrity in practice.
Investigators found glaring artificial-intelligence errors in a portion of the translated texts. The models inserted sentences of their own without providing any citations. Even more troubling, the AI wrote entire paragraphs that never existed in the original source, pulling fabricated information from unrelated contexts. The pattern shows how Wikipedia’s AI problem grows when speed takes priority over editorial rigor.
When editors began rigorously verifying the translated content, they uncovered another serious blunder. An article about the French royal family cited a specific book on aristocratic lineage, down to a precise page number. When researchers checked the reference, however, they discovered that the indicated page contained no mention of French history at all. Instead, the book discussed a completely different subject.
Reports from Wikipedia indicate that many editors involved in these translations have a poor command of English. They often fail to add necessary links, miss blatant errors, or skip the final check of the AI-generated output entirely. These individuals are directly linked to a charitable organization known as the Open Knowledge Association (OKA).
The organization admits on its official website that it uses language models to automate a significant portion of its translation workflow. The real issue, however, lies in its reliance on contractors from the so-called Global South. Managers instructed these workers to copy and paste texts into popular AI models such as ChatGPT or Gemini. The method produced fast, cheap translations, but it sacrificed accuracy completely.
To protect the credibility and quality of its content, Wikipedia has blocked a segment of these workers. The ban specifically targets individuals who used AI for translations and received four quality warnings within a six-month period. Experts now audit the work of the remaining translators regularly and stringently.
To fight the “AI ghost,” translators must now run their completed drafts through a separate model using a specialized “comparative prompt.” The check flags discrepancies, omissions, and inconsistencies relative to the source text. Preliminary results show that the method catches many issues before they reach the public. Even so, as more errors surface, the internal consensus is that Wikipedia’s AI problem grows more complex by the day.
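In practice, such a check can be as simple as sending both texts to a second model and asking it to list mismatches. Below is a minimal sketch, assuming an OpenAI-style chat API; the prompt wording, model name, and example texts are illustrative assumptions, not the actual tooling used by Wikipedia or the OKA.

```python
# Minimal sketch of a "comparative prompt" audit, assuming an OpenAI-style
# chat API. Prompt wording, model name, and sample texts are illustrative
# assumptions, not the real pipeline described in the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COMPARATIVE_PROMPT = """You are a translation auditor.
Compare the TRANSLATION against the SOURCE sentence by sentence.
List every discrepancy, omission, or sentence in the translation
that has no counterpart in the source. If the texts match, reply: OK.

SOURCE:
{source}

TRANSLATION:
{translation}
"""

def audit_translation(source: str, translation: str) -> str:
    """Ask a second model to flag content the first model may have invented."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any capable chat model would do
        messages=[{
            "role": "user",
            "content": COMPARATIVE_PROMPT.format(
                source=source, translation=translation
            ),
        }],
        temperature=0,  # deterministic output suits an auditing task
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    report = audit_translation(
        source="Ludwik XIV panował we Francji w latach 1643-1715.",
        translation="Louis XIV reigned in France from 1643 to 1715. "
                    "He was also famous for his vast stamp collection.",
    )
    print(report)  # should flag the invented sentence about the stamps
```

Keeping the temperature at zero makes the auditor’s output repeatable, which matters when the same draft may be re-checked by several reviewers.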
“We want to ensure that technology serves as a bridge, not a barrier to knowledge,”
– wrote Jonathan Zimmermann, founder and president of the Open Knowledge Association, in an email to the editorial staff at 404media.co.
The mistakes produced by artificial intelligence are a stark reminder that language models still need a human pilot. Without editorial judgment, fact-checking, and accountability, even the most advanced systems can distort the public record. Wikipedia’s AI problem shows that machines may be able to translate words, but they still cannot take responsibility for the truth.
Read this article in Polish: AI dopisywało zdania i całe akapity. Teraz Wikipedia ma problem (“AI added sentences and entire paragraphs. Now Wikipedia has a problem”)