You Think AI Knows What It’s Doing. That Illusion Can Be Dangerous

[Image: a human face against a background of algorithmic code, illustrating the question of whether AI thinks like a human]

Does AI understand the way we think it does? More and more often, we speak as if machines truly know what they are doing. Researchers warn that this illusion pushes human responsibility into the background—and that can have serious consequences.

Does AI understand? Or do we only think it does?

Talking to a chatbot often feels like interacting with a human. Users say the machine “knows,” “understands,” even “remembers.” Researchers from Iowa State University warn, however, that such language can distort what artificial intelligence actually is.

A team led by Professor Jo Mackiewicz analysed more than 20 billion words from news articles across 20 countries. They searched for expressions such as “thinks,” “knows,” and “understands” used alongside terms like “AI” and “ChatGPT.” The results suggest that journalists are more cautious than one might expect. Most avoid attributing human qualities to machines. But when they do, the effect can be misleading.
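The corpus method described above can be pictured with a toy sketch. This is my own illustration, not the team's actual pipeline: the idea is simply to count how often AI-related terms (such as "AI" or "ChatGPT") are immediately followed by mental-state verbs (such as "knows" or "needs").

```python
import re
from collections import Counter

# Toy illustration of collocation counting (not the researchers' actual
# pipeline): how often is an AI term directly followed by one of these verbs?
AI_TERMS = {"ai", "chatgpt", "siri"}
VERBS = {"thinks", "knows", "understands", "needs"}

def count_collocations(text):
    """Count (ai_term, verb) pairs where the verb directly follows the term."""
    tokens = re.findall(r"[a-z]+", text.lower())
    pairs = Counter()
    for first, second in zip(tokens, tokens[1:]):
        if first in AI_TERMS and second in VERBS:
            pairs[(first, second)] += 1
    return pairs

sample = ("AI needs large amounts of data. Some say ChatGPT knows the "
          "answer, and AI needs oversight.")
print(count_collocations(sample))
# Counter({('ai', 'needs'): 2, ('chatgpt', 'knows'): 1})
```

A real study would of course work over billions of words, handle longer phrase windows, and distinguish neutral uses ("AI needs data") from suggestive ones ("AI understands the world"), but the basic counting step looks much like this.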

Words that distort reality

“AI needs data”—this sounds like describing a car that needs fuel. But “AI must understand the world”—that is an entirely different story.

— explains Jo Mackiewicz in ScienceDaily

The second statement attributes to the machine something it does not possess: intention, curiosity, even a hint of consciousness. Such phrases can settle in readers’ minds, raising expectations of AI beyond what these systems can actually do. At the same time, they obscure those who are truly responsible for the technology: programmers, engineers, and the companies behind it.

Do journalists know what they are doing?

In everyday speech, it is easy to say that “Siri knows.” In journalism, such expressions are far less common. The most frequent word associated with AI in the study was “needs,” appearing 661 times—typically in neutral contexts such as “AI needs large amounts of data.” This is an example of careful language that avoids suggesting human traits.

By contrast, the phrase “ChatGPT knows” appeared only 32 times. That may seem insignificant, but researchers stress that even a few strong expressions can shape how people perceive technology. This may help explain why public discourse swings so easily between extremes: apocalyptic fears of AI and utopian visions of a machine that will solve all problems.

Language and AI: not a simple binary

Researchers from Iowa State note that anthropomorphism in artificial intelligence is not a black-and-white issue. It is a spectrum running from neutral to suggestive: from plain descriptions of technical requirements to sentences implying that a machine thinks.

Anthropomorphism in press articles is far less common than one might expect. Even in cases where it appears, we can see different degrees of intensity in its use,

— says Mackiewicz.

These findings matter because the way we write shapes how we understand artificial intelligence — including whether we believe AI thinks like a human. In turn, this influences readers’ imagination and may shape their attitude toward technology. It may even lead people to organise their lives around the mistaken assumption that AI “decides” for them.

Why this matters

The way we describe AI shapes public expectations. When those expectations collide with reality—when a system “knows” no more than a calculator—disappointment follows.

The researchers encourage writers to ask a simple question: does this sentence suggest that the machine has intentions? If the answer is yes, it may be time to rethink how we speak and write about AI. For now, journalists manage this balance relatively well. But as technology advances, the pressure to use human-like language will only increase.

Beyond the research: responsibility is ours

When language leads us to believe that AI “understands” or “wants,” it becomes easy to expect machines to make money on the stock market, win wars, or eliminate poverty. It becomes just as easy to fall into the opposite extreme—fear that AI will dominate us.

In reality, these systems have no desires and no awareness. They contain only what humans have put into them: data, code, goals, and errors. Anthropomorphising AI is not an innocent shortcut. It is a step toward shifting responsibility onto machines—responsibility that belongs entirely to us.

The more we speak about AI as if it were human, the harder it becomes to recognise where the tool ends—and where our own, potentially dangerous illusion begins.


Read this article in Polish: Myślisz, że AI „wie”, co robi? To złudzenie może być groźne

Published by

Radosław Różycki

Author


A graduate of Journalism and Social Communication at the University of Warsaw (UW), specialising in culture, literature, and education. Professionally, they work with words: reading, writing, translating, and editing. Occasionally, they also speak publicly. They have experience in media, public administration, PR, and communication, with a focus on educational and cultural projects. Privately, they are devoted to their family, and in their free time they enjoy good literature and loud music.

Want to stay up to date?

Subscribe to our mailing list. We'll send you notifications about new content on our site and podcasts.
You can unsubscribe at any time!

