18 December 2025
Learning through ChatGPT is convenient, fast, and easily accessible. However, it has one major flaw: it is highly ineffective. American scientists demonstrated this by analyzing how we acquire knowledge, revealing a surprising downside of AI in the learning process. The problem, however, lies not in the tool itself but in us, its users.
Artificial intelligence has become a permanent part of our lives. Many of us use it to search for information, from the simple to the complex. Yet while large language models handle easy topics well, they run into serious trouble with more challenging ones. What exactly is the problem? Researchers from the University of Pennsylvania in the USA set out to answer that question.
To find out, they conducted an experiment involving over 10,000 people, each tasked with learning something new, such as how to start a vegetable garden. Participants were randomly assigned a method of acquiring the information: they either used ChatGPT or searched for it with Google. Afterwards, they were asked to share what they had learned by writing a guide for a friend.
After completing the task, the participants who had used AI themselves concluded that their knowledge of the topic was shallower. They admitted that they had put less effort not only into gathering the information but also into preparing the instructions for a friend. Their advice was less reliable and more generic.
Nor was this merely a subjective impression. Independent reviewers, who did not know where the authors of the guides had obtained their information, confirmed the participants' own assessment.
The majority of people asked to read the guides did not want to rely on the materials created with AI. Why? They found them less informative and unhelpful.
“We found these differences to be robust across contexts. For example, one possible reason why LLM users wrote shorter and more generic advice is simply that the LLM results presented them with less diverse information than Google search results,” wrote Prof. Shiri Melumad, one of the study’s authors, in her article published on theconversation.com.
It might seem that the language model simply has shallower knowledge than Google, leaving the user with an incomplete picture and therefore limited understanding. Nothing could be further from the truth. The real reason users retained less knowledge is entirely different.
When the researchers expanded the study and gave all volunteers exactly the same information, whether presented as coming from AI or from Google, those who consulted the language model still learned less. It turns out that the effort we invest in learning is crucial to the process of acquiring knowledge.
If we engage in finding information, analyze it, and extract what we deem essential, we will assimilate knowledge faster than if everything is simply handed to us “on a silver platter.”
Nevertheless, the researchers have an important message for us.
The test authors do not diminish the value of artificial intelligence, as they wrote in their article:
“We do not believe that the solution is to avoid using LLMs, especially given the undeniable benefits they offer in many contexts.”
On the contrary, the experts encourage us to use language models more consciously. Users should know in which areas artificial intelligence is genuinely useful and in which it does more harm than good.
The study’s conclusion is simple. ChatGPT, Gemini, or Perplexity will help with learning, but only for short, simple topics. If you need to go deeper into a subject, you must seek out that knowledge yourself, in other sources.
Read this article in Polish: Korzystali z AI, by się czegoś nauczyć. Google okazał się tu lepszy