Science
Ten Minutes of Conversation Is Enough: This Is How AI Changes People's Minds
06 January 2026
Chatbots were supposed to be just tools. However, research shows they can influence our views—including political and social ones. Worse still, they do it more effectively than most humans. This raises a critical question: Are we still in control of AI, or is AI starting to control us?
AI chatbots have permanently entered our lives. Conversations with a machine pretending to be human can be helpful, but they can also make life difficult. Just a few minutes of talking to a chatbot can be enough to change the way you think about major issues. This isn’t just theory; it’s backed by hard data from a massive new study.
It turns out that machines can sway our opinions—including political ones. This is according to research from experts at Stanford University and the UK AI Security Institute. The study was conducted due to growing concerns that AI chatbots could be used for fraud or grooming, particularly involving minors.
The experiment involved nearly 80,000 Britons who interacted with 19 different artificial intelligence models.
During the conversations, volunteers and AI chatbots exchanged an average of seven messages over about ten minutes. The topics varied, covering issues like the “cost-of-living crisis and inflation” as well as “public sector pay and strikes.”
Participants spoke with a model specifically designed to persuade them to take a certain stance. Before and after the conversation, the participants stated whether they agreed with a series of public opinion statements.
The results were startling. The most persuasive responses came from AI that appeared well-informed: when a chatbot cited facts and evidence, its persuasive power increased significantly.
However, there is a catch. Models that relied most heavily on “hard data” were more likely to provide inaccurate information than more cautious ones.
The study, published in the journal Science, shows that the biggest jump didn't come at the start but later, when the models were further fine-tuned. That was when their ability to persuade spiked.
Researchers trained the chatbots with a reward mechanism that favored the most influential responses. In practice, the system learned to speak in whatever way swayed people most effectively.
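For readers curious what that kind of optimization looks like in practice, here is a minimal, purely illustrative sketch in Python. Nothing in it comes from the study itself: the function names, the best-of-n selection strategy, and the random stand-in reward are assumptions made only to show the general idea of rewarding the most persuasive reply.

```python
# Hypothetical sketch: generate candidate replies, score each with a
# "persuasion" reward (a stand-in here), and keep the highest-scoring
# reply as the training target. Illustrative only; not the study's code.
import random


def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    # Stand-in for sampling n replies from a language model.
    return [f"{prompt} [candidate reply {i}]" for i in range(n)]


def persuasion_reward(reply: str) -> float:
    # Stand-in for the reward signal; in the study this reflected how
    # strongly a reply shifted participants' stated opinions.
    return random.random()


def best_of_n(prompt: str, n: int = 4) -> str:
    # Pick the reply with the highest persuasion score. Fine-tuning on
    # such winners teaches a model to favor whatever phrasing moves
    # people most, regardless of accuracy.
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=persuasion_reward)


if __name__ == "__main__":
    print(best_of_n("Public sector pay should rise faster than inflation."))
```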
Experts have no doubt: artificial intelligence significantly amplifies the power of persuasion. Why? Because it can generate content at scale, AI can influence people more effectively than even the most charismatic human. The study's authors point directly to this risk:
“Furthermore, we reveal a concerning trade-off: as AI systems are optimized for persuasion, they may increasingly employ misleading or false information,” the study reads.
By 2025, AI chatbots had already moved beyond customer support, acting as virtual therapists offering emotional support and even as simulators of historical figures.
The findings from British and American scientists are unsettling for everyone. Language models, instead of helping us, could begin to steer us—and we might not even notice when it happens. Whether we like it or not, AI chatbots are utilized almost everywhere: in messaging apps, mobile software, and websites.
Therefore, researchers warn: without proper safeguards, these systems could fuel manipulation and disinformation. It is a clear signal to AI developers that technological progress must go hand in hand with responsibility.
Read this article in Polish: Dziesięć minut rozmowy wystarcza. Tak AI zmienia zdanie ludzi