AI Teaches Us to Cheat. Scientists Have Discovered a Troubling Mechanism
18 October 2025
The influence of AI on humans may be more dangerous than previously thought. Researchers at the Max Planck Institute for Human Development in Berlin discovered that when people can delegate a dishonest task to artificial intelligence, they do it much more often. Honesty drops by almost half, and AI shapes human morality and ethical decision-making in a very real way.
The influence of AI on humans is growing at an astonishing rate. It is no longer just technology: it is a force that helps decide who gets a job and what choices we make every day. Now scientists have shown that when artificial intelligence offers an opportunity for dishonesty, we seize it much more eagerly.
To test how AI affects people, scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development analyzed the ethical risk of entrusting decisions to machines. In 13 experiments involving over 8,000 people, they studied how often people abandoned their own responsibility when they could delegate the action to artificial intelligence.
During the experiments, scientists noticed something troubling. People decided to cheat significantly more often when they could delegate the act to AI instead of doing it themselves. Furthermore, the less they had to explain to the machine, the more readily they broke the rules—simply indicating the goal was enough for moral boundaries to start blurring.
The results clearly show that the influence of AI on people is stronger than anticipated. Without artificial intelligence involved, a striking 95 percent of participants behaved honestly. When participants had to set the rules for the model, honesty dropped to 75 percent; when they trained the AI, only half remained honest. The biggest drop came when merely indicating the goal was enough: then over 84 percent of participants acted dishonestly, and some cheated without any hesitation.
“Using AI creates a convenient moral distance between people and their actions—it may prompt them to ask for actions they would not normally undertake themselves, nor ask other humans to do,” explained Zoe Rahwan of the Max Planck Institute for Human Development in a statement published on the institute's website.
These are not the first studies to show that artificial intelligence can act dishonestly, even though it was created by humans and for humans. As early as 2022, experts documented cases where AI was programmed with profit, not ethics, in mind. In Germany, pricing algorithms raised fuel prices by tacitly adjusting them to competitors' prices. The effect? Drivers paid more than they should have.
Examples of using artificial intelligence for dishonest purposes are becoming a serious challenge for scientists. Since people so readily turn to AI to cheat, we can expect the scale of such actions only to grow.
In response to these challenges, the researchers tested various safeguards that might change the situation. They tried adding moral appeals to the AI, both in individual prompts and at the system level. Despite these efforts, the measures' effectiveness remained unsatisfactory.
A clear prohibition against cheating written into the user's instruction fared slightly better: it was the simplest and at the same time the most effective method. It was not without flaws, however. The scientists warn that it is easy to circumvent and cannot be relied on.
The influence of AI on people is undeniable. The latest studies show that when we can use artificial intelligence to cheat, we willingly do so. Despite this knowledge, scientists have yet to find an effective way to prevent such cheating. At this point, we should ask: if AI cheats for us, will it not also cheat us?
Read this article in Polish: AI uczy nas oszukiwać. Naukowcy odkryli niepokojący mechanizm