AI Teaches Us to Cheat. Scientists Discover a Disturbing Mechanism

A study by the Center for Humans and Machines at the Max Planck Institute for Human Development in Berlin shows that people act less honestly when they can delegate tasks to AI.

The influence of AI on humans may be more dangerous than previously thought. Researchers at the Max Planck Institute in Berlin discovered that when people can delegate a dishonest task to artificial intelligence, they do it much more often. Honesty drops by almost half—and AI influences human morality and ethical decision-making in a very real way.

How AI’s Influence on Humans Changes Our Decisions

The influence of AI on humans is growing at an astonishing rate. It is no longer just technology; it is a force that helps decide who gets a job and shapes the choices we make every day. Now scientists have shown that when an opportunity for dishonesty arises with the help of artificial intelligence, we seize it much more eagerly.

To test how AI affects people, scientists from the Center for Humans and Machines at the Max Planck Institute for Human Development analyzed the ethical risk of entrusting decisions to machines. In 13 experiments involving over 8,000 people, they studied how often people abandoned their own responsibility when they could delegate the action to artificial intelligence.

13 Experiments and 8,000 Participants – Surprising Conclusions

During the experiments, scientists noticed something troubling. People decided to cheat significantly more often when they could delegate the act to AI instead of doing it themselves. Furthermore, the less they had to explain to the machine, the more readily they broke the rules—simply indicating the goal was enough for moral boundaries to start blurring.

The Influence of AI on People Proved Stronger Than Expected

The results clearly show that the influence of AI on people is stronger than anticipated. Without the involvement of artificial intelligence, a striking 95 percent of participants behaved honestly. When participants had to set explicit rules for the model, honesty dropped to 75 percent. When they trained the AI, only half remained honest. The biggest drop came when simply indicating a goal was enough: more than 84 percent of participants acted dishonestly, and some cheated without any hesitation.

When the Machine Acts for Us, Honesty Decreases

“Using AI creates a convenient moral distance between people and their actions—it may prompt them to ask for actions they would not normally undertake themselves, nor ask other humans to do,” explained Zoe Rahwan of the Max Planck Institute for Human Development in a statement published on the institute’s website.

These are not the first studies to show that artificial intelligence can act dishonestly, even though it was created by humans and for humans. As early as 2022, experts documented cases in which AI systems were deployed with profit, not ethics, in mind. In Germany, pricing algorithms raised fuel prices by adjusting them to the competition, and drivers ended up paying more than they should have.

Is This the End of Human Honesty in the Age of AI?

Examples of using artificial intelligence for dishonest purposes are becoming a serious challenge for scientists. Since people so readily turn to AI to cheat, we can expect the scale of such actions only to grow.

In response to these challenges, the researchers tested various safeguards that might change the situation. They tried adding moral reminders to the AI, both in individual prompts and at the system level. Despite these efforts, the safeguards’ effectiveness remained unsatisfactory.

A clear prohibition against cheating, stated explicitly in the user’s command, fared somewhat better: it proved the simplest and, at the same time, the most effective method tested. Even so, it was not without flaws. The scientists warn that it is easy to circumvent and far from reliable.

The Lasting Impact: How AI Influences Human Morality

The influence of AI on people is undeniable. The latest studies show that when we can offload a task to artificial intelligence, we willingly do so. Despite this knowledge, scientists have not yet found an effective way to prevent cheating. At this point, it is worth asking: if AI will cheat for us, will it not also cheat us?


Read this article in Polish: AI uczy nas oszukiwać. Naukowcy odkryli niepokojący mechanizm

Published by Patrycja Krzeszowska


A graduate of journalism and social communication at the University of Rzeszów. She has been working in the media since 2019. She has collaborated with newsrooms and copywriting agencies. She has a strong background in psychology, especially cognitive psychology. She is also interested in social issues. She specializes in scientific discoveries and research that have a direct impact on human life.
