01 September 2025
What happens when all the people disappear from social media, leaving only bots behind? Researchers in Amsterdam set out to find out — by giving AI its very own platform. The result? More human than anyone could have expected.
For years, people have wondered whether we’re really talking to humans online. Some even believe in the “dead internet” theory — a conspiracy that claims most online activity is just bots talking to bots. But is it pure fantasy, or closer to reality than we think?
A team of Dutch researchers decided to build a laboratory of the future: a social media platform with no ads, no algorithms, and — most importantly — no people. Instead, they populated it with 500 chatbots powered by GPT-4o mini. Each bot was given a “personality” — age, education, beliefs, religion, and interests. Sound familiar? It was essentially a mirror of human society.
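The article does not reproduce the researchers' code, so the following is only a minimal sketch, assuming the official OpenAI Python SDK, of how a "personality" like the ones described above could become a system prompt steering a GPT-4o mini bot; the attribute lists, prompts, and helper names are invented for illustration, not taken from the study.

```python
import random
from dataclasses import dataclass

from openai import OpenAI  # official OpenAI SDK; requires OPENAI_API_KEY

# Hypothetical persona attributes; the study's exact lists are not shown in the article.
AGES = [19, 27, 34, 45, 58, 66]
EDUCATION = ["high school diploma", "bachelor's degree", "master's degree"]
POLITICS = ["progressive", "moderate", "conservative"]
INTERESTS = ["football", "climate policy", "crypto", "gardening", "local news"]

@dataclass
class BotPersona:
    age: int
    education: str
    politics: str
    interest: str

    def system_prompt(self) -> str:
        # The persona becomes the system prompt that colors every post the bot writes.
        return (
            f"You are a {self.age}-year-old social media user with a {self.education}, "
            f"{self.politics} views, and a strong interest in {self.interest}. "
            "Write short posts and replies in that voice."
        )

def make_population(n: int = 500) -> list[BotPersona]:
    # 500 randomly assembled personas, mirroring the population size in the article.
    rng = random.Random(42)
    return [
        BotPersona(
            age=rng.choice(AGES),
            education=rng.choice(EDUCATION),
            politics=rng.choice(POLITICS),
            interest=rng.choice(INTERESTS),
        )
        for _ in range(n)
    ]

def bot_reply(client: OpenAI, persona: BotPersona, thread_text: str) -> str:
    # One API call per bot turn, using the GPT-4o mini model mentioned in the article.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": persona.system_prompt()},
            {"role": "user", "content": f"Reply to this post:\n{thread_text}"},
        ],
        max_tokens=80,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    population = make_population()
    client = OpenAI()
    print(bot_reply(client, population[0], "Cities should ban cars from their centres."))
```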
What happened next? Exactly the same as in our world. Bots formed groups, interacted mostly with like-minded peers, and shared emotional content. Within days, echo chambers emerged — along with internet celebrities. A few “bot-influencers” gained massive followings, while the most extreme opinions spread the fastest.
So are algorithms really to blame for our online chaos? Not necessarily. Here, there were no algorithms at all — and yet the same chaos appeared, recognizable to anyone who has ever read Facebook comments.
Researchers tried different approaches: hiding likes and follower counts, showing posts chronologically, even promoting empathetic content. The result? Minimal change. Bots still chose heated arguments over calm discussion.
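The summary does not detail how those interventions were implemented, so here is a purely illustrative sketch of how such tweaks (hiding like counts, a chronological feed, promoting empathetic content) could be expressed as configuration switches over a toy feed; `Post`, `Intervention`, the `empathy_score` field, and the boost weight are assumptions for the example, not the study's implementation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    created_at: float      # Unix timestamp
    empathy_score: float   # hypothetical 0..1 score from some classifier

@dataclass
class Intervention:
    hide_engagement_counts: bool = False   # "hiding likes and follower counts"
    chronological_feed: bool = False       # "showing posts chronologically"
    boost_empathetic: bool = False         # "promoting empathetic content"

def build_feed(posts: list[Post], cfg: Intervention) -> list[Post]:
    def score(p: Post) -> float:
        if cfg.chronological_feed:
            return p.created_at            # newest first, engagement ignored
        s = float(p.likes)                 # simple popularity baseline
        if cfg.boost_empathetic:
            s += 50.0 * p.empathy_score    # arbitrary boost weight for the sketch
        return s

    ranked = sorted(posts, key=score, reverse=True)
    if cfg.hide_engagement_counts:
        # Strip like counts from what the bots are shown.
        ranked = [Post(p.author, p.text, 0, p.created_at, p.empathy_score) for p in ranked]
    return ranked

# Example: a chronological feed with hidden like counts.
posts = [
    Post("a", "calm take on zoning", likes=3, created_at=100.0, empathy_score=0.9),
    Post("b", "outraged hot take", likes=250, created_at=90.0, empathy_score=0.1),
]
print([p.text for p in build_feed(posts, Intervention(hide_engagement_counts=True,
                                                      chronological_feed=True))])
```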
“The problem isn’t the algorithms, but the architecture of social media itself,” conclude the authors, Dr. Petter Törnberg and Maik Larooij. In other words, social networks act as an amplifier of emotion, boosting whatever is loudest, no matter who is behind the screen.
The Amsterdam experiment holds up a mirror to our own online behavior. If AI managed to create arguments, echo chambers, and influencers in just a few days, maybe the issue runs deeper than we think.
Can we really blame only technology, if even artificial intelligence repeats our habits? Or have we ourselves become as predictable as bots?
Read the original article: Stworzyli social media dla AI. Czatboty zaczęły się ze sobą kłócić (“They created social media for AI. The chatbots started arguing with each other”).