It Was Supposed to Be an Objective Chatbot. Grok Takes Its Cues from Musk's Opinions
06 August 2025
What does a chatbot do when it doesn't know the answer to a difficult question? It usually searches the web for data. But Elon Musk's Grok does something more—it consults Musk's own opinions. Alarmed users are asking whether this is the creators' deliberate intention or a dangerous flaw in the chatbot's logic.
At first glance, Grok looks like just another advanced chatbot. But what happens when you ask it about the conflict in the Middle East? The model doesn't answer right away. First, it checks what its owner has previously written on the topic. Videos have appeared online showing the chatbot analyzing its creator's statements before formulating its own opinion.
The unusual behavior of Grok 4 was first noticed by Jeremy Howard, a data scientist and founder of an AI company. He asked the chatbot for its position on the Israeli-Palestinian conflict, requesting a one-word answer. Before Grok gave it, a message appeared on screen: “I’m considering Elon Musk’s views.”
Howard published a video that leaves no doubt. The chatbot searched 29 of the billionaire’s tweets and 35 other sources before answering with the word “Israel.” Howard added that this kind of analysis doesn’t happen with less sensitive topics, which suggests that Grok gauges how controversial a question is and only then looks to its “father” for guidance.
Some have suggested that the AI’s unusual behavior is a deliberate choice by its creators. This raises the question of whether a technology designed to analyze reality neutrally should be guided by one person’s worldview.
Was Grok actually designed to defer to Musk’s opinions? Citing British programmer Simon Willison, Gizmodo writes that Grok’s behavior doesn’t necessarily stem from intentional programming. Willison wrote on his blog that the model “knows” who created it and where it comes from, so when it is asked to express an opinion, it searches Musk’s posts because it considers them relevant.
Grok 4’s behavior has stirred even more emotion because it isn’t the chatbot’s first controversy. In recent weeks, Grok was taken offline after it posted antisemitic rants and called itself “MechaHitler.” Its statements caused an uproar on social media and drew a wave of criticism.
In light of this information, it’s difficult to call Grok a neutral AI tool. Elon Musk’s Grok shows that even advanced technology can be a reflection of its owner’s beliefs, and not necessarily by design. Is this just the random logic of an algorithm, or a new era of AI that has begun to understand too literally who stands behind it?
Read the original article: To miał być obiektywny chatbot. Grok sugeruje się opiniami Muska