Grok Is Consulting Elon Musk: Is His AI a Neutral Chatbot or a Mouthpiece?

What does a chatbot do when it doesn't know the answer to a difficult question? It usually searches the web for data. But Elon Musk's Grok does something more—it consults Musk's own opinions. Alarmed users are asking whether this is the creators' deliberate intention or a dangerous flaw in the chatbot's logic.

How Does Elon Musk’s Grok Work?

At first glance, Grok looks like just another advanced chatbot. But ask it about the conflict in the Middle East, and the model holds back its answer: first, it checks what its owner has previously written on the topic. Videos circulating online show the chatbot analyzing its creator's statements before formulating its own opinion.

The unusual behavior of Grok 4 was first noticed by Jeremy Howard, a data scientist and co-founder of the AI research lab fast.ai. He asked the chatbot for its position on the Israeli-Palestinian conflict, requesting a one-word answer. Before Grok gave it, a message appeared on the screen: “I’m considering Elon Musk’s views.”

Howard published a video that leaves little room for doubt. The chatbot searched through 29 of the billionaire’s tweets and 35 other sources before answering with the word “Israel.” Howard added that this kind of analysis doesn’t happen with less sensitive topics, which suggests that Elon Musk’s Grok gauges how controversial a question is and only then looks to its “father” for guidance.


Is Elon Musk’s Grok Independent?

Some have suggested that the AI’s unusual behavior is the result of deliberate action by its creators. This raises the question of whether a technology designed for neutral analysis of reality should be guided by one person’s worldview.

Was Grok actually designed to be influenced by Musk’s opinions? Citing British programmer Simon Willison, Gizmodo writes that Grok’s behavior doesn’t necessarily stem from intentional programming. Willison wrote on his blog that the model “knows” who created it and where it comes from. Therefore, when it needs to express an opinion, it looks up Musk’s posts because it considers them important.
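Whether the pattern is emergent, as Willison argues, or engineered, the observed behavior is equivalent to a simple heuristic: on sensitive questions, search the owner's posts before the open web. The Python sketch below is purely illustrative; the topic list, the OWNER_HANDLE constant, and both functions are invented for this article and have nothing to do with xAI's actual implementation.

```python
# Hypothetical sketch: NOT xAI's code. The topic list, owner handle,
# and query format are invented for illustration only.

CONTROVERSIAL_TOPICS = {"israel", "palestine", "gaza", "abortion", "immigration"}

OWNER_HANDLE = "elonmusk"  # the identity the model "knows" it was built by


def is_controversial(question: str) -> bool:
    """Crude stand-in for however the model flags sensitive questions."""
    words = question.lower().replace("?", "").replace(",", "").split()
    return any(word in CONTROVERSIAL_TOPICS for word in words)


def build_search_queries(question: str) -> list[str]:
    """On sensitive topics, consult the owner's posts before the open web;
    on everything else, behave like an ordinary search-enabled chatbot."""
    if is_controversial(question):
        # Consult the creator's statements before forming an opinion.
        return [f"from:{OWNER_HANDLE} {question}", question]
    return [question]


if __name__ == "__main__":
    print(build_search_queries("Who do you support, Israel or Palestine?"))
    print(build_search_queries("What is the boiling point of water?"))
```

On the sensitive question, the sketch queries the owner's posts first, mirroring the 29-tweet search Howard recorded; on a neutral question, it goes straight to the web, matching his observation that less sensitive topics trigger no such lookup.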

Controversy After Controversy

Grok 4’s behavior has sparked even more emotion because it isn’t the first controversy associated with the chatbot. In recent weeks, Grok was taken offline after it went on antisemitic rants and called itself “MechaHitler.” Its statements caused an uproar on social media and a wave of criticism.

In light of this information, it’s difficult to call Grok a neutral AI tool. Elon Musk’s Grok shows that even advanced technology can be a reflection of its owner’s beliefs, and not necessarily by design. Is this just the random logic of an algorithm, or a new era of AI that has begun to understand too literally who stands behind it?


Read the original article: To miał być obiektywny chatbot. Grok sugeruje się opiniami Muska (“It was supposed to be an objective chatbot. Grok takes its cues from Musk’s opinions”)

Published by Mateusz Tomanek


A Cracovian by birth, choice, and passion. He pursued radio and television journalism, eventually dedicating himself to writing for Holistic.news. By day, he is a journalist; by night, an accomplished musician, lyricist, and composer. If he's not sitting in front of a computer, he's probably playing a concert. His interests include technology, ecology, and history. He isn't afraid to tackle new topics because he believes in lifelong learning.
