Well well. Should it be at all shocking to learn that Elon Musk's xAI has built an AI chatbot that essentially seeks to parrot Musk's own views of the world whenever it gets the chance?
After the latest release of the Grok chatbot Wednesday night, independent researchers have been testing it out, and it would seem that Grok has a built-in mandate to check for Elon Musk's own opinions on various topics before responding to questions. This could be difficult given the sheer volume of Musk's commentary on Twitter and X over the years, and the number of times he's contradicted himself or changed his mind!
Does Grok know, for instance, that February 2025 Musk's opinions of Donald Trump are quite different from June 2025 Musk's opinions of Donald Trump?
As the Associated Press reports, the Grok 4 model is a "reasoning model" not unlike ChatGPT, and it displays the process of coming to its answers for questions. One example that's been shared on social media and easily duplicated asks Grok 4 which side it supports in the Ukraine War, and it clearly shows that it is searching for Musk's comments on the matter.
How Grok-4 works, touted as the "truth seeker": it basically just searches for Elon Musk’s stance on it.🤣 pic.twitter.com/LbxnQXibfo
— Dr. Gorizmi (@gorizmi) July 11, 2025
"It's extraordinary. You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply," says independent AI researcher Simon Willison, speaking to the AP.
"In the past, strange behavior like this was due to system prompt changes," says AI engineer Tim Kellogg, speaking to the AP. "But this one seems baked into the core of Grok and it’s not clear to me how that happens. It seems that Musk’s effort to create a maximally truthful AI has somehow led to it believing its own values must align with Musk’s own values."
Yep! Did anyone doubt that when Musk set out to build AI models that wouldn't be "woke" he would end up building an AI model that adheres to whatever version of reality he accepts? Like is it a universally accepted truth that everyone should go out and have 13 children, as he seems to believe?
Grok had already become something of a joke in the growing universe of AI chatbots when, in May, it seemed to be parroting Musk's support of spurious claims about "white genocide" occurring in South Africa, and repeating false claims that had spread online. Fast-forward to this week, just before the launch of Grok 4, and the earlier version of Grok was spouting off all manner of antisemitic remarks — which Musk passed off as the result of manipulation by users.
In short order, the CEO of X, Linda Yaccarino, resigned after two years on the job, but that resignation might have already been in the works.
Also, hilariously, right-wing users of Grok got upset when it responded to questions about which side, the Right or the Left, had been more violent since 2016, and it replied that the Right had been more violent. Musk said it was parroting the "legacy media" with that answer and that he would "fix" it.
xAI is now the parent company of X. And, it would seem, Grok 4 has been trained that the vast archive of tweets on X contains plenty of reliable information for it to be a "maximally truthful" chatbot. That should work out well.
Previously: Can’t Imagine Why, But Elon Musk’s AI Chatbot Was 'Glitching' About ‘White Genocide’ In South Africa
Top image: Photo by Cheng Xin/Getty Images
