Elon Musk’s AI tool Grok has gone on a bizarre bender where it’s spewing answers about a fictional South African “white genocide” in response to completely unrelated queries, raising questions about whether Musk’s crackpot racism is baked into the product.

It explains a lot about Elon Musk that he was born into a wealthy ruling-class family in apartheid-era South Africa, and that he's maybe one of those white South Africans who’d like to see some of that apartheid come back. And you'd have to think there’s some sort of connection there, because Musk’s xAI chatbot Grok went through a strange phase last week in which it was serving up answers about a racist conspiracy theory alleging a “white genocide” of farmers in South Africa, as Bloomberg reports, and giving these answers in response to completely mundane, unrelated questions about video games, HBO Max, and baseball player salaries.


As seen above, the Grok chatbot is kicking up some serious “Sir, this is a Wendy’s” sentiment with these unrelated answers about “white genocide” that, for whatever reason, Grok is obsessed with dragging into the conversation. Users galore last week just typed the prompt “@grok Is this true?” into a Twitter thread, and they’d be served a strange primer on whether or not a “white genocide” ever happened in South Africa.


Maybe this is what Elon Musk meant when he said Grok was a chatbot "with a rebellious streak." And it sure calls to mind that time that Musk was crowing about other AI chatbots being too politically correct!


Whoever runs public relations for Grok was forced to post an embarrassed statement on Thursday, claiming there was some sort of “unauthorized modification” to its technology, a modification which “violated xAI’s internal policies and core values.”

“Our existing code review process for prompt changes was circumvented in this incident,” the statement added. “We will put in place additional checks and measures to ensure that xAI employees can’t modify the prompt without review.”


Golly, any theories on which "employee" may have performed this “unauthorized modification”? And is it really that easy to make an AI tool go haywire across a whole platform?


This raises questions not just about how frequently AI gets things flat-out wrong, but also about the degree to which Grok and other AI tools can be manipulated by tech executives, or by outside bad actors hoping to promote misinformation.


The higher-ups at Grok now claim that they will be publishing “our Grok system prompts openly on GitHub. The public will be able to review them and give feedback to every prompt change that we make to Grok.” They also said they’re reviewing their code review process, and putting in a “24/7 monitoring team” that will be human and not just AI.

But it seems almost certain that something like this is going to happen again, whether it’s done by some malicious outside actor, or by Musk himself getting under the hood and fiddling with the tool’s mechanics for his own ideological ends.

Related: Google Suspends Gemini Image Module After Backlash Over Diverse Depictions of Founding Fathers, Nazis [SFist]

Image: WASHINGTON, DC - MARCH 24: White House Senior Advisor, Tesla and SpaceX CEO Elon Musk (L) listens during a cabinet meeting held by U.S. President Donald Trump at the White House on March 24, 2025 in Washington, DC. This is Trump's third cabinet meeting of his second term, and it focused on spending cuts proposed by the Department of Government Efficiency (DOGE) (Photo by Win McNamee/Getty Images)