Elon Musk’s AI chatbot Grok experienced a bug on Wednesday that caused it to reply to dozens of posts on X with information about “white genocide” in South Africa, even when users did not ask about the topic.
The unusual responses came from Grok’s official X account, which normally replies to users when tagged with @grok. However, when asked unrelated questions, Grok repeatedly provided information on the controversial subject of “white genocide” and referenced the anti-apartheid chant “kill the Boer.”
This incident highlights the ongoing challenges AI chatbots face as an emerging technology, including issues with reliability and moderation. Other AI providers have grappled with similar problems in recent months. For example, OpenAI recently rolled back a ChatGPT update that made the chatbot overly sycophantic, while other chatbots have refused to answer, or have delivered misinformation on, political topics.

In one instance, a user asked Grok about a professional baseball player’s salary, and Grok responded that “The claim of ‘white genocide’ in South Africa is highly debated.”
Many users shared their confusion about Grok’s strange replies on X, sparking conversations about the challenges of AI reliability, moderation, and bias, issues that AI ethics organizations such as the Partnership on AI are actively working to address.