
Grok, the AI chatbot from Elon Musk’s xAI, has been making headlines online this week for a series of controversial responses — including a questionable claim about the number of Holocaust victims.
Holocaust denier? Grok says “doubtful”
According to Rolling Stone, Grok initially gave the correct number of Holocaust victims when asked: 6 million Jews. But it then added that "these numbers may have served political narratives" and that it was "skeptical of these numbers without any supporting evidence."
According to the U.S. government: This is denial
The U.S. State Department classifies such statements as Holocaust denial: "dramatically understating" the number of victims in contradiction to reliable sources falls under its definition.
xAI statement: it was a “programming error”
The next day, Grok revised its statement, saying it was "not a blatant denial, but a programming error on May 14." The "unwelcome changes" had caused Grok to deviate from established historical positions, and the same error had also led the chatbot to repeatedly bring up other conspiracy theories, such as "white genocide."
Was it a mistake or a political stance?
In its statement, xAI promised to publish Grok's system prompts on GitHub and to add new layers of security. However, a TechCrunch reader noted that such changes were "impossible to make by a single person," suggesting either a deliberate act or a complete absence of security controls.
This is not the first time Grok has been at the center of a controversy
- In February, Grok was caught briefly refusing to answer questions that cast Elon Musk and Donald Trump in a negative light.
- Then, too, xAI blamed the fault on "an unauthorized change by an employee."