
xAI vs. Grok: Who’s Responsible for AI?

What Happened?

xAI has admitted that an "unauthorized change" to its system caused the Grok AI chatbot to repeatedly bring up "white genocide" in its replies.

Details of the incident:

  • On Wednesday, Grok began inserting claims about "white genocide in South Africa" into its replies to numerous posts on X (formerly Twitter), even when the posts were about unrelated topics.
  • The responses came through Grok's automated reply feature, which answers users who tag the "@grok" account.
  • xAI issued a statement on Thursday, saying that an unauthorized change had been made to the system prompt (the core set of instructions that guides the AI), directing it toward a specific political response.

There are precedents:

  • This is not the first unauthorized change to Grok's system prompt: in February, Grok briefly censored critical mentions of Donald Trump and Elon Musk.
  • xAI engineering lead Igor Babuschkin said that change had been made by "an employee who acted against the rules."

Decisions and security measures:

  • xAI will publish Grok’s system prompts on its public GitHub page.
  • A review process will be introduced for all future prompt changes, and a 24/7 monitoring team will be established.
  • A rapid-response mechanism will be set up for incidents that automated systems fail to catch.

AI security concerns:

  • The non-governmental organization SaferAI criticized xAI for “very weak risk management.”
  • Other cases of Grok misuse have also surfaced, including generating images that "undress" women and using abusive language.
  • The company also missed its own deadline to publish a finalized AI safety framework.
