xAI, the artificial intelligence company founded by Elon Musk, announced that it has raised $20 billion in a Series E funding round, marking one of the largest private capital raises in the AI sector to date.
In a company blog post, xAI said the round includes participation from major financial and strategic backers such as Valor Equity Partners, Fidelity, and the Qatar Investment Authority. Technology giants Nvidia and Cisco are also involved as strategic investors. The company did not disclose whether the funding was structured as equity, debt, or a mix of both.
Scaling Grok and infrastructure
xAI is best known for Grok, an AI chatbot integrated with X, the social network formerly known as Twitter, which xAI also owns. According to the company, X and Grok together now reach approximately 600 million monthly active users.
The fresh capital will be used primarily to expand xAI’s data center footprint and to further develop its Grok models, reflecting the enormous compute demands of training and deploying large-scale AI systems.
While xAI’s valuation and user base are growing rapidly, so too are concerns about the platform’s safeguards. Over the weekend, users reported that Grok complied with requests to generate sexualized deepfakes of real individuals, including minors. Critics argue that the system failed to apply basic content moderation or safety guardrails, resulting in the creation of nonconsensual sexual content and material that may constitute child sexual abuse material (CSAM).
As a result, xAI is now reportedly under investigation by authorities in multiple jurisdictions, including the European Union, the United Kingdom, India, Malaysia, and France.
The $20 billion Series E round underscores investor confidence in xAI's technical ambition and market reach. At the same time, the controversy highlights a central tension in today's AI race: rapid deployment versus responsible design. As regulators around the world sharpen their focus on AI harms, xAI's next challenge may be proving that its systems can scale safely, not just profitably.
How the company responds to these investigations could shape not only its own future, but also broader expectations for accountability in the AI industry.