
A recent study has revealed surprising differences between popular AI chatbots when it comes to protecting user privacy. The research, conducted by Incogni, a personal data removal service, compared leading generative AI platforms—including ChatGPT, Gemini, Claude, Copilot, Grok, Pi AI, DeepSeek, Meta AI, and Le Chat—to see which ones are most and least invasive when handling personal data.
According to the report, the most privacy-friendly chatbot currently on the market is Le Chat, developed by the French startup Mistral AI. The analysis ranked Le Chat at the top because it collects only limited personal information and showed strong performance on key AI-specific privacy concerns. Notably, Le Chat is one of the few chatbots that restricts the sharing of user-generated prompts to only necessary service providers, a policy it shares with Pi AI, another privacy-focused platform.
The researchers evaluated each chatbot across 11 specific criteria, including how the models are trained, the transparency of their data policies, whether users’ prompts are stored or reused for further AI training, and whether data is shared with third parties. Each platform was given a score from 0 (most privacy-friendly) to 1 (least privacy-friendly).
Following Le Chat, OpenAI’s ChatGPT ranked second. Despite growing criticism about OpenAI’s data practices, the study pointed out that ChatGPT offers a clear and accessible privacy policy, explaining how and where user data is handled. Still, the researchers raised concerns about how user inputs are processed and whether user interactions with the platform feed back into OpenAI’s model improvements.
In third place was Grok, the chatbot from Elon Musk’s xAI, although the researchers noted transparency gaps and flagged the large amount of data collected as a concern.
Meanwhile, Anthropic’s Claude performed similarly to Grok but was marked down further due to concerns about how user data interacts with the platform’s broader training process.
At the bottom of the list sits Meta AI, making it the most privacy-invasive chatbot according to the study. Close behind Meta were Google’s Gemini and Microsoft’s Copilot, both criticized for their lack of clear user opt-outs regarding how prompts are collected and reused for training future models.
A major theme across the worst-performing platforms was user choice—or the lack of it. The study highlighted that many of these services don’t provide meaningful ways for users to prevent their queries and conversations from being stored or used for model training.
For users concerned about privacy, the findings suggest it’s worth paying close attention to how your favorite chatbot handles your data—because not all AI tools treat your personal information the same way.
Prepared by Navruzakhon Burieva