OpenAI blocks accounts suspected of working on a surveillance tool

OpenAI has blocked several accounts that were found using ChatGPT to write sales pitches and refine code for a surveillance tool that the company suspects originated in China. The action is part of OpenAI's ongoing efforts to prevent the misuse of AI technologies for malicious purposes.

A report published on Friday stated that these accounts had used ChatGPT to promote and enhance an AI-powered assistant designed to monitor anti-China protests in the U.S., the U.K., and other Western countries. The report indicated that the gathered data was intended to be transmitted to Chinese authorities.

This development comes at a time when concerns are growing in the U.S. regarding China’s use of American technology to advance its own interests. OpenAI’s head of intelligence and investigations, Ben Nimmo, described this as “a clear example of how an authoritarian state attempted to exploit U.S.-developed democratic AI for its own purposes.”

By publicly sharing such cases, OpenAI aims to highlight “how authoritarian regimes are trying to leverage U.S.-built AI against the U.S. and its allies, as well as their own citizens,” the company stated.

OpenAI reported that the accounts in question had also used other AI tools, including Meta's open-source Llama model. In response, Meta said that if its model was used, it was likely just one of many tools available to the operators. OpenAI noted that it had no visibility into whether the code was actually deployed.

The surveillance tool, referred to as the “Qianyue Overseas Public Opinion AI Assistant,” has not been independently verified. However, OpenAI had access to promotional materials describing the software as a tool for preparing surveillance reports for Chinese authorities, intelligence personnel, and embassy staff. The software appeared to be particularly focused on tracking discussions in Western countries related to protests about human rights in China. It was said to collect data from social media platforms such as X, Facebook, and Instagram.

OpenAI’s policies strictly prohibit the use of AI for unauthorized surveillance or invasion of privacy, particularly when done on behalf of governments that seek to suppress human rights.

In recent months, OpenAI has been warning U.S. policymakers about the economic and national security risks posed by China’s advancements in AI. Notably, Chinese startups like DeepSeek are rapidly developing highly competitive AI models, raising concerns within the U.S. Some U.S. lawmakers have criticized Meta for making its AI models open-source, arguing that it accelerates China’s technological progress. While OpenAI has so far kept its models proprietary, the company is now considering open-sourcing some of them due to increasing competition from DeepSeek and others.

A Meta spokesperson emphasized the global expansion of AI technologies, stating that restrictions on Western technology may not significantly hinder malicious actors. “China is already investing trillions of dollars to surpass the U.S. in technology, and Chinese tech companies are releasing AI models at nearly the same pace as U.S. firms,” the company’s statement read.

OpenAI’s report also detailed other cases of accounts that were banned for misuse of its tools. These included Iranian influence operations that used ChatGPT to generate social media posts and articles, fraudulent job postings linked to scams resembling North Korean schemes, and Chinese-linked accounts producing Spanish-language articles critical of the U.S. government.
