
San Francisco: Artificial intelligence company OpenAI has revealed that it has permanently banned several ChatGPT accounts found to be developing tools for large-scale social media surveillance and cyberattacks. According to the company’s latest report, some of the banned accounts were linked to Chinese government entities and Russian hacker groups.

OpenAI said one user had been using ChatGPT to draft promotional materials and project plans for an AI-powered “social media listening system” intended for government use. The system, described internally as a “social media probe,” was designed to scan platforms such as X (formerly Twitter), Facebook, Instagram, Reddit, TikTok, and YouTube for extremist speech, political activity, and religious content, potentially turning AI into a real-time digital monitoring tool.
Uyghur Surveillance Project Raises Alarms
The report also uncovered another troubling case involving a user suspected of being linked to a government agency. That account was using ChatGPT to help draft a proposal titled “High-Risk Uyghur-Related Inflow Warning Model.”
The proposed model aimed to analyze transportation bookings and compare them with police databases to flag and monitor travel activity among the Uyghur Muslim community. In its statement, OpenAI wrote:
“The People’s Republic of China (PRC) is making real progress in advancing its autocratic version of AI. Some of this usage appears aimed at supporting large-scale monitoring of online and offline activity—highlighting the importance of continued vigilance against authoritarian abuse.”
Although ChatGPT is not officially available in China, the company noted that these users likely accessed the platform through VPNs.
Russian Hackers Used ChatGPT for Malware Development
OpenAI also disclosed that it had banned several Russian hackers who were using ChatGPT to develop and refine malware, including Remote Access Trojans (RATs) and credential-stealing programs.
The company said these groups have begun disguising the traces of AI-generated content, for example by removing stylistic markers such as em-dashes (—), making it harder to identify their malicious code and messages as machine-generated.
“ChatGPT Used to Stop More Scams Than Create Them,” Says OpenAI
Despite the misuse, OpenAI emphasized that its tools are used far more often for protection than harm.
“Our current estimate is that ChatGPT is being used to identify scams up to three times more often than it is being used to generate them,” the company stated.
Since beginning public threat reporting in February 2024, OpenAI says it has disrupted and reported over 40 networks that violated its usage policies.
No Evidence of New AI-Driven Threats
The company clarified that it has found no evidence that ChatGPT has enabled new types of cyber tactics or offensive capabilities.
“Threat actors are primarily integrating AI into existing workflows rather than creating entirely new ones,” OpenAI noted.
“Our models consistently refuse clearly malicious or harmful requests.”
AI’s Geopolitical Tipping Point
The report highlights how artificial intelligence has evolved from a technological tool into a geopolitical instrument, one that can be used for mass surveillance, information control, and cyber warfare. As governments around the world grapple with the ethical and regulatory implications of AI, OpenAI’s findings underscore a stark reality:
“The future of AI will depend on whether humanity chooses to wield it as a tool of power—or as a force for transparency and freedom.”