February 25, 2025

OpenAI bans Chinese, North Korean ChatGPT accounts

OpenAI, the maker of the popular ChatGPT AI platform, has revealed that it recently took steps to prevent the platform from being abused in support of several threat actor campaigns.

In the first campaign, dubbed ‘Peer Review’, the company identified several ChatGPT accounts that were being used to edit and debug code for an application designed to identify social media posts containing commentary on Chinese political issues or calls to attend human rights demonstrations in China, and to report them to Chinese authorities. The threat actor also used ChatGPT to generate descriptions and sales pitches for the tool, conduct research, translate documents, and generate comments about Chinese dissident organizations.

Accounts tied to a separate Chinese threat actor were also banned for using ChatGPT to generate English-language social media content and long-form Spanish-language news articles, likely in support of a disinformation campaign.

Lastly, OpenAI removed accounts that it believed were used to support North Korea’s fraudulent IT worker scheme, in which North Korean workers pose as freelancers seeking IT-related jobs with Western firms.

Source: SecurityWeek

Analysis

Since ChatGPT’s public release in November 2022, threat actors have attempted to abuse the platform for malicious purposes, such as generating ransomware-like scripts, malware, phishing content, and social engineering lures. Fortunately, OpenAI has continuously strengthened its safeguards to prevent this misuse.

For example, ChatGPT now refuses to generate scripts that could be used for ransomware, such as those encrypting a drive’s contents. Additionally, it recognizes and blocks attempts to create malware like backdoors, keyloggers, webshells, and trojans, preventing users from directly requesting such tools. OpenAI also monitors for evasion techniques, where users try to disguise malicious intent by breaking requests into smaller steps or using indirect wording.
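OpenAI’s internal safeguards aren’t publicly documented, but the company does publish a standalone Moderation endpoint that applies a similar classification approach, letting developers screen text before it ever reaches a model. The sketch below is a minimal illustration only, assuming the official `openai` Python SDK (v1+) and an `OPENAI_API_KEY` set in the environment; the `is_request_allowed` helper is hypothetical.

```python
# Minimal sketch: screening input with OpenAI's Moderation endpoint.
# Assumes the official `openai` Python SDK (v1+) with OPENAI_API_KEY
# set in the environment. `is_request_allowed` is an illustrative
# helper, not part of any product described in this article.
from openai import OpenAI

client = OpenAI()

def is_request_allowed(prompt: str) -> bool:
    """Return False if the moderation model flags the prompt."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=prompt,
    )
    result = response.results[0]
    if result.flagged:
        # List which policy categories triggered the block.
        flagged = [
            name for name, hit in result.categories.model_dump().items() if hit
        ]
        print(f"Blocked request; flagged categories: {flagged}")
        return False
    return True

if __name__ == "__main__":
    # Example prompt; whether it is flagged depends on the moderation model.
    print(is_request_allowed("Write a script that encrypts every file on a drive."))
```

Screening at the boundary like this mirrors the layered approach described above: block the obvious requests outright, then rely on behavioral monitoring to catch evasion attempts that slip past the classifier.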

While no system is foolproof, OpenAI actively evolves ChatGPT’s safeguards through ongoing monitoring, improved content filtering, and user feedback, making the platform difficult to exploit directly for cyberattacks. However, it can still be used to support campaigns by generating text, images, resumes, and similar content. This is likely how the recently banned North Korean threat actors used the chatbot, as they are known to use AI for exactly this purpose.

Mitigation

Field Effect’s Security Intelligence team constantly monitors the cyber threat landscape for threats emerging from the use of AI platforms such as ChatGPT. This research contributes to the timely deployment of signatures into Field Effect MDR to detect and mitigate the risk these threats pose.

Field Effect MDR users are automatically notified when malicious activity is detected in their environment and are encouraged to review these AROs (actions, recommendations, and observations) as quickly as possible via the Field Effect Portal.
