At a glance: Google Threat Intelligence Group uncovered new malware campaigns using AI tools like Gemini and Hugging Face to evolve and automate attacks. Malware families such as PROMPTFLUX and PROMPTSTEAL call AI APIs to rewrite their own code, generate commands, and evade detection. Organizations should monitor AI API activity, restrict token access, and update endpoint protections to defend against these adaptive, AI-driven threats.
Threat summary
On November 5, 2025, Google Threat Intelligence Group (GTIG) published new findings on adversaries leveraging generative artificial intelligence (AI) tools, including Google Gemini and open-source models hosted on Hugging Face, in live malware campaigns. This update builds on Google’s January 2025 report, which first documented interest in AI tools by state-backed and financially motivated groups.
One of the malware strains, named PROMPTFLUX, prompts the Google Gemini application programming interface (API) to rewrite its own source code on an hourly basis, then stores the obfuscated version in the Startup folder to maintain persistence. It also attempts lateral movement by replicating itself to removable media and mapped network locations.
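For defenders, one practical starting point against this kind of behavior is to hunt for scripts dropped into the Windows Startup folders that embed the public Gemini API hostname. The sketch below is a minimal illustration under that assumption, not a detection for PROMPTFLUX specifically; the indicator string, folder paths, and file extensions are generic choices a team would tune to its own environment.

```python
import os
import re
from pathlib import Path

# Illustrative indicator: the public Gemini API hostname. A generic hunting
# pattern, not a confirmed PROMPTFLUX IOC from the GTIG report.
GEMINI_ENDPOINT = re.compile(rb"generativelanguage\.googleapis\.com", re.IGNORECASE)

# Typical per-user and all-users Startup folder locations on Windows (assumed defaults).
STARTUP_DIRS = [
    Path(os.environ.get("APPDATA", "")) / "Microsoft/Windows/Start Menu/Programs/Startup",
    Path(os.environ.get("PROGRAMDATA", "")) / "Microsoft/Windows/Start Menu/Programs/StartUp",
]

SCRIPT_EXTS = {".vbs", ".js", ".ps1", ".bat", ".cmd"}

def hunt_startup_folders():
    """Flag Startup-folder scripts that reference the Gemini API endpoint."""
    for folder in STARTUP_DIRS:
        if not folder.is_dir():
            continue
        for path in folder.iterdir():
            if path.suffix.lower() in SCRIPT_EXTS and GEMINI_ENDPOINT.search(path.read_bytes()):
                print(f"[!] possible self-rewriting script persistence: {path}")

if __name__ == "__main__":
    hunt_startup_folders()
```

A real deployment would feed matches into endpoint telemetry rather than printing them, and would extend coverage to removable media and mapped network drives given the replication behavior described above.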
Another strain, PROMPTSTEAL, linked to the Russian state-sponsored actor APT28, queries a large language model (LLM) to generate system commands at run time instead of relying on traditional hard-coded instructions. Disguised as an image generation tool, it walks users through prompts while covertly executing commands in the background, likely using stolen API tokens. These commands collect system data and copy documents, which are then exfiltrated. Recent samples show ongoing development, including added obfuscation and changes to command-and-control infrastructure.
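Because the commands are generated through a hosted model API at run time, the network footprint, outbound requests to model-inference endpoints from an unexpected process, is another place to look. The sketch below assumes a proxy log exported as CSV with timestamp, src_host, process, and dst_host columns; the endpoint hostnames and allow-listed process names are placeholders for illustration, not indicators taken from the report.

```python
import csv
import sys

# Hosts associated with hosted-model inference; illustrative, not a complete list.
AI_API_HOSTS = {
    "api-inference.huggingface.co",
    "generativelanguage.googleapis.com",
}

# Processes expected to talk to AI APIs in this hypothetical environment.
ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe", "approved_ai_client.exe"}

def flag_unexpected_ai_traffic(proxy_log_csv):
    """Read a proxy log (timestamp,src_host,process,dst_host) and flag AI API
    requests made by processes outside the allow list."""
    with open(proxy_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            if row["dst_host"] in AI_API_HOSTS and row["process"] not in ALLOWED_PROCESSES:
                print(f"[!] {row['timestamp']} {row['src_host']}: "
                      f"{row['process']} -> {row['dst_host']}")

if __name__ == "__main__":
    flag_unexpected_ai_traffic(sys.argv[1])
```

In practice this logic would live in a SIEM query rather than a standalone script, but the idea is the same: AI API traffic from a process that has no business making it deserves a closer look.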
GTIG confirmed the AI involvement through forensic analysis of malware samples and API traffic. PROMPTFLUX contains hard-coded prompts and API keys that interact with Gemini’s endpoint, while PROMPTSTEAL logs its queries to a model hosted on Hugging Face.
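That kind of evidence often surfaces in basic static triage. As a rough illustration, with heuristic thresholds and keyword lists that are assumptions rather than published indicators, printable strings can be pulled from a sample and filtered for Google-style API keys or prompt-like natural language:

```python
import re
import sys

# Printable ASCII runs of 24+ characters, a typical Google API key shape, and
# words often seen in embedded LLM prompts. All thresholds here are heuristics.
PRINTABLE = re.compile(rb"[ -~]{24,}")
KEY_LIKE = re.compile(rb"AIza[0-9A-Za-z_\-]{35}")
PROMPT_WORDS = (b"rewrite", b"obfuscate", b"generate", b"respond", b"script")

def triage(sample_path):
    """Print strings from a sample that look like API keys or embedded prompts."""
    with open(sample_path, "rb") as fh:
        data = fh.read()
    for match in PRINTABLE.finditer(data):
        s = match.group()
        if KEY_LIKE.search(s):
            print(f"[key?]    {s.decode(errors='replace')}")
        elif sum(w in s.lower() for w in PROMPT_WORDS) >= 2:
            print(f"[prompt?] {s.decode(errors='replace')}")

if __name__ == "__main__":
    triage(sys.argv[1])
```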
GTIG observed additional cases where threat actors from China and Iran successfully manipulated Gemini by impersonating students involved in capture-the-flag (CTF) competitions or academic research projects. Through this social engineering, the actors used Gemini to generate phishing content, design technical infrastructure, and create custom malware, including web shells and Python-based command-and-control servers.
In some cases, reliance on LLMs led to operational security failures. One actor submitted a script to Gemini that revealed hard-coded elements such as the command-and-control domain and encryption key, allowing defenders to disrupt the campaign and gain insight into the attacker’s infrastructure. Google also observed cybercriminal marketplace posts advertising AI tools and services using language similar to legitimate marketing. These posts promoted AI for tasks such as phishing, malware creation, reconnaissance, vulnerability exploitation, and code generation.
Analyst insight
While malware-as-a-service platforms have historically relied on automation, much of the data analysis has required manual coding by human operators. The integration of artificial intelligence is now streamlining these processes, enabling threat actors with limited resources to operate more efficiently and at reduced cost. As adversaries continue to refine their use of AI, the likelihood of scalable and adaptive attacks will grow. The security of the broader ecosystem will increasingly depend on AI tool providers, such as Google, to implement countermeasures like disabling compromised assets and strengthening model safeguards to limit misuse.
Meanwhile, organizations can restrict access to generative AI tools in sensitive environments and monitor API usage for anomalies. Prompt filtering, audit logging, and usage throttling can help detect and disrupt misuse. AI developers can enhance model classifiers, enforce stricter usage policies, and remove malicious assets. These actions support early detection and containment of AI-enabled threats.
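For teams that route generative AI traffic through an internal gateway, even a simple review of the audit log can surface both anomalous request volume and risky prompt content. The sketch below assumes a hypothetical JSON-lines log with user, timestamp, and prompt fields; the hourly threshold and phrase list are placeholders to tune, not recommended values.

```python
import json
import sys
from collections import Counter
from datetime import datetime

HOURLY_THRESHOLD = 50  # requests per user per hour; tune to normal usage
RISKY_PHRASES = ("api key", "reverse shell", "obfuscate this script")

def review_audit_log(log_path):
    """Flag high-volume users and prompts containing high-risk phrases."""
    per_user_hour = Counter()
    with open(log_path) as fh:
        for line in fh:
            event = json.loads(line)
            hour = datetime.fromisoformat(event["timestamp"]).strftime("%Y-%m-%d %H:00")
            per_user_hour[(event["user"], hour)] += 1
            # Crude prompt filtering: surface prompts containing risky phrases.
            if any(term in event["prompt"].lower() for term in RISKY_PHRASES):
                print(f"[prompt] {event['user']} at {event['timestamp']}: {event['prompt'][:80]}")
    for (user, hour), count in per_user_hour.items():
        if count > HOURLY_THRESHOLD:
            print(f"[volume] {user} made {count} AI API requests during {hour}")

if __name__ == "__main__":
    review_audit_log(sys.argv[1])
```

Usage throttling follows from the same counters: once a user or host crosses the volume threshold, the gateway can delay or deny further requests pending review.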
Field Effect protects against threats like malware through layered, intelligence-driven defenses that combine advanced analytics, real-time monitoring, and proactive threat hunting. The Field Effect threat intelligence team tracks emerging tactics, including the use of LLMs in malware development, and integrates these insights into Endpoint Detection and Response detection rules.
By correlating behavioral indicators across environments, Field Effect can identify signs of AI-driven obfuscation, lateral movement, and data exfiltration. The platform flags suspicious interactions with generative AI tools, helping clients assess risk and contain threats early. Combined with expert-led analysis and tailored recommendations, Field Effect enables organizations to stay ahead of evolving adversary techniques.