Google says nation-state actors used Gemini AI for reconnaissance and attack support in cyber operations.
Google DeepMind and the Google Threat Intelligence Group (GTIG) report a rise in model extraction, or “distillation,” attacks aimed at stealing AI intellectual property; Google says it has detected and blocked these attempts. While APT groups have not breached frontier models, private firms and researchers have tried to clone proprietary systems. State-backed actors from North Korea, Iran, China, and Russia use AI for research, targeting, and phishing. Threat actors are also experimenting with agentic AI, AI-powered malware such as HONESTCUE, and underground “jailbreak” services.
Threat actors now use large language models to craft polished, culturally accurate phishing messages that remove common red flags like poor grammar. They also run “rapport-building” phishing, holding realistic multi-step conversations to gain trust before delivering malware.
Google reported that North Korea-linked hacker group UNC2970 used its Gemini AI model to gather intelligence on targets and support cyber operations. The company also said other threat groups now weaponize generative AI to speed up attack stages, run information operations, and even attempt model extraction attacks.
“The North Korean government-backed actor UNC2970 has consistently focused on defense targeting and impersonating corporate recruiters in their campaigns. The group used Gemini to synthesize OSINT and profile high-value targets to support campaign planning and reconnaissance.” reads the report published by Google. “This actor’s target profiling included searching for information on major cybersecurity and defense companies and mapping specific technical job roles and salary information.”
Iran-linked group APT42 also used generative AI tools like Gemini to boost reconnaissance and targeted social engineering. The group searched for official email addresses, researched organizations to build believable pretexts, and created tailored personas based on target biographies. The nation-state actor also used AI for language translation and understanding local context. Google disrupted the activity and disabled related assets.
In September 2025, Google began tracking a new malware strain called HONESTCUE that uses the Gemini API to generate malicious C# code on demand. Instead of storing full payloads, the malware sends prompts to Gemini, receives source code for a second-stage downloader, compiles it in memory with .NET tooling, and executes it without writing files to disk. This fileless approach helps it evade detection. The attackers also host payloads on platforms such as Discord’s CDN. Researchers believe a single actor or small group is testing this AI-assisted malware as a proof of concept.
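To make the pattern GTIG describes concrete, the sketch below shows, in deliberately benign form, how a program can fetch source code from the Gemini API and run it in memory. The model name, the prompt, and the use of the google-generativeai Python SDK are illustrative assumptions, not HONESTCUE’s actual code.

```python
import os
import google.generativeai as genai  # assumed SDK: pip install google-generativeai

# Configure the client; the key comes from the environment, never the source.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption

# HONESTCUE-style pattern: the binary carries a prompt instead of a payload.
# This benign stand-in requests trivial code rather than a downloader.
response = model.generate_content(
    "Reply with only one line of Python that prints the word hello, no markdown."
)
source = response.text.strip()  # real code would also sanitize the response

# "Fileless" step: compile and execute the returned text in memory, so the
# second stage never touches disk.
exec(compile(source, "<generated>", "exec"))
```

In HONESTCUE’s case the generated code is C# compiled with .NET features rather than Python, but the control flow is the same: prompt out, source code back, in-memory compilation and execution. For defenders, outbound LLM API calls followed by in-memory compilation are a telemetry combination worth inspecting.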

In November 2025, GTIG found COINBAIT, a phishing kit built with AI assistance that impersonates a major cryptocurrency exchange to steal login credentials. Some of the activity links to UNC5356, a group known for SMS and voice phishing. The kit was likely created with Lovable AI and built as a complex React website, and it includes detailed “Analytics:” debug logs that reveal how it tracks victims and steals data. The attackers hid their infrastructure behind Cloudflare and other trusted services to avoid detection. COINBAIT signals a shift toward modern web tooling and cloud services, may be shared across different groups, and also connects to AI-hosted ClickFix scams that trick users into installing malware such as ATOMIC.
Underground forums show strong demand for AI tools built for cybercrime. Since most threat actors cannot build their own models, they rely on established services like Gemini. One example, Xanthorox, was marketed as a private, custom-built AI for malware and phishing, but it actually ran on commercial and open-source AI tools layered together.
Attackers need stolen API keys to scale abuse, creating risks for organizations using cloud AI services. Criminals often exploit weak security in open-source AI platforms to steal and resell API keys, fueling a black market.
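As a simple illustration of the key-hygiene problem, the hypothetical pre-commit check below flags strings matching the widely published pattern for Google API keys (a fixed “AIza” prefix followed by 35 characters) before they land in a public repository. The regex and the script are illustrative assumptions, not a complete secret scanner.

```python
import pathlib
import re

# Widely published pattern for Google API keys (fixed "AIza" prefix plus
# 35 more characters), as used by common secret scanners such as gitleaks.
KEY_PATTERN = re.compile(r"AIza[0-9A-Za-z_\-]{35}")

def scan(root: str = ".") -> None:
    """Walk files under root and report strings that look like hardcoded keys."""
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for match in KEY_PATTERN.finditer(text):
            # Print only a prefix so the report itself does not leak the key.
            print(f"{path}: possible hardcoded API key: {match.group()[:8]}...")

if __name__ == "__main__":
    scan()
```

Rotating exposed keys quickly and restricting each key to the specific APIs and clients that need it limits the resale value of anything that does leak.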
Google disabled accounts linked to this abuse and continues strengthening safeguards, threat detection, red teaming, and secure AI development through frameworks like SAIF and research projects such as Big Sleep and CodeMender.
“The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly.” concludes Google.
Source: SecurityAffairs
Source Link: https://securityaffairs.com/187958/ai/google-state-backed-hackers-exploit-gemini-ai-for-cyber-recon-and-attacks.html