Multiple state-sponsored groups are using Google’s AI-powered Gemini assistant to boost their productivity and to research potential attack infrastructure and reconnaissance targets. According to Google’s Threat Intelligence Group (GTIG), these government-linked advanced persistent threat (APT) groups are primarily using Gemini as an efficiency aid rather than to develop or execute novel AI-driven cyberattacks capable of bypassing conventional defenses.
Threat actors have been experimenting with AI tools to support their operations with varying degrees of success, as these tools can at least shorten their preparation time. Google has observed Gemini activity linked to APT groups from more than 20 countries, with the heaviest use coming from Iran and China.
The observed activity included:

- assistance with coding tasks for developing tools and scripts
- research into publicly disclosed vulnerabilities
- explanations and translations of technologies
- finding details about target organizations
- searching for methods to evade detection, escalate privileges, or perform internal reconnaissance in compromised networks
APT Engagement with Gemini
APT groups from Iran, China, North Korea, and Russia have all experimented with Gemini, exploring its potential to help them discover security gaps, evade detection, and plan post-compromise activity. Specific activities include:
Iranian Threat Actors
Iranian threat actors were the heaviest users of Gemini, employing it for reconnaissance on defense organizations and international experts, research into publicly known vulnerabilities, development of phishing campaigns, and content creation for influence operations. They also used Gemini for translation and for technical explanations related to cybersecurity and military technologies such as unmanned aerial vehicles (UAVs) and missile defense systems.
Chinese-Backed Threat Actors
Chinese threat actors used Gemini mainly for reconnaissance on U.S. military and government organizations. Their other activities included vulnerability research, scripting for lateral movement and privilege escalation, and post-compromise tasks such as evading detection and maintaining persistence in compromised networks. Additionally, they investigated methods for accessing Microsoft Exchange using password hashes and for reverse-engineering security tools like Carbon Black EDR.
North Korean Activities
North Korean APTs turned to Gemini for support across multiple phases of the attack lifecycle, including researching free hosting providers, performing reconnaissance on targets, and assisting with malware development and evasion techniques. A notable portion of their activity involved using Gemini to draft job applications and proposals aimed at securing employment at Western companies under false identities, in support of the regime’s clandestine IT worker scheme.
Russian Threat Actors
Russian threat actors engaged with Gemini only minimally, mostly seeking help with scripting, translation, and payload crafting. Their activity included rewriting publicly available malware in different programming languages, adding encryption functionality to malicious code, and understanding how specific pieces of public malware operate. This limited usage may indicate that Russian actors prefer domestically developed AI models, or that they are avoiding Western platforms for operational security reasons.
Google also noted cases where threat actors attempted to use public jailbreaks against Gemini or rephrased their prompts to bypass the platform’s security measures; these attempts were reportedly unsuccessful. OpenAI, the creator of ChatGPT, made a similar disclosure recently, so Google’s report further confirms the widespread misuse of generative AI tools by threat actors of all skill levels.
While jailbreaks and guardrail bypasses remain a concern for mainstream AI products, the market is gradually filling with AI models that lack adequate protections against misuse, and some of these easily bypassed models are unfortunately gaining popularity. Recent assessments by cybersecurity intelligence firm KELA highlighted the lax security measures of DeepSeek R1 and Alibaba’s Qwen 2.5, which are vulnerable to prompt injection attacks that could facilitate malicious use. Additionally, Unit 42 researchers demonstrated effective jailbreaking techniques against DeepSeek R1 and V3, showing that the models are easy to abuse for malicious purposes.