Artificial Intelligence (AI) holds significant potential for positive impact, especially in areas such as medical research. However, its application also poses substantial risks.
The notion of a villainous character using generative AI to commit cybercrime may sound like fiction, but it reflects a growing reality. Cybersecurity professionals are already working tirelessly to fend off hackers who employ generative AI for illicit ends, from breaking into computer systems to stealing sensitive data, and the trend is expected to escalate as AI technology rapidly evolves.
AI-Produced Malware
The next time a suspicious pop-up appears, a quick Ctrl-Alt-Delete may be in order. Cybercriminals are increasingly using AI tools to develop malware, and it is turning up in web browsers. Compared with manually written malware, AI-generated malware tends to be produced faster, is more targeted, and is better at slipping past security systems, and experts can often spot telltale signs of AI generation by analyzing the code, according to a study published in the journal Artificial Intelligence Review.
For instance, HP’s threat research team uncovered malicious code embedded in browser extensions that let hackers hijack browser sessions and redirect users to websites pushing fake PDF tools. The team also identified SVG images containing code capable of triggering infostealer malware, with telltale signs that the code had been generated by AI.
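As a rough illustration of how a defender might triage that kind of finding, the sketch below shows a minimal, purely hypothetical check for active content hiding inside an SVG file. SVG is an XML format, so it can legally embed scripts and event handlers that run when the image is rendered; this heuristic is an assumption for illustration only, not HP’s actual detection method, and real scanners perform far deeper analysis.

```python
# Minimal, hypothetical triage heuristic: flag SVG files that carry active
# content. SVG is XML, so it can embed <script> elements or event-handler
# attributes (onload, onclick, ...) that execute JavaScript when the image
# is rendered in a browser. Real-world scanners go far beyond this check.
import xml.etree.ElementTree as ET

def svg_has_active_content(svg_text: str) -> bool:
    root = ET.fromstring(svg_text)
    for elem in root.iter():
        if elem.tag.endswith("script"):      # embedded <script> block
            return True
        if any(attr.lower().startswith("on") for attr in elem.attrib):
            return True                      # onload=..., onclick=... handlers
    return False

benign = '<svg xmlns="http://www.w3.org/2000/svg"><rect width="8" height="8"/></svg>'
flagged = '<svg xmlns="http://www.w3.org/2000/svg" onload="alert(1)"><rect/></svg>'
print(svg_has_active_content(benign), svg_has_active_content(flagged))  # False True
```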
Bypassing Security Measures
Writing malware with AI tools is one thing; keeping it undetected is another challenge entirely. Hackers are leveraging large language models (LLMs) to modify and obscure existing malware, making it harder for cybersecurity firms to identify and eradicate new threats. By blending new code into known malware or generating unique variants of it, attackers can evade traditional security systems that rely on recognizing known patterns of malicious behavior.
Researchers from Palo Alto Networks’ Unit 42 demonstrated how effective this technique can be by using LLMs to produce 10,000 variants of known malicious JavaScript that behaved the same as the originals. The rewritten samples routinely evaded detection by malware classifiers, showing how easily such rewriting can undermine machine-learning-based malware classification systems.
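To see why pattern-based detection struggles with this kind of rewriting, consider the minimal Python sketch below. It compares an original script with a functionally identical version whose identifiers and formatting have been changed, the sort of superficial rewrite an LLM can apply at scale. The snippets are harmless placeholders rather than real malware, and the example stands in for the general idea, not Unit 42’s actual methodology.

```python
import hashlib

# Two functionally identical JavaScript snippets: the second has only been
# reformatted and had its identifiers renamed. Both are harmless placeholders.
original = "function getData(url){return fetch(url).then(r=>r.json());}"
rewritten = (
    "function loadResource(target) {\n"
    "  return fetch(target).then(resp => resp.json());\n"
    "}"
)

# Exact-match signatures (file hashes, byte patterns) see two unrelated files,
# which is why thousands of trivially rewritten variants can slip past
# detection keyed to previously seen samples.
print(hashlib.sha256(original.encode()).hexdigest())
print(hashlib.sha256(rewritten.encode()).hexdigest())

# Even crude normalization (lowercase, strip whitespace) fails once identifiers
# change -- robust classifiers have to reason about structure or behavior,
# not surface text.
normalize = lambda s: "".join(s.lower().split())
print(normalize(original) == normalize(rewritten))  # False
```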
Data and Credential Theft
AI algorithms are also making cybercriminals more effective at hijacking user accounts and accessing confidential information. Techniques such as credential stuffing, password spraying, and brute-force attacks have become more efficient with AI. Predictive biometric algorithms can even help attackers infer what users are typing as they enter passwords, easing access to extensive databases of user information.
Hackers also use scanning algorithms to sweep networks for vulnerabilities, mapping out hosts and open ports to find exploitable gaps. Traditional brute-force attacks, which bombard a target with guess after guess, have historically had low success rates, but advances in password-guessing algorithms and the ability to automate attempts across many platforms at once are significantly improving those odds.
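Some back-of-the-envelope arithmetic shows why automation changes those odds. Even if a single guess against a single account almost never works, repeating it across enough accounts makes at least one hit likely; the per-account probability in the sketch below is an illustrative assumption, not a measured figure.

```python
# Illustration of why automating guesses across many accounts changes the
# economics of account attacks. The 1-in-10,000 per-account success rate
# is a purely illustrative assumption, not a measured figure.
p_single = 1 / 10_000
for accounts in (1, 1_000, 100_000):
    p_any = 1 - (1 - p_single) ** accounts
    print(f"{accounts:>7} accounts -> {p_any:.1%} chance of at least one success")
```

This is also why defenses such as lockouts, rate limiting, and multi-factor authentication focus on breaking the economics of repeated automated attempts rather than on any single guess.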
Enhanced Social Engineering and Phishing
Generative AI tools, including popular platforms like Gemini and ChatGPT, along with obscure versions on the dark web, are employed by hackers to create highly personalized phishing and social engineering attacks. By mimicking individual communication styles and utilizing data harvested from social media profiles, hackers generate tailored pitches targeting victims based on their unique interests and backgrounds.
Moreover, AI modeling lets hackers estimate how likely a given scam is to succeed, further improving their hit rate. Researchers have found that phishing emails written with AI tend to contain fewer grammatical errors and other red flags, making them more convincing. A study by Singapore’s Government Technology Agency found that recipients were more likely to click on AI-generated phishing messages than on those crafted by hand.
Advanced Impersonation Techniques
The emergence of generative AI has also enabled impersonation tactics that feel lifted from science fiction. Hackers are using AI to clone the voices, and even the likenesses, of real people during scams; the voice-based version is known as voice phishing, or vishing. In one notable 2024 incident, a finance employee was tricked into transferring $25 million to criminals who used deepfake technology to impersonate senior executives within the organization on a video call.
This isn’t the only form of AI impersonation to be concerned about. For a more comprehensive understanding of the various strategies utilized by AI impersonators, refer to our article “AI impersonators will wreak havoc in 2025. Here’s what to watch out for.”