NATIONAL HARBOR, Md. — Artificial intelligence is sharpening hackers’ capabilities, helping with tasks such as writing malware and generating phishing messages. But a cybersecurity expert speaking at a recent industry event said generative AI’s impact so far has clear limits.
Speaking at Gartner’s Security and Risk Management Summit, Peter Firstbrook, a distinguished VP analyst at the firm, said that while attackers are using generative AI to streamline social engineering and automate parts of their attacks, the technology has not introduced fundamentally new attack methods.
Experts anticipate that AI could transform how attackers generate customized intrusion tools, enabling even inexperienced hackers to quickly create malware that can steal data, record computer activity, or erase hard drives.
Firstbrook called AI code assistants a “killer app” for generative AI, citing the substantial productivity gains developers get from them.
In a September report, HP researchers said hackers had used AI to help develop a remote access Trojan. “It would be difficult to believe that the attackers are not going to take advantage of using Generative AI to create new malware,” Firstbrook said. “We are beginning to see this.”
Attackers are also using AI to craft deceptive open-source utilities, tricking developers into unwittingly building malicious code into legitimate applications. If developers aren’t vigilant, Firstbrook warned, they risk pulling in compromised utilities that backdoor their code before it ever reaches production.
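One basic safeguard along these lines, offered here as an illustrative sketch rather than anything Firstbrook prescribed, is to verify a downloaded utility against a checksum its maintainers publish out of band before running or bundling it. The expected hash below is a hypothetical placeholder.

```python
import hashlib
import sys

# Hypothetical known-good SHA-256 for the utility, as published by its
# maintainers (e.g., on the project's release page). Placeholder value.
EXPECTED_SHA256 = "replace-with-the-published-release-checksum"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large downloads don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    path = sys.argv[1]
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        # Mismatch: this artifact is not the one the maintainers published.
        sys.exit(f"checksum mismatch for {path}: got {actual}")
    print(f"{path}: checksum OK")
```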
Deepfakes Still Limited in Impact
While AI’s use in traditional phishing schemes is growing, its measurable impact so far appears modest. In a recent Gartner survey, 28% of organizations said they had experienced a deepfake audio attack, 21% a deepfake video attack, and 19% a deepfake media attack that circumvented biometric protections. Yet only 5% of organizations reported deepfake attacks that resulted in the theft of money or intellectual property.
Still, “this is a big new area,” Firstbrook noted. Analysts worry that AI could make certain attacks more profitable by allowing criminals to run them at much higher volume. “If they can automate the full spectrum of the attack, then they can move a lot quicker,” Firstbrook said.
For now, concerns that AI will spawn entirely new attack techniques look overstated, as researchers have yet to observe any. Genuinely new attack techniques emerge only once or twice a year, Firstbrook said, citing data from the MITRE ATT&CK framework.