The integration aims to extend CrowdStrike's existing protection for Nvidia Enterprise AI Factories, providing validated designs that help organizations deploy AI infrastructure on-premises. The enhancement lets enterprises run and scale LLM applications across hybrid and multicloud environments at every stage, from development through runtime and posture management. CrowdStrike says the partnership delivers comprehensive lifecycle protection for AI workloads and more than 100,000 LLMs, a significant development in the fast-changing AI landscape.
CrowdStrike Chief Business Officer Daniel Bernard emphasized the growing risks LLMs face when deployed in the cloud, including prompt injection, data leakage, and API abuse, threats that can occur even in the absence of a conventional security breach. Bernard noted that the partnership extends CrowdStrike's runtime protection to Nvidia's AI infrastructure, allowing organizations to secure LLMs wherever they are deployed, whether in the cloud or in production, using a unified platform that also protects workloads, identities, and endpoints.
LLMs Facing Heightened Cyber Threats
LLMs serve as crucial components in contemporary AI operations but are increasingly vulnerable to cyberattacks. According to experts from Wiz, a cloud cybersecurity firm recently acquired by Google, securing LLMs is a complex and ever-changing challenge. They highlighted that unlike traditional systems, LLMs exist within a rapidly evolving domain where attackers and defenders are continuously adapting.
Because LLMs process vast amounts of data that often originate from unknown sources, and because they interact with the world in flexible, unpredictable ways, they dramatically broaden the attack surface and open up a diverse array of potential attack vectors. Moreover, securing AI and machine learning requires specialized knowledge that is still maturing, so security operations must be updated continually to keep pace with rapid innovation.
Last year, the Open Worldwide Application Security Project (OWASP) published a list of the top risks facing LLMs, underscoring challenges like prompt injection attacks that evade safeguards, training data poisoning that compromises model accuracy, and the risk of accidental disclosure of sensitive information. Organizations also face challenges in vetting malicious content generated by LLMs, which can lead to disruptions in downstream systems.
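To make the prompt-injection risk above concrete, here is a deliberately naive sketch of the kind of input screening defenders sometimes layer in front of an LLM. The phrase list and function name are hypothetical illustrations, not part of any product mentioned here; real defenses (and the safeguards attackers evade) are far more sophisticated than keyword matching.

```python
# Naive, illustrative screen for prompt-injection phrases of the kind
# described in OWASP's LLM risk list. Hypothetical example only:
# production systems use classifiers and policy layers, not keyword lists.

INJECTION_MARKERS = [
    "ignore previous instructions",
    "disregard the system prompt",
    "reveal your system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input contains a known injection phrase."""
    lowered = user_input.lower()
    return any(marker in lowered for marker in INJECTION_MARKERS)

# A request trying to override the model's instructions is flagged;
# an ordinary question is not.
print(looks_like_injection("Please IGNORE previous instructions."))  # True
print(looks_like_injection("What is the capital of France?"))        # False
```

The weakness of this approach is exactly why the article frames the problem as open-ended: paraphrased or indirect injections (for example, instructions hidden in retrieved documents) sail past any fixed phrase list.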
AI vs. AI
The use of AI for protection against malicious actors employing similar technologies presents a continuous challenge. Bernard highlighted that adversaries are leveraging AI as well, accelerating their efforts and targeting new surfaces, particularly in cloud environments. He expressed a strong belief that AI-driven threats will surpass traditional security tools, emphasizing the need for evolving strategies to keep pace with these advanced dangers.
Expanding MSSP Services
Managed security service providers (MSSPs) have historically helped organizations implement outsourced cybersecurity, and this partnership gives them another layer of service to offer. Bernard stated, “MSSPs are on the front lines helping customers implement AI in their businesses and secure AI in the cloud, especially as internal teams struggle to keep up.”
As MSSPs already provide protection for endpoints, cloud workloads, and identities, this collaboration allows them to extend their coverage to include AI, all within a single platform that avoids the complications of separate tools. This integration promises a more efficient approach to making AI security a fundamental service for MSSPs.
Comprehensive Protection Features
Falcon Cloud Security includes a range of pre-deployment safeguards, including AI Security Posture Management (AI-SPM) and malware scanning for AI models. It also detects trojanized models and backdoors and identifies shadow AI, capabilities introduced into the platform earlier this year. With these features, CrowdStrike aims to provide threat intelligence that works seamlessly with Nvidia NeMo Safety workflows, securing foundational models as new AI applications are deployed.