As artificial intelligence (AI) transforms enterprise operations, the security challenges associated with its adoption are escalating. Organizations are encountering heightened risks, including AI model poisoning, deepfake-driven phishing attacks, data privacy issues, and compliance hurdles. As a result, developing trustworthy, secure, and robust AI systems has become a primary focus for businesses across the globe.
In an exclusive interview with DataQuest, Shailaja Shankar, Senior Vice President of Engineering at Cisco Security, provides insights into Cisco’s commitment to security in AI innovation. She elaborates on the company’s Responsible AI Framework, the necessity for proactive security initiatives, and how Cisco’s AI Defense solutions help organizations protect their AI models and data from evolving cybersecurity threats. She also stresses the significance of industry-wide AI security frameworks and offers advice for enterprises eager to adopt AI while adhering to security best practices.
Understanding AI Security Challenges
Organizations are enthusiastic about using AI to enhance efficiency, automation, and speed in their core processes. However, many lack a clearly defined AI strategy or governance framework to align leadership, prepare the culture, and navigate substantial technical and operational challenges. The evolving threat landscape includes risks such as AI model integrity compromise and data poisoning, alongside more traditional cyber threats like deepfake-enabled phishing, insider risks from compromised identities, and rapidly adapting malware.
Cisco’s Approach to Secure AI Innovation
Cisco’s definition of “secure AI innovation” encompasses a commitment to safeguarding AI technologies, embedding advanced security measures, and fostering a culture of responsible innovation. Central to this initiative is the Responsible AI Framework, founded on six principles: transparency, fairness, accountability, privacy, security, and reliability. While Cisco values rapid innovation, it prioritizes adherence to this framework, ensuring security is not compromised in favor of speed.
Proactive Security in AI Development
Cisco applies its Responsible AI Framework throughout the secure software development lifecycle, whether creating products for customers, using third-party models, or deploying AI tools internally. This encompasses implementing security measures early in the design phase to identify and mitigate potential vulnerabilities related to data privacy, system integrity, and other risks, ensuring security is inherently integrated from the start.
AI Defense Solutions for Resilience
Cisco adopts a platform-based approach by embedding AI defense capabilities across its portfolio to protect customers throughout the entire lifecycle of generative AI application development. The AI Defense suite safeguards training data from tampering and poisoning, preserving the integrity of the data used to build models. Protection extends into runtime security, where user queries are scanned for safety and privacy risks before they reach the model.
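To make the runtime-security idea concrete, the sketch below shows one simple way a query-scanning gate could work in principle. This is not Cisco's AI Defense implementation; the pattern lists, keywords, and function names are illustrative assumptions only, standing in for the far more sophisticated detection a production system would use.

```python
import re

# Hypothetical sketch of runtime query scanning, NOT Cisco's actual product logic:
# flag privacy risks (PII) and unsafe instructions in a user query before it
# is forwarded to a generative AI model.

# Illustrative patterns for common PII; a real system would use broader detectors.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

# Illustrative phrases associated with prompt-injection attempts.
UNSAFE_PHRASES = ("ignore previous instructions", "disable safety")

def scan_query(query: str) -> dict:
    """Return a verdict on a user query: which PII types and unsafe phrases it contains."""
    pii_found = [name for name, pattern in PII_PATTERNS.items() if pattern.search(query)]
    lowered = query.lower()
    unsafe_found = [phrase for phrase in UNSAFE_PHRASES if phrase in lowered]
    return {
        "allowed": not (pii_found or unsafe_found),  # block if anything was flagged
        "pii": pii_found,
        "unsafe": unsafe_found,
    }
```

In practice, such a gate would sit in front of the model endpoint: queries that come back with `allowed` set to `False` are blocked or redacted, while clean queries pass through unchanged.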
The Need for Standardized AI Security Frameworks
There is an evident need for standardized frameworks governing AI security to create trust and ensure the responsible, secure, and ethical development of AI systems. Cisco actively participates in establishing global standards, collaborating with regulatory bodies and industry peers to develop guidelines. The AI Security Incident Collaboration Playbook, developed with the U.S. Cybersecurity & Infrastructure Security Agency and other stakeholders, aims to facilitate coordinated response efforts to AI security incidents.
Addressing Bias and Ensuring Accountability
AI accountability is essential and is incorporated into Cisco’s Responsible AI Framework. The focus on transparency, fairness, and accountability means actively working to eliminate bias in algorithms and training data. Measures such as documenting AI use cases, conducting impact assessments, and maintaining customer feedback mechanisms allow Cisco to remain responsive to concerns while ensuring ethical AI-driven decisions throughout the lifespan of its AI solutions.
Advice for Enterprises Adopting AI Securely
For enterprises looking to incorporate AI while maintaining security compliance, it is crucial to first establish a robust AI strategy and governance framework. Organizations should invest in AI security to understand and mitigate risks at both the user-interaction and application levels. The Cisco AI Readiness Index reveals a trend of declining readiness despite rising pressure to implement AI, underscoring the importance of addressing security concerns to enable successful adoption.