Introduction

As AI advances rapidly and becomes woven into critical business operations and society at large, comprehensive security measures grow increasingly important. Amid the real concerns and challenges surrounding AI, leaders who seek clear, actionable guidance and stay informed amidst abundant information can make confident, strategic decisions about securing it. This proactive approach helps mitigate risks and ensures AI technologies’ safe and responsible deployment, ultimately fostering trust and innovation.

Effective governance ensures that AI systems are secure, ethical, and compliant with regulatory standards. As organizations increasingly rely on AI, they must adopt comprehensive governance strategies to manage risks, adhere to legal requirements, and uphold ethical principles. This second part of our series on governing AI systems focuses on the role of defensive frameworks within a broader governance strategy. We explore how leading organizations have developed detailed frameworks to enhance security for AI, and how to build ethical AI guidelines that keep AI operations responsible and transparent. Tune in as we continue to cover understanding AI environments, governing AI systems, strengthening AI systems, and staying up-to-date on AI developments over the next few weeks.

Step 1: Defensive Frameworks

As tools and techniques for attacking AI become more sophisticated, a methodical defensive approach is essential. Over the past two years, leading organizations have developed comprehensive frameworks to enhance security for AI. Familiarizing yourself with these frameworks is crucial as you build out your secure AI processes and procedures. The following frameworks provide valuable guidance for organizations aiming to safeguard their AI systems against evolving threats.

MITRE ATLAS

MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a comprehensive framework launched in 2021, detailing adversarial machine learning tactics, techniques, and case studies. It complements the MITRE ATT&CK framework and includes real-world attacks and red-teaming exercises to provide a complete picture of AI system vulnerabilities.

In 2023, MITRE ATLAS was significantly updated, adding 12 new techniques and 5 unique case studies, focusing on large language models (LLMs) and generative AI systems. Collaborations with Microsoft led to new tools like the Arsenal and Almanac plugins for enhanced threat emulation. The update also introduced 20 new mitigations based on case studies. ATLAS now includes 14 tactics, 82 techniques, 22 case studies, and 20 mitigations, with ongoing efforts to expand its resources. This community-driven approach ensures that ATLAS remains a critical resource for securing AI-enabled systems against evolving threats.
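
ATLAS also publishes its tactics and techniques as machine-readable data, which makes it practical to pull the framework into your own threat-modeling tooling. Below is a minimal Python sketch that assumes the data has been exported as a YAML file with top-level tactics and techniques lists; the actual schema in MITRE’s atlas-data repository may differ, so treat the field names as illustrative.

```python
# Hypothetical sketch: summarizing MITRE ATLAS data exported as YAML.
# Assumes a file "atlas.yaml" with top-level "tactics" and "techniques"
# lists, each entry carrying "id" and "name" fields -- the real schema
# in the mitre-atlas/atlas-data repository may differ.
import yaml  # PyYAML

def summarize_atlas(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        data = yaml.safe_load(f)

    tactics = data.get("tactics", [])
    techniques = data.get("techniques", [])
    print(f"{len(tactics)} tactics, {len(techniques)} techniques")

    # Group techniques under the tactics they support (assumes each
    # technique lists the IDs of its parent tactics).
    for tactic in tactics:
        related = [t["name"] for t in techniques
                   if tactic["id"] in t.get("tactics", [])]
        print(f"{tactic['name']}: {len(related)} technique(s)")

if __name__ == "__main__":
    summarize_atlas("atlas.yaml")
```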

NIST AI Risk Management Framework

Released in January 2023, the NIST AI RMF provides a conceptual framework for responsibly designing, developing, deploying, and using AI systems. It focuses on risk management through four functions: govern, map, measure, and manage.
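
To make the four functions concrete, here is a minimal sketch of a risk-register entry organized around govern, map, measure, and manage. The field names and example values are our own shorthand; the RMF describes outcomes and activities, not a data format.

```python
# Illustrative only: a risk-register entry shaped around the NIST AI
# RMF's four functions. The fields are our own shorthand, not part of
# the framework itself.
from dataclasses import dataclass

@dataclass
class AIRiskEntry:
    system: str    # AI system under review
    govern: str    # accountable policy or owner
    map: str       # context in which the risk arises
    measure: str   # metric or test used to assess the risk
    manage: str    # mitigation or acceptance decision
    status: str = "open"

register = [
    AIRiskEntry(
        system="support-chatbot",
        govern="CISO-approved LLM usage policy",
        map="prompt injection via customer-supplied text",
        measure="red-team pass rate on injection test suite",
        manage="input filtering plus human review of escalations",
    ),
]

for entry in register:
    print(f"[{entry.status}] {entry.system}: {entry.map}")
```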

Google Secure AI Framework (SAIF)

Introduced in June 2023, SAIF offers guidance on securing AI systems by adapting best practices from traditional software development. It emphasizes six core elements: expanding strong security foundations to the AI ecosystem, extending detection and response to cover AI threats, automating defenses, harmonizing platform-level controls, adapting controls for faster feedback loops, and contextualizing AI system risks within surrounding business processes.

OWASP Top 10

In 2023, OWASP released its Machine Learning Security Top 10, highlighting critical security risks in machine learning systems and providing guidance on preventing them. OWASP also published the Top 10 for Large Language Model Applications, which catalogs LLM-specific risks such as prompt injection, insecure output handling, and training data poisoning, along with practical security measures.
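
To ground this in practice, the sketch below shows one layer of defense against prompt injection, the top-ranked risk on OWASP’s LLM list: a heuristic pre-filter over user input. The patterns and handling are illustrative assumptions rather than OWASP-prescribed controls, and a real deployment would combine this with model-side and output-side safeguards.

```python
# A minimal sketch of one prompt-injection mitigation: a heuristic
# pre-filter over user input. The phrase list is an illustrative
# assumption, not an OWASP-prescribed control; treat a match as a
# signal for logging and review, not a complete defense.
import re

SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"reveal your (system )?prompt",
]

def looks_like_injection(user_input: str) -> bool:
    text = user_input.lower()
    return any(re.search(p, text) for p in SUSPECT_PATTERNS)

if looks_like_injection("Please ignore previous instructions and ..."):
    print("flagged for review")  # route to logging / human review
```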

Gartner AI Trust, Risk, and Security Management (AI TRiSM)

Gartner’s AI TRiSM framework addresses bias, privacy, explainability, and security in AI/ML systems, providing a roadmap for building trusted, reliable, and secure AI systems.

Databricks AI Security Framework (DAISF)

Released in February 2024, DAISF provides a comprehensive strategy to mitigate cyber risks in AI systems, with actionable recommendations across 12 components of AI systems.

IBM Framework for Securing Generative AI

IBM’s framework, released in January 2024, focuses on securing LLMs and generative AI solutions through five steps: securing the data, securing the model, securing the usage, securing the infrastructure, and establishing AI governance.

Step 2: Governance and Compliance

Ensuring compliance with relevant laws and regulations lays the groundwork for the ethical AI guidelines you will create in Step 3. Your AI systems must adhere to all applicable legal and regulatory requirements, such as GDPR, CCPA, and industry-specific standards. Compliance forms the backbone of your security for AI strategy and helps you avoid legal pitfalls.
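
One concrete technique that supports these requirements is data minimization: stripping or pseudonymizing direct identifiers before data reaches an AI pipeline. The sketch below illustrates the idea with a salted hash; the field names and salt handling are assumptions for demonstration, and no single technique guarantees compliance, so review any scheme with your legal team.

```python
# Hypothetical sketch: pseudonymizing direct identifiers before data
# reaches an AI pipeline, one common GDPR/CCPA-aligned minimization
# technique. Salt handling and field names are illustrative
# assumptions; this alone does not make a system compliant.
import hashlib
import os

# Keep the salt in a secrets manager, never in source control.
SALT = os.environ.get("PSEUDONYM_SALT", "change-me")

def pseudonymize(value: str) -> str:
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

record = {"email": "user@example.com", "ticket_text": "My login fails."}
safe_record = {**record, "email": pseudonymize(record["email"])}
print(safe_record)
```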

Who Should Be Responsible and In the Room:

  • Compliance and Legal Team: Ensures AI systems meet all relevant laws and regulations, providing legal guidance and support.
  • Chief Information Security Officer (CISO): Oversees the integration of compliance requirements into the overall security strategy.
  • AI Development Team: Integrates compliance requirements into the design and development of AI systems.
  • Data Privacy Officer (DPO): Ensures data protection practices comply with privacy laws such as GDPR and CCPA.
  • Chief Information Officer (CIO) & Chief Technology Officer (CTO): Provide oversight, resources, and strategic direction for compliance efforts.

Step 3: Ethical AI Guidelines

With compliance in place, implement ethical AI guidelines to steer AI development and usage responsibly and transparently. Start by forming an ethics committee that includes AI developers, data scientists, legal experts, ethicists, cybersecurity professionals, and, if needed, community representatives. This diverse group will oversee the creation and enforcement of the guidelines.

Identify core ethical principles such as fairness, transparency, accountability, privacy, and safety. Fairness ensures AI systems avoid biases and treat all users equitably. Transparency makes AI processes and decisions understandable to users and stakeholders. Accountability establishes clear lines of responsibility for AI outcomes. Privacy involves protecting user data through strong security measures and respecting user consent. Safety ensures AI systems operate securely and do not cause harm.
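
Several of these principles lend themselves to quantitative checks. As one example, the sketch below computes a simple demographic parity gap, the difference in positive-outcome rates between groups, which an ethics committee might track against an agreed threshold. The data and group labels are illustrative.

```python
# A minimal sketch of one fairness check: demographic parity gap,
# the spread in positive-outcome rates across groups. The data,
# group labels, and any alert threshold are illustrative.
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive rate per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"parity gap: {gap:.2f}")  # flag for review above an agreed threshold
```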

Consult internal and external stakeholders, including employees and customers, to gather insights. Draft the guidelines with a clear introduction, core ethical values, and specific measures for bias mitigation, data privacy, transparency, accountability, and safety. Circulate the draft for review, incorporating feedback to ensure the guidelines are comprehensive and practical.

Once finalized, conduct training sessions for all employees involved in AI development and deployment. Make the guidelines accessible and embed ethical considerations into every stage of the AI lifecycle. Establish a governance framework for ongoing oversight and regular audits to ensure compliance and address emerging ethical issues. Regularly update the guidelines to reflect new insights and encourage continuous feedback from stakeholders.
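
One practical way to embed these checks into the AI lifecycle is a release gate that blocks deployment until the required guideline artifacts exist. The sketch below is illustrative; the artifact names and file layout are assumptions, not a standard.

```python
# Illustrative sketch: a release gate that enforces guideline artifacts
# before an AI model ships. The required items and file layout are
# assumptions for demonstration, not a standard.
from pathlib import Path

REQUIRED_ARTIFACTS = {
    "bias_evaluation.md": "fairness review",
    "privacy_review.md": "data protection sign-off",
    "model_card.md": "transparency documentation",
}

def release_gate(release_dir: str) -> bool:
    missing = [desc for name, desc in REQUIRED_ARTIFACTS.items()
               if not (Path(release_dir) / name).exists()]
    for desc in missing:
        print(f"BLOCKED: missing {desc}")
    return not missing

if release_gate("releases/chatbot-v2"):
    print("all guideline artifacts present; release may proceed")
```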

Conclusion

Effective governance is essential for managing AI systems in an era of sophisticated threats and stringent regulatory requirements. By integrating comprehensive defensive frameworks such as MITRE ATLAS, NIST AI RMF, Google SAIF, OWASP Top 10, Gartner AI TRiSM, Databricks AI Security Framework, and IBM’s generative AI framework, organizations can enhance the security of their AI systems. However, governance goes beyond security; it encompasses ensuring compliance with laws and regulations, such as GDPR and CCPA, and embedding ethical principles into AI development and deployment. Forming a diverse ethics committee and establishing clear guidelines on fairness, transparency, accountability, privacy, and safety are crucial steps in this process. By embedding these principles into every stage of the AI lifecycle and maintaining ongoing oversight, organizations can build and sustain AI systems that are not only secure but also ethical and trustworthy. In the next installment, we will guide you through strengthening your AI systems.

Read the previous installment, Understanding AI Environments.