IA in the Age of AI (A21a)
Information assurance (IA) for confidentiality, integrity, accountability, and privacy is achieved through third-party evaluation against internationally accepted security standards such as the Common Criteria. This talk addresses how IA methodology will evolve in the age of artificial intelligence (AI). With AI systems on a sustained, accelerating global expansion trajectory, the speaker approaches the question from two aspects: (1) augmenting the Security Functional Requirements (SFRs) to address adversarial AI applications, and (2) using AI tools to assist security evaluation activities.
For the first aspect, the speaker derives the needed SFRs from a thorough study of NIST publication AI 100-2 E2023, Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations, which provides a taxonomy of attacks and mitigations for both predictive and generative AI. In addition, the speaker covers Professor Scott Aaronson's recommendation of hash-based watermarking for AI safety. Aaronson is renowned for his work in quantum computing and has been researching AI alignment at OpenAI; he gave the keynote speech at atsec's Crypto Bootcamp in February 2024 and joined the panel discussion on cryptography and AI safety moderated by the speaker.
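The hash-based watermarking idea can be illustrated with a minimal sketch. The names (`prf_score`, `pick_token`, `detect`), the fixed context window, and the argmax selection rule are all illustrative assumptions, not Aaronson's actual scheme, which biases the model's sampling distribution pseudorandomly rather than picking a single token deterministically; the core idea of keying a pseudorandom function on a secret key and the recent context is the same.

```python
import hashlib

def prf_score(key: bytes, context: tuple, token: int) -> float:
    """Pseudorandom score in [0, 1) derived from a secret key, the
    recent context tokens, and the candidate token (an illustrative
    stand-in for the keyed PRF in the watermarking proposal)."""
    msg = key + repr((context, token)).encode()
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_token(key: bytes, context: tuple, candidates: list) -> int:
    # Watermarked generation (simplified): among the model's candidate
    # tokens, emit the one whose keyed PRF score is highest.
    return max(candidates, key=lambda t: prf_score(key, context, t))

def detect(key: bytes, tokens: list, window: int = 3) -> float:
    # Detection: average the PRF scores of emitted tokens given their
    # preceding context. Unwatermarked text averages about 0.5;
    # watermarked text averages well above that.
    scores = [prf_score(key, tuple(tokens[max(0, i - window):i]), tokens[i])
              for i in range(len(tokens))]
    return sum(scores) / len(scores)
```

Anyone holding the key can run `detect` over a suspect token sequence; no access to the model is needed, which is what makes the approach attractive for provenance checks in security evaluation.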
For the second aspect, the speaker delves into a rich body of machine learning (ML) literature and shares results from training a large language model (LLM) for code analysis and report review. A portion of this work will be presented at NIST's Formal Methods in Certification Process Workshop in July.
Regardless of whether this talk is accepted for ICCC 2024, the prepared work will contribute to the ISO/IEC JTC 1/SC 27/WG 3 PWI (Preliminary Work Item) on the evaluation of AI-based technology. If accepted, the talk will give the audience a glimpse of a possible new ISO standard, or an add-on to the existing ISO/IEC 15408 series, addressing AI-based technology.