Release date: March 8, 2023
Comments due:
Email comments to:
Author(s): Alina Oprea (Northeastern University), Apostol Vassilev (NIST)
Notice
This NIST report on artificial intelligence (AI) develops a taxonomy of attacks and defenses and defines the terminology in the field of adversarial machine learning (AML). Taken together, the taxonomy and terminology are intended to inform other standards and future best practices for assessing and managing the security of AI systems by providing a common language for understanding the rapidly evolving AML landscape. Future updates to the report are likely to be released as attacks, defenses, and terminology evolve.
In particular, NIST is interested in comments and recommendations on:
- What are the latest attacks threatening the existing landscape of AI models?
- What are the latest mitigations that will stand the test of time?
- What are the latest trends in AI technologies that will transform industry and society? What potential vulnerabilities do they have? Which promising countermeasures can be developed for them?
- Is there new terminology that needs to be standardized?
NIST intends to keep the document open for comment for an extended period of time to engage with stakeholders and invite input on an updated taxonomy that serves the needs of the public.
This NIST AI report develops a taxonomy of concepts and defines terminology in the field of adversarial machine learning (AML). The taxonomy is based on a review of the AML literature and is arranged in a conceptual hierarchy that includes key types of ML methods, attack stages in the life cycle, attacker goals and objectives, and attacker capabilities and knowledge of the learning process. The report also provides methods for mitigating and managing the consequences of attacks and points out relevant open challenges to consider in the life cycle of AI systems. The terminology used in the report is consistent with the AML literature and is supplemented by a glossary that defines key terms related to the security of AI systems in order to assist non-expert readers. Taken together, the taxonomy and terminology are intended to inform other standards and future best practices for assessing and managing the security of AI systems by providing a common language for understanding the rapidly evolving AML landscape.
Keywords: artificial intelligence; machine learning; attack taxonomy; evasion; data poisoning; privacy violation; attack mitigation; data modality; Trojan attack; backdoor attack; chatbot