Requires the Secretary of Defense to develop a cybersecurity policy for AI/ML systems no later than 180 days after enactment. The policy must address potential security risks, implement methods to mitigate those risks, and establish standard governance practices. Also requires a comprehensive review of the effectiveness of the AI/ML policies and a report on the threats and cybersecurity measures by August 31, 2026.
Requires the Secretary of Defense to develop a cybersecurity policy for AI/ML systems no later than 180 days after the date of enactment.
Addresses potential security risks specific to AI, such as model serialization attacks, model tampering, data leakage, adversarial prompt injection, model extraction, model jailbreaks, and supply chain attacks.
Implements methods to mitigate those risks using industry-recognized frameworks to ensure AI/ML systems are trustworthy.
Requires training for the workforce of the Department of Defense to ensure personnel are prepared to identify and mitigate vulnerabilities that are specific to AI/ML.
Establishes standard policy for governance, testing, and auditing of AI systems.
Requires the Secretary of Defense to submit a comprehensive review and report of the effectiveness of the AI/ML cybersecurity practices, including an identification of gaps in the existing security measures, by August 31, 2026.
Key facts
🏛️ This document has been enacted by the United States Congress.
🎯 This document primarily applies to the government, rather than the private sector.
📜 This document's name is National Defense Authorization Act for Fiscal Year 2026, Section 1512 ("Artificial intelligence and machine learning security in the Department of Defense").
AGORA also tracks this document under the name FY2026 NDAA, Section 1512 ("Artificial intelligence and machine learning security in the Department of Defense"). It is part of FY2026 NDAA.
Full text
This is an unofficial copy. The document has been
archived and reformatted in plaintext for AGORA. Footnotes, tables, and
similar material may be omitted. For the official text, visit the original source.
SEC. 1512. ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING SECURITY IN THE
DEPARTMENT OF DEFENSE.
(a) Cybersecurity Policy for Artificial Intelligence and Machine
Learning Use.--Not later than 180 days after the date of enactment of
this Act, the Secretary of Defense, in consultation with other
appropriate Federal agencies, shall develop and implement a Department
of Defense-wide policy for the cybersecurity and associated governance
of artificial intelligence and machine learning systems and
applications, as well as the models for artificial intelligence and
machine learning used in national defense applications.
Requires the Secretary of Defense to develop and implement a cybersecurity policy for AI and ML systems.
(b) Policy Elements.--The policy required under subsection (a)
shall address the following:
(1) Protection against security threats specific to artificial
intelligence and machine learning, including model serialization
attacks, model tampering, data leakage, adversarial prompt
injection, model extraction, model jailbreaks, and supply chain
attacks.
(2) Use of cybersecurity measures throughout the life cycle of
systems using artificial intelligence or machine learning.
(3) Adoption of industry-recognized frameworks to guide the
development and implementation of artificial intelligence and
machine learning security best practices.
(4) Standards for governance, testing, auditing, and monitoring
of systems using artificial intelligence and machine learning to
ensure the integrity and resilience of such systems against
corruption and unauthorized manipulation.
(5) Training requirements for the workforce of the Department
of Defense to ensure personnel are prepared to identify and
mitigate vulnerabilities that are specific to artificial
intelligence and machine learning.
Specifies AI security measures, including threat protection, cybersecurity, frameworks, governance standards, and workforce training.
(c) Review and Report.--
(1) Review.--The Secretary of Defense shall conduct a
comprehensive review to identify and assess the effectiveness of
the artificial intelligence and machine learning cybersecurity and
associated governance practices of the Department of Defense.
(2) Report.--
(A) In general.--Not later than August 31, 2026, the
Secretary of Defense shall submit to the Committees on Armed
Services of the House of Representatives and the Senate a
report on the findings of the review conducted under paragraph
(1).
(B) Contents.--The report required under subparagraph (A)
shall include--
(i) an assessment of the current security practices for
artificial intelligence and machine learning across the
Department of Defense;
(ii) an assessment of the cybersecurity risks posed by
the use of authorized and unauthorized artificial
intelligence software, including models developed by
companies headquartered in or operating from foreign
countries of concern, by the Department;
(iii) an identification of gaps in the existing
security measures of the Department related to threats
specific to the use of artificial intelligence and machine
learning;
Requires the Secretary of Defense to review and report on AI/ML cybersecurity and governance practices.
(iv) an analysis of the potential of security
management, access, and runtime capabilities for artificial
intelligence in the commercial sector for use by the
Department to defend systems using artificial intelligence
from threats, minimize data exposure resulting from the use
of such systems, and maintain the trustworthiness of
applications of the Department that use artificial
intelligence;
(v) an evaluation of the alignment of the policies of
the Department with industry frameworks;
(vi) recommended actions to enhance the security,
integrity, and governance of artificial intelligence and
machine learning models used by the Department; and
(vii) an identification of any additional authorities,
resources, or legislative actions required for the
Department to effectively implement artificial intelligence
and machine learning model security policy required by
subsection (a).
Analyzes AI security management for defense, evaluates policy alignment, recommends security enhancements, identifies additional legislative needs.
(d) Definitions.--In this section:
(1) The terms ``artificial intelligence'' and ``machine
learning'' have the meanings given such terms, respectively, in
section 5001 of the National Artificial Intelligence Initiative Act
of 2020 (15 U.S.C. 9401).
References the definitions of "artificial intelligence" and "machine learning" in the National Artificial Intelligence Initiative Act of 2020.