Requires the National Security Agency Director to develop AI security guidance defending against theft or sabotage by nation-state adversaries, identify vulnerabilities, and collaborate with external entities. Requires publishing the guidance, which may be updated from time to time, at classified or unclassified levels.
Amends the Intelligence Authorization Act for Fiscal Year 2025 to require the National Security Agency Director to develop security guidance to defend AI technologies from theft by nation-state adversaries.
Requires identification of vulnerabilities in AI technologies, focusing on cybersecurity risks and security challenges unique to protecting such technologies from theft or sabotage by nation-state adversaries.
Requires identification of supply chain, development, or product lifecycle elements that, if accessed by adversaries, would either accelerate their AI progress or provide opportunities to compromise the confidentiality, integrity, or availability of AI systems or associated supply chains.
Requires identification of strategies enabling AI technologies to identify, protect against, detect, respond to, and recover from cyber threats.
Permits the NSA Director to collaborate, on a voluntary basis, with other U.S. government departments and agencies, research entities, and private sector entities on AI model safety and security.
Permits the NSA Director to provide computing resources the Director deems appropriate in support of such collaboration.
Requires the NSA Director to publish, and permits the Director to update from time to time, AI security guidance to be shared with relevant public and private sector entities at unclassified or classified levels.