AI Biorisk Assessment Act

Proposed 2023-07-18 | Official source

Summary

Requires the Assistant Secretary for Preparedness and Response to conduct risk assessments and implement strategic initiatives that address threats to public health posed by AI, especially the use of AI to develop novel pathogens and bioweapons.

Key facts

🏛️ This document was proposed in the United States Congress but is now defunct. For authoritative text and metadata, visit the official source.

🎯 This document primarily applies to the government, rather than the private sector.

📜 This document's name is Artificial Intelligence and Biosecurity Risk Assessment Act. AGORA also tracks this document under the name AI Biorisk Assessment Act.

Themes

Full text

  • This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
A BILL

To require the Assistant Secretary for Preparedness and Response shall conduct risk assessments and implement strategic initiatives or activities to address threats to public health and national security due to technical advancements in artificial intelligence or other emerging technology fields.

Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,

SECTION 1. Short title.

This Act may be cited as the “Artificial Intelligence and Biosecurity Risk Assessment Act”.

SEC. 2. Regular assessment of emerging risks.

Section 2811 of the Public Health Service Act (42 U.S.C. 300hh–10) is amended by adding at the end the following:

“(h) Assessment of emerging risks.—In carrying out subsection (b)(4)(I), the Assistant Secretary for Preparedness and Response shall conduct risk assessments and implement strategic initiatives or activities to address whether technical advancements in artificial intelligence, such as open-source artificial intelligence models and large language models, can be used intentionally or unintentionally to develop novel pathogens, viruses, bioweapons, or chemical weapons. Such initiatives and activities may include—

“(1) regularly monitoring and researching potential global biological catastrophic risks in which biological agents could lead to sudden, extraordinary loss of life and sustained damage to national governments, international relationships, economies, societal stability, or global security; and

“(2) including in the National Health Security Strategy under section 2802 a summary of the risk assessment conducted under this subsection.”.