Bipartisan Framework for U.S. AI Act

Proposed 2023-09-07

Summary

Establishes a licensing regime for AI developers, requiring risk management and compliance with oversight audits. Holds AI companies accountable for harms, including privacy violations and deepfake creation. Limits AI technology exports to adversaries. Demands transparency and consumer protection, especially for children.

  • This summary is awaiting validation (peer review by a second AGORA editor).
  • This document has not been enacted or otherwise finalized and is subject to change. This summary is based on a copy of the document collected 2024-09-18; refer to the official source for the most current version.

Key facts

🏛️ This document has been proposed by an authority not tracked by name in AGORA but has not yet been enacted. For authoritative text and metadata, visit the official source.

📜 This document's name is Bipartisan Framework for U.S. AI Act.

Themes

AI risks, applications, governance strategies, and other themes addressed in AGORA documents.
  • Thematic tags for this document are awaiting validation (peer review by a second AGORA editor).

Governance strategies (10)

Incentives for compliance (1)

Full text

  • This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
  • This text may be out of date. According to the latest data in AGORA, this document has been proposed, but is not yet enacted or otherwise finalized. This text was collected 2024-09-18 and may have been revised in the meantime. Visit the official source for authoritative text.
Establish a Licensing Regime Administered by an Independent Oversight Body: Companies developing sophisticated general-purpose A.I. models (e.g., GPT-4) or models used in high-risk situations (e.g., facial recognition) should be required to register with an independent oversight body. Licensing requirements should include the registration of information about A.I. models and be conditioned on developers maintaining risk management, pre-deployment testing, data governance, and adverse incident reporting programs. The oversight body should have the authority to conduct audits of companies seeking licenses and cooperate with other enforcers, including considering vesting concurrent enforcement authority in state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of A.I., such as effects on employment. Personnel must be subject to strong conflict of interest rules to mitigate capture and revolving door concerns.
Ensure Legal Accountability for Harms: Congress should ensure that A.I. companies can be held liable through oversight body enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or otherwise cause cognizable harms. Where existing laws are insufficient to address new harms created by A.I., Congress should ensure that enforcers and victims can take companies and perpetrators to court, including clarifying that Section 230 does not apply to A.I. In particular, Congress must take steps to directly prohibit harms that are already emerging from A.I., such as non-consensual explicit deepfake imagery of real people, production of child sexual abuse material from generative A.I., and election interference.
Defend National Security and International Competition: Congress should utilize export controls, sanctions, and other legal restrictions to limit the transfer of advanced A.I. models, hardware and related equipment, and other technologies to China, Russia, and other adversary nations, as well as countries engaged in gross human rights violations.
Promote Transparency: Congress should promote responsibility, due diligence, and consumer redress by requiring transparency from the companies developing and deploying A.I. systems. Developers should be required to disclose essential information about the training data, limitations, accuracy, and safety of A.I. models to users and companies deploying systems, including through simple, comprehensible disclosures, and to provide independent researchers access to data necessary to evaluate A.I. model performance. Users should have a right to an affirmative notice that they are interacting with an A.I. model or system. A.I. system providers should be required to watermark or otherwise provide technical disclosures of A.I.-generated deepfakes. The new oversight body should establish a public database and reporting mechanism so that consumers and researchers have easy access to A.I. model and system information, including when significant adverse incidents occur or failures in A.I. cause harms.
Protect Consumers and Kids: Companies deploying A.I. in high-risk or consequential situations should be required to implement safety brakes, including giving notice when A.I. is being used to make decisions, particularly adverse decisions, and providing the right to human review. Consumers should have control over how their personal data is used in A.I. systems, and strict limits should be imposed on generative A.I. involving kids.