Urges agencies to address AI bias risks; requires them to employ experts to develop bias-mitigation frameworks; and mandates biennial reports to Congress, over an eight-year period, on efforts to ensure AI safety, testing for bias, and resource needs for effective bias correction.
Require international affairs agencies to employ experts, including technologists, social scientists, and legal experts, to develop a risk-mitigation framework for trustworthy AI systems, addressing bias in AI training data and applications.
Mandate reports from each agency to appropriate congressional committees, describing efforts to ensure safe, secure, and trustworthy AI development and use, including testing and correcting for bias, starting one year after the enactment of the Act and every two years for the following eight years.
Express the sense of Congress that agencies should implement measures addressing AI bias to prevent negative or discriminatory outcomes in agency operations.
This machine-generated summary is awaiting review by an AGORA editor. Use with caution.
Key facts
🏛️ This document was proposed and/or enacted by the United States Congress but is now defunct.
For authoritative text and metadata, visit the official source.
📜 This document's name is American Foreign Affairs Talent Expansion Act: Diversity in Diplomacy and Development, Section 112 (Mitigating bias in artificial intelligence use.).
AGORA also tracks this document under the name American FATE Act, Sec 112 ("Mitigating bias in artificial intelligence use."). It is part of American FATE Act.
Some AGORA documents are "split off" from longer documents that mix AI and non-AI content, such as omnibus authorization or appropriations laws in the United States Congress.
Themes
Thematic tags are in progress.
Full text
This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
SEC. 112. Mitigating bias in artificial intelligence use.
(a) Sense of Congress.—It is the sense of Congress that, with the integration of artificial intelligence into agency work and operations, measures should be taken to address bias in artificial intelligence models to reduce the likelihood of negative results or discriminatory outcomes.
(b) Experts and technologists.—The head of each international affairs agency shall employ experts, including technologists, social scientists, and legal experts, and fellows from established programs, to support the development of a risk-mitigation framework that promotes trustworthy artificial intelligence systems, including testing and correcting for racial, ethnic, gender, age, national origin, geographic, and other bias in artificial intelligence training data and applications.
Mandates employing experts to develop frameworks mitigating bias in AI, ensuring trustworthy systems across various attributes.
(c) Reports.—Not later than 1 year after the date of the enactment of this Act, and every 2 years thereafter for the following 8 years, the head of each agency shall submit a report to the appropriate congressional committees that—
(1) describes the agency's efforts to support the safe, secure, and trustworthy development and use of artificial intelligence; and
(2) includes agency efforts to test and correct for any bias in artificial intelligence training data and applications, and any resources needed to improve the effectiveness of such efforts.
Requires agencies to report biennially on AI development safety and bias correction efforts to Congress.