Requires the Department of Homeland Security, acting through FEMA, to develop AI testing centers for civilian agencies that ensure protection of individual rights and democratic principles. Establishes an AI Incident Reporting Office, mandates biannual reports to Congress, and authorizes $20 billion for implementation.
Establishes federal civilian agency laboratories for AI testing and certification under the Secretary of Homeland Security, acting through FEMA.
Mandates development of AI training and testing centers to evaluate AI systems for federal use, prioritizing privacy, transparency, accountability, and protection of individual rights.
Prohibits automated decision-making that threatens due process and constitutional rights.
Requires real-world use cases to inform the evaluation of AI system outcomes in federal settings, while ensuring that no personal data is used.
Creates a digital repository of use cases so that federal agencies can train and test AI systems appropriately.
Establishes an Office of Artificial Intelligence Incident Reporting within the Department of Homeland Security to facilitate interagency collaboration and reporting on agencies' experiences with AI systems.
Requires DHS to submit biannual reports to Congress on implementation, challenges, and resource needs, and requires agencies to report adverse experiences with AI systems.
Authorizes $20 billion in appropriations to implement these measures.