Requires entities that decline access to important services (e.g., healthcare and financing) on the basis of an AI system's decision to evaluate the system for accuracy and discrimination, to disclose the system's use, and to enable affected individuals to appeal the decision to a human being.
Requires any entity in New York that uses AI or another automated process as the sole basis of a decision that denies access to financial services, insurance, housing, public accommodation, healthcare, or basic necessities to:
(a) Clearly and conspicuously disclose that the decision was made solely on the basis of an automated process; and
(b) Enable any affected individual to appeal the decision, including by giving them the right to have a human review it.
Requires the above entities, and additionally those that use an automated decision system to determine access to employment and education, to annually retain an outside auditor to conduct, and make publicly available, an impact assessment of the system that:
(a) Evaluates the system's objectives, design, and training data, as well as how the system was tested for accuracy and discrimination; and
(b) Assesses whether the system produces discriminatory results on the basis of race, color, ethnicity, religion, national origin, sex, gender, gender identity, sexual orientation, familial status, biometric information, lawful source of income, or disability.