Defines "companion chatbot" and requires operators to notify users that they are interacting with AI, maintain protocols to prevent companion chatbots from producing suicidal ideation, suicide, or self-harm content, annually report to the Office of Suicide Prevention on crisis notifications, and establish additional protections for minors. Provides civil remedies for violations.
Defines key terms including companion chatbot, companion chatbot platform, and operator for purposes of Chapter 22.6.
Requires operators to issue a clear and conspicuous notification that the companion chatbot is artificially generated whenever a reasonable person interacting with it would otherwise be misled into believing they are interacting with a human.
Requires operators to maintain a protocol for preventing companion chatbots from producing suicidal ideation, suicide, or self-harm content, including by referring users to crisis service providers such as suicide hotlines or crisis text lines, and to publish details of the protocol on their website.
Requires operators, for users they know to be minors, to disclose that the user is interacting with AI, provide reminders at least every three hours to take a break, and institute measures to prevent the chatbot from producing sexually explicit content.
Requires operators to annually report to the Office of Suicide Prevention on crisis referrals, suicidal ideation detection and response protocols, and evidence-based measurement methods, with reports posted publicly.
Requires operators to disclose that companion chatbots may not be suitable for some minors on all platforms through which users access the chatbot.
Establishes that a person who suffers injury as a result of a violation of this chapter may bring a civil action seeking injunctive relief, damages of $1,000 per violation or actual damages, and reasonable attorney's fees.