There is growing regulatory and public scrutiny of how AI companies interact with vulnerable users, especially minors, which may drive stronger legal safety standards for chatbots used in emotional and mental-health contexts.
November 07, 2025
high
trend
Ongoing policy and legal scrutiny of AI impacts on vulnerable populations and potential future regulation.
The proposed GUARD Act would require AI companies to verify user age using reasonable age-verification measures (for example, a government ID) rather than relying on self-reported birthdates.
high
policy
Policy proposal intended to restrict minor access to certain AI chatbots by enforcing stronger age verification.
The proposed GUARD Act would create new criminal and civil penalties for companies that knowingly provide minors with chatbots that solicit or facilitate sexual content, self-harm, or violence.
high
policy
Liability and enforcement provisions aimed at preventing harm to minors from certain chatbot behaviors.