The GUARD Act is proposed legislation that would bar AI 'companion' chatbots from targeting individuals under 18, require age verification for chatbot use, mandate clear disclosure that chatbots are neither human nor licensed professionals, and impose criminal penalties on companies whose AI products engage in manipulative behavior with minors.
January 01, 2024
high
temporal
A legislative proposal outlining specific safeguards and liabilities for AI chatbot interactions with minors.
The proposed GUARD Act would require AI companies to verify user age using reasonable age-verification measures (for example, a government ID) rather than relying on self-reported birthdates.
high
policy
Policy proposal intended to restrict minor access to certain AI chatbots by enforcing stronger age verification.
The proposed GUARD Act would require companies to prohibit users under 18 from accessing AI companion chatbots.
high
policy
Age-based access restriction for conversational AI designated as 'AI companions'.
The proposed GUARD Act would require chatbots to clearly disclose in every conversation that they are not human and hold no professional credentials, such as medical, legal, or therapeutic qualifications.
high
policy
Disclosure requirements for conversational AI to prevent users, including minors, from mistaking bots for professionals or humans.
The proposed GUARD Act would create new criminal and civil penalties for companies that knowingly provide minors with chatbots that solicit or facilitate sexual content, self-harm, or violence.
high
policy
Liability and enforcement provisions aimed at preventing harm to minors from certain chatbot behaviors.