Minnesota lawmakers push broad AI limits on police, kids
Minnesota legislators are advancing a slate of artificial-intelligence bills that would directly affect how police, tech companies and insurers operate in the Twin Cities, including new limits on "reverse warrants" and children's access to chatbots.

In committee hearings Monday, Sen. Eric Lucero argued that reverse location and data warrants — where police use AI and bulk data to identify everyone in a given place at a given time — violate the intent of the Fourth Amendment, while law-enforcement officials countered that the warrants are essential for quickly finding suspects.

A separate bill led by Sen. Erin Maye Quade would bar companies from letting minors use conversational chatbots, following reports that some systems have steered young users toward self-harm, eating disorders and suicide. Industry lobbyists such as TechNet's Jarrett Catlin are pushing for narrower rules focused on harmful content and crisis-response protocols instead of an outright ban.

Other measures would prohibit insurers from quietly using AI to deny coverage, criminalize turning ordinary photos or video of Minnesotans into sexual or "deepfake" content, and add a constitutional amendment clarifying that AI systems themselves have no free-speech rights.

None of the proposals has reached a floor vote yet, but if they pass, Minneapolis–St. Paul police departments, schools, hospitals and tech-heavy employers will all have to rethink how they deploy AI tools in investigations, customer screening and kid-facing products.
📌 Key Facts
- Minnesota lawmakers are considering multiple AI-regulation bills, two of which were heard in committee Monday (March 2026).
- One bill targets law-enforcement use of AI-driven "reverse warrants," which Sen. Eric Lucero says conflict with Fourth Amendment warrant standards.
- Another bill, backed by Sen. Erin Maye Quade, would prevent companies from allowing children to access AI chatbots after reports of links to youth self‑harm and suicide.
- Tech industry lobbyist Jarrett Catlin of TechNet argues for narrower, harm-focused "companion chatbot" frameworks rather than blanket bans on kids' AI use.
- Additional bills would restrict AI use in insurance coverage decisions, outlaw creating sexually explicit content from ordinary images, and add a constitutional amendment stating AI has no free-speech rights.
📊 Relevant Data
- In 2020, law enforcement agencies served Google with more than 11,500 geofence warrants, a type of reverse warrant used in investigations. (Much Ado About Geofence Warrants — Harvard Law Review)
- Police spend 0.36% more time in neighborhoods for each percentage-point increase in Black residents, indicating neighborhood-level racial disparities in police presence that could be amplified by location-based tools like geofence warrants. (Smartphone Data Reveal Neighborhood-Level Racial Disparities in Police Stops — MIT Press)
- One in four teenagers uses AI chatbots for mental health support, highlighting the prevalence of such interactions among youth. (AI chatbots provide mental health support to 1 in 4 teenagers, study finds — EdSource)
- Potentially hundreds of thousands of ChatGPT users show signs of mental health distress weekly, based on data shared by OpenAI. (OpenAI shares data on ChatGPT users with suicidal thoughts — BBC)
- UnitedHealthcare has a 33% claim denial rate, which has increased with the use of AI in insurance decisions. (AI Is Fueling Insurance Denials—Patients Pay the Price — Fellow Health Partners)
- 96–98% of all deepfake content online consists of non-consensual intimate imagery, with 99–100% of victims being female. (Deepfake Statistics & Trends 2026 | Key Data & Insights — Keepnet Labs)