Q&A with Sara Gerke on Illinois being first state to ban AI mental health therapists

Illinois recently became the first state in the nation to ban AI from acting as a therapist without oversight from a licensed clinician. The move has sparked national attention and raised important questions about patient safety, data privacy, and how we regulate emerging technologies in health care.

To help us unpack the ethical and legal implications, we’re joined by Sara Gerke, a faculty affiliate at IGPA and associate professor of law at the University of Illinois Urbana-Champaign, whose research focuses on the ethical and legal challenges of artificial intelligence and big data in health care.

Why do you think Illinois lawmakers felt it was necessary to ban AI from acting as therapists without clinician oversight?

Illinois lawmakers felt the need to act because AI chatbots were being marketed as virtual therapists without any medical training or regulation. People were turning to them since they’re cheap and available 24/7, but that raised real concerns, especially when vulnerable individuals relied on them for serious mental health needs.

It takes a long time to train and certify as a licensed therapist. The big issue is these tools aren’t vetted like licensed therapists are. No one knows how they’re trained, if they’re safe, or if they work as intended. So, Illinois lawmakers stepped in to protect patients and ensure only licensed professionals can provide therapy, with AI playing a role only under their supervision. It’s really about closing that gap between innovation and patient safety.

From an ethical and safety standpoint, what makes unsupervised AI use in mental health care potentially problematic?

The problem with unsupervised AI in mental health is that chatbots are designed to keep conversations flowing, not to ensure accuracy or safety. They can “hallucinate” information, sometimes giving dangerous advice like telling a fictional recovering addict to use a small amount of meth to cope.

They also mirror users to stay engaged, which can unintentionally reinforce harmful thinking. There’s also no proper legal protection at the federal level. The FDA only regulates “medical devices,” not health chatbots that present themselves as “wellness” tools. The U.S. also does not have a federal privacy law that comprehensively protects the sensitive data collected by chatbots. AI has potential because it’s accessible and might help people open up more than they would to a human. But without safeguards and oversight, the risks outweigh the benefits.

Do you see the Wellness and Oversight for Psychological Resources (WOPR) Act as an effective step for protecting patients, or are there areas where it could be strengthened?

I think it’s important to recognize that Illinois is the first state to take this bold step, which could set off a ripple effect in other states. So yes, it’s a meaningful move to put this kind of ban in place, but the real question is enforcement. $10,000 as a maximum fine may not be enough to deter companies, especially compared to the much tougher penalties we see in the European Union, such as under the General Data Protection Regulation or the new AI Act. Still, it appears that some apps have already pulled out of Illinois, which shows the law is having an effect.

That said, the Act doesn’t close every gap. It stops companies from marketing AI as therapists, but patients can still turn to tools like ChatGPT for therapy-like conversations. So, while it’s an important first step, it won’t fully prevent potential harm.

What challenges or changes might AI companies face in complying with this law, and could Illinois’ approach influence regulation in other states?

The biggest challenge right now is that there’s no consistent federal framework, so states are all over the place with their own AI laws. Companies often just follow the strictest rules, such as for privacy compliance. But Illinois’ ban on AI therapists could push some to skip the state entirely or shift their focus to working with licensed professionals instead of marketing directly to consumers.

As mentioned, these chatbots are basically unregulated at the federal level as well. The FDA doesn’t review them since many are marketed as “wellness” products, and there’s still no clear way to check generative AI for safety or effectiveness. Privacy protections are also a patchwork, varying from state to state.

So, Illinois’ approach makes sense for keeping patients safe. It protects people but could slow down AI innovation that might actually be helpful if done right.

Gerke is open to further comment on this topic. She can be reached at gerke@illinois.edu. 

September 16, 2025