Advisories May 11, 2026

Health Care Advisory | Your AI Therapist May Need a Lawyer: Pennsylvania Brings Suit Against Chatbot Developer

Executive Summary

Our Health Care Group examines a lawsuit against Character.AI that highlights legal risks for AI platforms whose bots present themselves as licensed professionals and signals tightening regulatory scrutiny.

  • AI chatbot developers could face regulatory action, private lawsuits, and reputational harm if bots imply professional credentials or offer regulated advice
  • Federal and state authorities are increasing scrutiny of chatbot design, marketing, and the effectiveness of disclaimers
  • Businesses deploying AI chatbots should implement robust governance, clear risk allocation, and escalation workflows to ensure compliance

On May 1, 2026, the Commonwealth of Pennsylvania, acting on behalf of the state’s Board of Medicine and Department of State, filed suit against Character Technologies Inc., which operates the website and interactive program Character.AI, alleging the company enabled the “unlawful practice of medicine and surgery.”

Pennsylvania asserts that Character.AI violated state laws governing the practice of medicine by permitting an AI-generated persona to engage directly in conversations with users while purporting to be a licensed medical professional. According to the Pennsylvania governor’s office, the enforcement action resulted from an investigation by the Pennsylvania Department of State’s recently launched AI Task Force and is the first action of its kind in the United States.

Allegations

According to the complaint, a Pennsylvania investigator created an account on Character.AI and interacted with an AI chatbot named “Emilie.” The chatbot’s profile allegedly stated “Doctor of psychiatry. You are her patient.” The complaint further alleges that Emilie represented she had attended medical school, asserted she could perform a depression assessment since it was “within [her] remit as a Doctor,” and provided a purported Pennsylvania medical license number (which investigators determined was invalid).

A Character.AI spokesperson responded that its characters are “fictional and intended for entertainment and roleplaying,” and that the platform includes “prominent disclaimers in every chat.” That defense, however, may struggle against the weight of the complaint’s allegations, particularly given the specificity of the chatbot’s claims about its credentials, diagnostic capabilities, and licensure. Notably, Character.AI’s platform also allows users (not just the company itself) to create and deploy custom AI “characters.” This raises significant platform liability questions: for example, to what extent is a company responsible for ensuring that user-generated AI personas do not claim to be licensed professionals?

Background

Pennsylvania is not the first state to address the practice of medicine by AI. In January 2025, the California attorney general released an advisory specifically highlighting that AI cannot practice medicine in California. California bans the practice of medicine by “corporations and other ‘artificial legal entities’” and provides that “[o]nly human … medical professionals are licensed to practice medicine in California.” (Read more about the takeaways from the California advisory in our January 2025 post.)

The Pennsylvania case signals increased willingness by state bodies to pursue regulatory enforcement of AI-enabled services. In February 2026, Pennsylvania Governor Josh Shapiro launched a formal complaint and reporting process for AI-powered chatbots, noting that the Department of State intends to coordinate with the Pennsylvania attorney general “to strengthen consumer protections related to AI companion bots.”

Legislative attention at the federal level is also emerging. On March 18, 2026, California Rep. Kevin Mullin introduced the Curbing Harmful AI Tools by Offering Transparency (CHATBOT) Act (H.R.7985), which would prohibit companies deploying AI chatbots from implying or indicating that a bot holds a license in a covered profession, including health care, legal services, accounting, tax, payroll, finance, and insurance. The bill would empower the Federal Trade Commission to enforce violations and would create a private right of action for affected individuals.

Why This Matters for Businesses

Given the increasing focus at both the state and federal levels on AI chatbots, businesses should anticipate scrutiny of:

  • Bot names, avatars, and biographies that imply professional credentials.
  • Chatbot responses that imply diagnosis, treatment, legal advice, financial advice, or other regulated services.
  • Fabricated license numbers, credentials, or employment history.
  • Product design choices that encourage users to rely on AI as a licensed professional.
  • Disclaimers that are too weak, buried, or contradicted by a bot’s actual behavior.

The business risk extends well beyond a single enforcement action. Companies may face:

  • Regulatory investigations and enforcement by state licensing boards, attorneys general, and state and federal consumer protection agencies.
  • Private litigation alleging unfair or deceptive practices tied to chatbot marketing or functionality.
  • Class action exposure when users claim reliance or injury.
  • Contractual and indemnity disputes between AI vendors and enterprise customers.
  • Reputational damage, particularly in high‑trust sectors such as health care, legal services, finance, and education.
  • Increased friction in enterprise sales as customers demand stronger safeguards, audit rights, and clearer risk allocation.

Takeaways

AI chatbot providers (and the companies that deploy them) should treat regulated-profession risk as a core product governance and compliance issue, not merely a content-moderation concern. In practice, this requires a holistic review of how bots are labeled and presented to users, the permissible scope of outputs (particularly around regulated advice), the design and effectiveness of guardrails around regulated and sensitive topics like health care, and appropriate workflows to trigger escalation to a qualified human professional where needed.
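For technical teams implementing the guardrails and escalation workflows described above, the basic pattern can be sketched in a few lines. The following Python example is a hypothetical illustration only (the pattern list, function name, and routing fields are invented for this sketch, not drawn from any platform discussed in this advisory): it screens a candidate chatbot reply for language implying professional licensure and, when triggered, blocks the reply and flags the conversation for review by a qualified human.

```python
import re

# Hypothetical illustration: a minimal output guardrail that screens a
# chatbot reply for credential claims. Real deployments would need far
# broader pattern coverage, model-based classification, and legal review.
CREDENTIAL_PATTERNS = [
    r"\b(?:licensed|board[- ]certified)\b",
    r"\bmedical (?:license|school)\b",
    r"\blicense (?:number|no\.?)\b",
    r"\bI am (?:a|your) (?:doctor|psychiatrist|attorney|accountant)\b",
]

def screen_reply(reply: str) -> dict:
    """Return a routing decision for a candidate chatbot reply."""
    for pattern in CREDENTIAL_PATTERNS:
        if re.search(pattern, reply, flags=re.IGNORECASE):
            # Block the reply and escalate the conversation to a human.
            return {
                "allow": False,
                "escalate_to_human": True,
                "reason": f"matched credential-claim pattern: {pattern}",
            }
    return {"allow": True, "escalate_to_human": False, "reason": ""}
```

A keyword screen like this is only one layer; the governance review described above also covers how personas are named and labeled before any reply is generated.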

AlstonHealth State Law Hub

Alston & Bird’s Health Care team highlights state legislation and regulatory actions with direct implications for operations, reimbursement, privacy, and enforcement risk. Designed for in-house counsel, the tracker supports legal teams in proactively managing risk and aligning business strategy with a rapidly evolving state regulatory environment.

Learn more on the AlstonHealth State Law Hub.


If you have any questions, or would like additional information, please contact one of the attorneys on our Health Care team.

You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form.


Media Contact
Alex Wolfe
Communications Director