Artificial intelligence (AI) is now embedded in health care plans’ coverage and treatment decisions. Health plans are increasingly using AI for utilization management (UM), a process designed to control unnecessary health care services. In response, states have begun regulating the use of AI in UM to protect citizens from unintended treatment and coverage denials that may result from automated decision-making.
Maryland is among the states taking action. In its most recent legislative session, Maryland passed a law (HB0820) targeting the use of AI in utilization management and review decisions.
Effective Utilization Management
Effective UM processes evaluate the efficiency, appropriateness, and medical necessity of health care services to reduce costs. These reviews occur at three stages:
- Prospective (e.g., prior authorizations, before services are rendered).
- Concurrent (during hospitalization or treatment).
- Retrospective (after services have been provided).
Regardless of the review type, establishing medical necessity is fundamental to any UM program. Medically necessary services are those reasonably expected to produce the intended results for the patient and have benefits that outweigh any potential harmful effects. Accordingly, states have routinely reiterated that these determinations should be made by clinicians. As health plans rely more heavily on AI to facilitate UM decisions, legislators are intervening to regulate AI-driven decisions that could adversely affect patient care and outcomes.
MD HB0820 vs. CA SB 1120 – Applicability
The adage “as goes California, so goes the nation” has often proved true in areas like environmental justice, criminal law, and now the intersection of AI and health care.
Maryland’s new law, effective October 1, 2025, closely mirrors California’s 2024 law (SB 1120) and focuses on ensuring that AI tools support clinician medical necessity determinations based on the patient’s entire clinical picture. Maryland now requires oversight of review decisions made using “an artificial intelligence, algorithm, or other software tool.” While California’s law applies to health care service plans and their contractors, Maryland’s law is both more specific and broader, explicitly placing oversight responsibilities on carriers, including insurers, nonprofit health plans, HMOs, dental organizations, and others regulated by the state. The law also imposes obligations on pharmacy benefit managers (PBMs) and private review agents (PRAs) that contract with carriers and use AI tools.
Maryland Requirements and Expectations
The new Maryland law requires carriers, PBMs, and PRAs to ensure that any AI tool used in UM bases coverage determinations on:
- The enrollee’s medical or clinical history.
- Individual clinical circumstances reported by the provider.
- Other relevant clinical information in the enrollee’s record.
These requirements are intended to ensure that medical necessity decisions – the key factor for coverage – are made by clinicians, not AI tools. The law mandates that an individual’s clinical history, rather than group or demographic statistics, be the fundamental basis for UM decisions involving AI.
To ensure human oversight, the law requires at least quarterly reviews of any AI used in UM, including evaluations of the AI’s performance, use, and outcomes. There are also new reporting requirements: Carriers must report metrics on AI use in adverse decisions, submit written policies and procedures on AI use, and make AI tools available for audit or compliance review by the insurance commissioner.
Given these new data requirements, carriers and their PBMs and third-party administrators (TPAs) should assess their ability to share this data and determine the best approach for reporting outcomes to the commissioner. Doing so will help avoid overcounting and facilitate discussions about the use of third-party data systems, which are now integral to health care administration. Because many AI tools are proprietary, parties may wish to have nondisclosure agreements in place before disclosing trade secrets or proprietary information in response to reporting or audit requests.
Fair vs. Unfair Discrimination
The Maryland legislation is largely consistent with California’s, except for one key provision. Modern AI tools have been criticized for inherent biases that may result in disparate treatment, such as in job candidate selection or population health management. To address this, California’s law requires that AI tools “do not discriminate, directly or indirectly, against enrollees in violation of state or federal law.” Maryland’s final version, however, requires only that “[t]he use of an artificial intelligence, algorithm, or other software tool does not result in unfair discrimination,” omitting the reference to state or federal law.
While lawmakers may have thought it superfluous to specify that an AI tool must not discriminate in violation of state or federal law, the omission could also be a setup for a Supremacy Clause showdown. At the federal level, there is serious discussion of banning states from regulating AI for the next 10 years; such a moratorium was seriously considered for inclusion in the One Big Beautiful Bill Act recently signed into law but was ultimately dropped. The term “unfair” is also vague, and it is unclear what the state would consider “fair discrimination.” In insurance regulation, “fair discrimination” generally refers to the actuarial practice of grouping persons of similar risk, which is typically permitted. The “unfairness” provision could be challenged, but for now, the Maryland and California laws appear secure.
Conclusion
Health care plans should invest time and resources now to ensure their use of AI in UM complies with Maryland’s requirements. The law has real consequences: Violations may result in misdemeanor charges, monetary penalties, and administrative actions such as denial, suspension, or revocation of certificates; cease-and-desist orders; administrative penalties; or restitution to harmed patients.
It remains to be seen whether HB0820 will significantly impact the industry or become an afterthought. In testimony supporting the bill, the League of Life and Health Insurers of Maryland stated that “no carriers ever make an adverse decision with an artificial intelligence system. … all denials that are a part of the system are always made by a human.” Yet in 2023, hospitals and health systems spent $25.7 billion contesting claims denials – a 23% increase from the previous year – with 69% of those claims eventually paid.
Could it be that the use of AI tools would actually lead to more favorable medical necessity decisions for patients? Either way, the question remains whether adding to the regulatory-industrial complex of an already burdened administrative system will enhance consumer protections. For now, carriers, PBMs, and TPAs should find the best way to streamline the data reporting Maryland requires and work together to comply with the new state requirements for using AI in UM.
AlstonHealth State Law Hub
Alston & Bird’s Health Care team highlights state legislation and regulatory actions with direct implications for operations, reimbursement, privacy, and enforcement risk. Designed for in-house counsel, the tracker supports legal teams in proactively managing risk and aligning business strategy with a rapidly evolving state regulatory environment.
Learn more on the AlstonHealth State Law Hub.
If you have any questions, or would like additional information, please contact one of the attorneys on our Health Care team.
You can subscribe to future advisories and other Alston & Bird publications by completing our publications subscription form.