October 25, 2023

Privacy, Cyber & Data Strategy Advisory: New Final AI Regulation from Colorado Division of Insurance—Others Likely to Follow Suit

Executive Summary

Our Privacy, Cyber & Data Strategy Group considers the nationwide repercussions of Colorado’s new regulation governing life insurers’ use of artificial intelligence models, which is designed to prevent race-based discrimination.

  • The regulation requires insurers that use external consumer data and information sources to establish a risk management framework
  • Though Colorado is the first state to specifically regulate AI in the insurance industry, many other states have enacted statutes or issued guidance
  • There are several steps companies can take to prepare for the growing regulatory structure surrounding AI

On September 21, 2023, the Colorado Division of Insurance (CDI) adopted a new regulation that will impact Colorado-licensed life insurers. The regulation governs the use of algorithms and predictive models that use external consumer data and information sources (ECDIS). The purpose of the regulation, which becomes effective on November 14, 2023, is to prevent life insurers that rely on models and ECDIS from engaging in race-based discrimination. Among other things, the regulation requires all Colorado-licensed life insurers to submit a compliance progress report by June 1, 2024 and an annual compliance attestation beginning December 1, 2024.

The Colorado regulation adds to a recent regulatory trend focused on the use of artificial intelligence (AI) models in the insurance sector. It also comes in the wake of a 2022 class action against an insurance company alleging racial discrimination arising from its use of models. For information on AI regulations and initiatives at the federal level related to privacy and security, please see our advisory, “AI Regulation in the U.S.: What’s Coming, and What Companies Need to Do in 2023.”

Overview

The Final Regulation, 3 CCR 702-10, requires all life insurers that use ECDIS or ECDIS-based models to establish a risk-based governance and risk management framework for their use of ECDIS or models. It defines ECDIS as “a data or an information source that is used by a life insurer to supplement or supplant traditional underwriting factors or other insurance practices or to establish lifestyle indicators that are used in insurance practices.” The term ECDIS includes “credit scores, social media habits, locations, purchasing habits, home ownership, educational attainment, licensures, civil judgments, court records, occupation that does not have a direct relationship to mortality, morbidity or longevity risk, consumer-generated Internet of Things data, biometric data, and any insurance risk scores derived by the insurer” from those sources.

The Final Regulation implements Colo. Rev. Stat. § 10-3-1104.9, which prohibits the use of ECDIS and models that unfairly discriminate based on race, color, national or ethnic origin, religion, sex, sexual orientation, disability, and gender identity or expression. But unlike § 10-3-1104.9, the Final Regulation’s framework requirements are limited to race. The Final Regulation closely follows the draft but adds a requirement that insurers remediate unfair discrimination if it is detected.

Key Provisions

Section 5.A of the Final Regulation requires that life insurers create a governance and risk management framework with policies, procedures, systems, and controls to determine whether their use of ECDIS and models might result in unfair race-based discrimination. If unfair discrimination is detected, the Final Regulation requires insurers to remediate it. The Final Regulation also prescribes certain components for the framework:

  1. Documented governing principles outlining the insurer’s values and objectives, providing the guidance necessary to ensure oversight and management of the development and use of ECDIS, and ensuring that ECDIS and models are designed to prevent unfair discrimination.
  2. Board-level or committee oversight of the governance structure and risk management framework.
  3. Senior management responsibility and accountability for setting and monitoring the overall strategy and for providing direction governing the use of ECDIS and the algorithms and predictive models that use ECDIS.
  4. Documented cross-functional ECDIS, algorithm, and predictive model governance group with personnel from key functional areas, including legal, compliance, risk management, product development, underwriting, actuarial, data science, marketing, and customer service.
  5. Documented policies, processes, and procedures to ensure that ECDIS and the algorithms and predictive models that use ECDIS are documented, tested, and validated. These policies and processes must include an ongoing internal supervision and training program for relevant personnel.
  6. Documented processes and protocols in place for addressing consumer complaints and inquiries about the use of ECDIS, as well as algorithms and predictive models that use ECDIS, which include providing consumers with information necessary to take meaningful action if there is an adverse decision stemming from the use of ECDIS and ECDIS-based models.
  7. A rubric for assessing and prioritizing risks associated with the deployment of ECDIS and ECDIS-based models, with reasonable consideration given to the consumer impacts of the relevant insurance practices.
  8. Documented and current inventory, with version controls, of all utilized ECDIS and ECDIS-based models. The inventory should include a description and purpose of each ECDIS and model and the outputs generated through their use.
  9. Documented explanation of any material changes in the inventory of all ECDIS and ECDIS-based models and the rationale for the changes.
  10. Documented description of testing conducted to detect unfair discrimination in insurance practices resulting from the use of ECDIS and ECDIS-based models, including the methodology, assumptions, results, and steps taken to address unfairly discriminatory outcomes (an illustrative testing sketch appears after this list).
  11. Documented description of ongoing performance monitoring of ECDIS-based models, including accounting for model drift.
  12. Documented description of the process used for selecting external resources, including third-party vendors that supply ECDIS, algorithms, and ECDIS-based models, as well as the intended use of the ECDIS or models.
  13. Documented comprehensive annual reviews of the governance structure and risk management framework and any required updates.

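The Final Regulation prescribes what must be documented but not how the testing in component 10 or the drift monitoring in component 11 must be performed (the draft testing regulation discussed below would add quantitative requirements). Purely as an illustration, and not as anything the CDI requires, the Python sketch below applies one widely used disparate-impact screen, the “four-fifths” adverse impact ratio borrowed from employment-testing practice, to hypothetical underwriting decisions, along with a naive drift check. The group labels, thresholds, and data are all our assumptions.

```python
# Illustrative only: a disparate-impact screen of the kind component 10
# contemplates and a naive drift check for component 11. The regulation
# does not prescribe this (or any other) methodology; the thresholds,
# groups, and data below are hypothetical assumptions.
from collections import defaultdict

def adverse_impact_ratio(decisions, flag_below=0.8):
    """decisions: iterable of (group, approved) pairs, e.g. ("A", True).

    Returns {group: (ratio, flagged)}, where ratio is the group's approval
    rate divided by the highest group's approval rate. Ratios below
    flag_below (the classic "four-fifths" screen) are flagged for further
    investigation and documentation.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += int(approved)
    rates = {g: approvals[g] / totals[g] for g in totals}
    top = max(rates.values())
    return {g: (r / top, r / top < flag_below) for g, r in rates.items()}

def drifted(baseline_rate, current_rate, tolerance=0.05):
    """Flag a model whose overall approval rate has moved more than
    `tolerance` from the rate documented when it was last validated."""
    return abs(current_rate - baseline_rate) > tolerance

# Hypothetical data: group A approved 80/100, group B approved 55/100.
decisions = ([("A", True)] * 80 + [("A", False)] * 20 +
             [("B", True)] * 55 + [("B", False)] * 45)
print(adverse_impact_ratio(decisions))  # B's ratio is 0.6875 -> flagged
print(drifted(baseline_rate=0.68, current_rate=0.61))  # True -> investigate
```

An insurer’s actual testing program would of course need to reflect its own data, actuarial judgment, and the documentation requirements listed above.
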
The Final Regulation does not define “traditional underwriting factors.” But a separate draft Colorado regulation, which would require Colorado life insurers to conduct quantitative testing of their ECDIS and models, defines “traditional underwriting factors” as:

  1. The insured’s application information, including medical information, family history, and disability.
  2. Occupational information that has a direct relationship to mortality, morbidity, or longevity risk.
  3. The insured’s behavioral information that has a direct relationship to mortality, morbidity, or longevity risk. This may include motor vehicle records and adult criminal history.
  4. Medical Information Bureau (MIB) data.
  5. Prescription drug history.
  6. Elements of the insured’s financial profile provided on an application for insurance by the applicant.
  7. Digitized or other electronic forms of the information listed above.

If finalized, the draft testing regulation would complement the newly adopted governance and risk management framework in the Final Regulation.

Other State Action

While Colorado is so far the only state to issue AI regulations for the insurance industry, several other states have made public statements or implemented measures to address the increased use of AI and models in the insurance space, including Connecticut, New York, the District of Columbia, and California. In May 2021, the Connecticut Insurance Department adopted the National Association of Insurance Commissioners (NAIC) AI framework. And in April 2022, the department began requiring licensed insurers to certify compliance with its “Notice Concerning the Usage of Big Data and Avoidance of Discriminatory Practices.”

The New York Department of Financial Services (NYDFS) has also been active in AI matters for several years. In 2019, the NYDFS issued a circular letter to life insurers on the use of “algorithms and predictive models.” The NYDFS noted that the use of AI models may have “a strong potential to have a disparate impact on … protected classes.” Thus, “an insurer should not use an … algorithm or predictive model … unless the insurer has determined that the external tools … do not collect or utilize prohibited criteria.” Notably, the NYDFS added that “[a]n insurer may not simply rely on a vendor’s claim of non-discrimination” because “[t]he burden remains with the insurer at all times.”

More recently, in November 2022, the District of Columbia Department of Insurance, Securities and Banking proposed an insurance application/quote data call to identify and address unintentional bias, with a focus on race and ethnicity. The result of the review may lead to additional filing requirements for future rate filings.

In a 2022 bulletin, the California insurance commissioner warned of the dangers of AI and models and stated that when insurers use complex algorithms in a declination, limitation, premium increase, or other adverse action, they must provide the specific reason or reasons for the action. Other states, such as Louisiana and New York, have issued stern warnings about the potential for discrimination arising from the use of Big Data.

In addition to the work of insurance commissioners, it bears remembering that a number of states have recently passed privacy statutes that purport to regulate AI used to make key insurance decisions. Statutes in Colorado, Connecticut, Oregon, Texas, and Virginia, to name a few, require companies to let consumers opt out of AI-powered decisions that result in “the provision or denial of … insurance.” These statutes also generally require companies to conduct risk assessments for the AI or algorithmic tools or platforms that make such decisions.

The Colorado attorney general has implemented these requirements through more specific regulations requiring detailed AI impact assessments. The California Privacy Protection Agency is also working on regulations to govern AI usage, and its first informal draft, issued in early September, expressly contemplates regulating AI used in insurance.

European Action

For a brief look abroad, European insurance regulators have also begun to use their supervisory authorities to set standards for AI usage. As one example, Germany’s Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, or BaFin) published guidance in 2021 on how “Big Data and artificial intelligence” can be used to automate decision-making in banking and insurance. It followed this guidance with detailed studies of risk models used by insurers and banks, including explanations of the standards it has focused on in its supervision, such as data credibility, output explainability, and adaptability.

NAIC Leads the Way

The NAIC has been very active in the AI space since 2019, when its Innovation and Technology (EX) Task Force established the Big Data and Artificial Intelligence (H) Working Group. The Working Group was tasked with studying the development of artificial intelligence, its use in the insurance sector, and its impact on consumer protection and privacy, marketplace dynamics, and the state-based insurance regulatory framework. The Working Group developed regulatory principles on AI, which were designed to establish general expectations and assist regulators in dealing with insurance-specific AI applications. In August 2020, the NAIC’s full membership adopted the principles:

Fair and ethical

  • “AI actors should respect the rule of law throughout the AI life cycle,” including those relating to trade practices, unfair discrimination, access to insurance, ratemaking, and claims practices.
  • “AI actors should proactively engage in responsible stewardship” and “avoid proxy discrimination against protected classes. AI systems should not be designed to harm or deceive” and should avoid unintended consequences. AI systems should “correct and remediate for such consequences when they occur.”

Accountable

  • “AI actors should be accountable for ensuring that AI systems operate in compliance with … legal requirements governing its use of data and algorithms during its phase of the insurance life cycle. Data supporting the final outcome of an AI application should be retained.” AI actors should be responsible for all impacts of an AI system, intended or otherwise. AI actors should implement safeguards consistent with those risks.

Compliant

  • “AI actors must have the knowledge and resources in place to comply with all applicable insurance laws and regulations.” Insurance is primarily regulated by the individual states and the federal government, and “AI systems must comply with the insurance laws and regulations” in each jurisdiction.

Transparent

  • “AI actors should commit to transparency and responsible disclosures regarding AI systems to relevant stakeholders. … [P]roactive disclosures include revealing the kind of data being used, the purpose and data in the AI system and consequences for all stakeholders.”
  • Stakeholders “should have a way to inquire about, review and seek recourse for AI-driven insurance decisions. This information should be easy-to-understand and describe the factors that [led] to the prediction, recommendation or decision.”

Safe, secure, and robust

  • “AI actors should ensure a reasonable level of traceability in relation to datasets, processes and decisions made during the AI system life cycle. AI actors should enable analysis of the AI system’s outcomes, responses and other insurance-related inquiries.”
  • AI actors should “apply a systematic risk management approach to each phase of the AI system life cycle on a continuous basis to address risks related to AI systems, including privacy, digital security and unfair discrimination.”

Litigation Activity

Against the backdrop of increased regulation, class action attorneys have also sought to challenge the growing use of automated models, alleging that they result in injury to policyholders. These suits will likely multiply as insurers increase their use of models and as regulatory activity expands.

What’s Next?

With the growth of AI across a myriad of industries, from automotive to retail and everything in between, expect more legislation and industry-specific guidance to follow. Companies in all sectors should start considering how to manage AI regulatory risk. Companies that develop, deploy, or use AI systems may wish to consider the following steps to prepare for the new requirements coming online in the months ahead:

  • Know your AI (see the illustrative inventory sketch after this list)
  • Lay the policy groundwork for AI adoption
  • Design governance and accountability structures
  • Prepare to communicate
  • Assess the risks
  • Maintain ongoing governance
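
For “Know your AI” in particular, a documented, version-controlled inventory of models and data sources, along the lines of component 8 of the Colorado framework, is a natural starting point. The sketch below shows what a minimal inventory record might capture; the field names and values are illustrative assumptions on our part, not requirements of any regulation.

```python
# Hypothetical sketch of one record in a version-controlled AI/model
# inventory, echoing component 8 of the Colorado framework. Every field
# name and value here is an illustrative assumption, not a regulatory
# requirement.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ModelInventoryRecord:
    name: str                 # model or ECDIS source identifier
    version: str              # version control over the inventory (component 8)
    purpose: str              # why the model or ECDIS is used
    description: str          # what it is and what outputs it generates
    ecdis_inputs: list[str]   # external data sources the model consumes
    owner: str                # accountable function (senior management oversight)
    last_bias_test: date      # ties to testing and documentation (component 10)
    last_reviewed: date       # ties to the annual review (component 13)
    change_log: list[str] = field(default_factory=list)  # material changes (component 9)

record = ModelInventoryRecord(
    name="mortality-risk-score",
    version="2.3.1",
    purpose="Supplement traditional underwriting factors",
    description="Gradient-boosted classifier producing a risk tier used in underwriting",
    ecdis_inputs=["credit attributes", "public records"],
    owner="Model Risk Management",
    last_bias_test=date(2024, 5, 1),
    last_reviewed=date(2024, 6, 1),
)
record.change_log.append("2024-05-01: retrained on 2023 data; rationale documented")
```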