Scrutiny of the Use of AI by Life Insurers Is Increasing
It has become nearly impossible to open a newspaper (or, more likely, a website) without finding an article discussing the concerns or advantages of artificial intelligence (AI). While the topic is seemingly ubiquitous now, life insurance regulators have been grappling for several years with the potential risks and benefits of insurers using AI, and the large external data sources utilized by AI, in underwriting and other insurance practices.
The regulatory landscape around AI is changing quickly, especially in light of the rise of large language models like ChatGPT and automated decision-making technology. This technology raises significant regulatory questions, including concerns that its use could cause unfair discrimination or could compromise policyholder privacy through the use of large data sets that include personal information. A number of insurance regulators have also expressed concern that external data sources used by AI and other algorithms could contribute to underwriting improperly based on protected characteristics such as race. See NYDFS, Circular Letter 1 (2019), https://www.dfs.ny.gov/industry_guidance/circular_letters/cl2019_01.
In the life insurance space, the current focus is on the use of external consumer data to supplement traditional underwriting. External consumer data and information sources (ECDIS) generally are data or information sources used by a life insurer to supplement or supplant traditional underwriting factors, or to establish lifestyle indicators that are used to determine whether and how to issue a life insurance policy. CO DOI, Draft Proposed New Regulation, Section 9. There is no standard definition of ECDIS, but it can include credit scores, social media habits, location data, purchasing habits, home ownership, educational attainment, occupation, licensures, civil judgments, and court records. ECDIS can be incorporated into automated decision-making systems, such as risk scoring or pricing algorithms.
State Regulations and Regulator Initiatives
In July 2021, the Colorado legislature passed Senate Bill 21-169 (SB21-169) prohibiting insurers, with regard to any insurance practice, from unfairly discriminating based on enumerated protected characteristics and from using algorithms or predictive models that use ECDIS in a way that unfairly discriminates. C.R.S.A. Section 10-3-1104.9(8)(c). SB21-169 defines insurance practices broadly and goes beyond underwriting to include marketing, pricing, and claims management. The Colorado Department of Insurance (CO DOI) released a draft regulation implementing SB21-169 for comment on Feb. 1 (the proposed Colorado regulation). The proposed Colorado regulation, which focuses solely on the life insurance industry, establishes expansive requirements for life insurers using ECDIS to establish internal governance and risk management frameworks to ensure that their use of ECDIS and algorithms in any insurance practice does not result in unfairly discriminatory insurance practices. CO DOI, Draft Proposed New Regulation, Section 9.
Other state insurance regulators had previously issued anti-discrimination guidance focused on the use or impact of ECDIS and automated decision-making. In 2019, the New York Department of Financial Services (NYDFS) issued a circular letter regarding the use of ECDIS in underwriting of life insurance, citing concerns about both the neutrality of underlying data and the challenge of avoiding explicit or implicit bias in how such data are used by algorithmic models. NYDFS, Circular Letter 1 (2019). NYDFS also has signaled a potential intent to place the burden of proving nondiscriminatory impact on insurers using ECDIS. In January 2023, NYDFS hired an outside consultant, Fairplay-Sustain Solutions, to assist in understanding the potential benefits and harms of the use of AI, further signaling that scrutiny is growing.
The Connecticut Insurance Department (CID) and the California Department of Insurance (CDI) similarly issued bulletins in April and June 2022, respectively, regarding the use of AI and the potential for racial bias and discrimination in insurance practices, especially where the data lacks a sufficient actuarial nexus to the risk of loss. CID, “Notice to all entities and persons licensed by the Connecticut Insurance Department concerning the usage of big data and avoidance of discriminatory practices” (Apr. 20, 2022); CDI, Bulletin 2022-5 (June 30, 2022). The CDI bulletin emphasizes that insurers must avoid conscious and unconscious discrimination that can result from the use of AI when, among other things, marketing, rating, underwriting, processing claims, or investigating suspected fraud. The CID now requires that insurers annually certify that their use of data complies with CID’s bulletin and applicable laws.
The National Association of Insurance Commissioners (NAIC) created a Big Data and Artificial Intelligence working group within its Innovation, Cybersecurity and Technology committee (Technology Committee) to formulate guidance around combating unfair bias in the use of AI. NAIC, Big Data and Artificial Intelligence (H) Working Group, https://content.naic.org/cmte_h_bdwg.htm. In December 2022, the working group sought comments on proposed questions that regulators might use to evaluate and monitor insurers’ use of big data and AI. NAIC, Big Data and AI (H) Working Group, Model and Data Regulatory Questions (Dec. 2, 2022). The working group also announced an AI and Machine Learning (AI/ML) survey, conducted in collaboration with 14 states, to assess the use of AI/ML in the life insurance space in the operational areas of pricing, underwriting, marketing, and loss prevention. Responses to formal call letters are expected by the end of May 2023. NAIC, Big Data and Artificial Intelligence (H) Working Group, 2022 Fall National Meeting (Dec. 13, 2022). In addition, the Technology Committee is drafting an interpretive bulletin outlining a regulatory framework for insurers’ use of AI, which is expected to describe regulatory expectations for insurers as well as provide standards for insurance regulators to examine that use. See NAIC, Big Data and Artificial Intelligence (H) Working Group, 2022 Fall National Meeting (Dec. 13, 2022).
Federal Regulatory Scrutiny of External Data Sources Is Simultaneously Focusing on Accountability
Federal agencies are also scrutinizing AI. Their initiatives are not explicitly directed at the life insurance industry, but they illustrate the broader interest in the effects of AI and may influence the thinking of state legislators and insurance regulators.
In October 2022, the White House Office of Science and Technology Policy issued a blueprint for an AI bill of rights. The White House, “Blueprint for an AI Bill of Rights,” Oct. 4, 2022. The blueprint provides guidance to organizations employing automated systems and explains that when those systems contribute to unjustifiably different treatment, or to impacts that disfavor people based on protected characteristics, the result is algorithmic discrimination. The White House calls for protective protocols including disparity testing, mitigation, algorithmic impact assessments, and organizational oversight.
The National Institute of Standards and Technology (NIST) likewise released guidance on Jan. 26 for an AI risk management framework for voluntary use by any organization designing or offering AI systems. National Institute of Standards and Technology (NIST), “NIST risk management framework aims to improve trustworthiness of artificial intelligence,” press release, Jan. 26.
On April 25, the Justice Department’s Civil Rights Division and officials from the Consumer Financial Protection Bureau (CFPB), the Equal Employment Opportunity Commission (EEOC), and the Federal Trade Commission (FTC) jointly pledged to uphold the core principles of fairness, equality, and justice as emerging automated systems, including AI, become increasingly common. U.S. Department of Justice, Justice Department’s Civil Rights Division Joins Officials from CFPB, EEOC and FTC Pledging to Confront Bias and Discrimination in Artificial Intelligence, https://www.justice.gov/opa/pr/justice-department-s-civil-rights-division-joins-officials-cfpb-eeoc-and-ftc-pledging. In their joint statement, the agencies expressed their view that automated systems, including AI, have the potential to result in unlawful discrimination.
Finally, on May 3, the White House announced that the Office of Management and Budget (OMB) will issue, in the coming months, policy guidance on the federal government’s use of AI, which is intended to serve as a model for state and local governments and for businesses in their own use of AI. The White House, “Biden-Harris Administration Announces New Actions to Promote Responsible AI Innovation that Protects Americans’ Rights and Safety,” May 4, 2023.
What Is to Come?
To state the obvious, insurers should expect stricter scrutiny of any use of external data sources or automated decision-making. Colorado is the first regulator in the life insurance sphere to move beyond a principles-based approach and prescribe specific governance steps and reporting frameworks for life insurers. If the proposed regulation is implemented as written, its requirements that insurers maintain governance frameworks, data inventories, a cross-functional committee, and senior management oversight will likely be adopted by other regulators.
Life insurance regulators can also be expected to increase the burden on insurers using external data sources to demonstrate the absence of unfair discrimination. To date, regulators have generally focused on gathering and analyzing information regarding the use of external data sources. The proposed Colorado regulation, however, requires life insurers to “facilitate and support policies and procedures, and systems designed to determine whether the ECDIS are credible in all material respects and their use in any insurance practice does not result in unfair discrimination.” CO DOI, Draft Proposed New Regulation, Section 2. As use of AI increases, regulators are likely to take the next step and demand that insurers demonstrate a lack of discriminatory impact before utilizing AI technology, especially if regulators are receiving consumer complaints. CDI, Bulletin 2022-5 (June 30, 2022) (noting that greater use by insurers of AI and other data collection models “have resulted in an increase in consumer complaints relating to unfair discrimination.”). Given these developments, insurers must balance the potential for regulatory enforcement, as well as the risk of private litigation, against the increased efficiencies and potentially fairer results of using AI.
While traditional life insurance underwriting assesses an applicant’s physical health and behavioral elements to determine risk and coverage eligibility, automated or accelerated underwriting utilizes AI and ECDIS to analyze applicant data, which may include non-medical data, to expedite the process. This use of AI can create efficiencies in the insurer’s underwriting practices and reduce the burden on the applicant. Automated decision-making processes may also drive more consistency in the underwriting process by significantly reducing or eliminating bias that a human underwriter might introduce. To convince regulators that this is the case, insurers must be prepared to explain how complex algorithmic models operate and to facilitate transparency into underwriting decisions. Insurers must also be able to demonstrate that their use of AI incorporates safeguards against compromising policyholders’ health data and other sensitive personal information.
Federal and state regulators are shaping the regulatory landscape of AI in real time. Stakeholders must attend to proposed regulations and participate in the legislative process, because this landscape will look significantly different in the near future.
Reprinted with permission from the May 22, 2023 issue of the New York Law Journal. ©2023 ALM Media Properties, LLC. Further duplication without permission is prohibited. All rights reserved.