Underwriting: The Dangers of Data and Discrimination

On January 18, 2019, New York’s Department of Financial Services (DFS) issued Circular Letter No. 1 (2019) to advise life insurers on the types of data they may use when underwriting policies. The guidance followed a DFS investigation into insurers’ underwriting practices, prompted by reports of the emergence and use of “unconventional” data sources within the industry. The guidance, summarized below, will be of particular interest to InsurTech start-ups, as well as established carriers seeking to use new sources of data to underwrite risks.

Circular Letter No. 1 identifies two primary areas of concern regarding the use of external data (i.e., data not directly related to an applicant’s medical condition, such as a credit rating), algorithms, and predictive models in the life insurance space. First, the use of such tools might unlawfully discriminate against protected classes of consumers. Second, such tools often lack transparency for consumers.

Unlawful Discrimination

New York Insurance Law Article 26 prohibits the use of, among other things, race, color, creed and national origin in underwriting.

The DFS’ investigation uncovered that life insurers used the following sources of external data in connection with underwriting policies: geographical data, homeownership data, credit information, educational attainment, licensures, civil judgments, and court records. According to the DFS, all of these sources of data have the potential to reflect “disguised and illegal race-based underwriting.” The DFS stated that other external data that it found in use, such as retail purchase history, social media or Internet activity, geographic location tracking, condition and type of applicant’s electronic devices (and software), and an applicant’s appearance in photos, also have a “strong potential to have a disparate impact on the protected classes identified under New York and federal law.”

In light of these issues, the DFS provided, among other guidance, the following principles:

  • An insurer must determine that external sources do not collect or utilize prohibited criteria.
  • An insurer may not simply rely on a vendor’s claim of nondiscrimination or the proprietary nature of the third-party process.
  • An insurer may not use an external source unless it can establish that the source is not unfairly discriminatory. In so doing, an insurer should consider the following questions:
    • Is the underwriting supported by generally accepted actuarial principles?
    • Is there a valid explanation for the differential treatment of similarly situated applicants reflected by the underwriting guideline that is derived (in whole or in part) from external data sources?

Consumer Disclosure/Transparency

Pursuant to New York Insurance Law Section 4224(a)(2), insurers must notify the insured or potential insured of the right to receive the specific reason(s) for a declination, limitation, rate differential or other adverse underwriting decision.

The DFS reiterated that an insurer may not rely on the proprietary nature of a third-party vendor’s algorithmic processes to justify a lack of specificity related to an adverse underwriting action, and that insurers must also provide notice to, and obtain consent from, consumers to access external data, where required by law or regulation.  According to the DFS, the failure to adequately disclose to a consumer the material elements of an algorithmic underwriting process (and the external data sources upon which it relies) may constitute an unfair trade practice under New York Insurance Law Article 24.

Conclusion

On the one hand, insurers are in the business of discrimination: a healthy 20-year-old can expect to pay less for life insurance than an ailing 90-year-old. On the other hand, there are legally mandated limits on the nature and type of discrimination allowed. Within this framework, the boundaries of permissible discrimination are often far from clear. The DFS’ latest guidance indicates not only that it is closely monitoring companies’ compliance with underwriting rules, but also that it believes insurers may run afoul of the law by using criteria that have a “disparate impact” on a protected class, an issue that has received significant attention in the homeowners’ insurance industry but comparatively little with respect to other lines of insurance. This development highlights the need for companies to review their underwriting models to ensure that they are actuarially sound and do not unfairly discriminate against a protected class.

The Circular is a bellwether for further DFS action, including targeted enforcement investigations. The DFS has made clear that it expects insurers to independently audit external data sources to ensure that they do not collect or use impermissible data, and not only to verify the actuarial soundness of guidelines that rely on that data, but also to evaluate whether those guidelines have an adverse disparate impact on protected classes.
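
The Circular does not prescribe any particular testing methodology. Purely by way of illustration, a first-pass screen for adverse disparate impact in underwriting outcomes might resemble the Python sketch below; the four-fifths (80%) threshold is borrowed from federal employment-law practice and, like the group labels, data, and function names, is an assumption of this example rather than anything drawn from the Circular.

```python
# Illustrative only: a first-pass disparate impact screen over the
# outcomes of an underwriting guideline. The 80% ("four-fifths")
# threshold is an assumption borrowed from federal employment-law
# practice, not a DFS requirement. All names and data are hypothetical.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group_label, approved: bool) pairs."""
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag each group whose approval rate falls below `threshold`
    times the rate of the most favorably treated group."""
    rates = approval_rates(decisions)
    reference = max(rates.values())
    return {g: rate / reference < threshold for g, rate in rates.items()}

# Hypothetical audit data: (group label, underwriting decision).
sample = ([("A", True)] * 80 + [("A", False)] * 20
          + [("B", True)] * 55 + [("B", False)] * 45)

print(approval_rates(sample))          # {'A': 0.8, 'B': 0.55}
print(disparate_impact_flags(sample))  # {'A': False, 'B': True}
```

A genuine audit would go well beyond a single ratio (controlling for actuarially valid factors, testing statistical significance, and so forth), but even a screen this simple illustrates the kind of question the DFS expects insurers to ask of their external data sources and the guidelines built on them.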

Shawn Hanson, Partner, Akin Gump