AI Governance and Insurance: From Innovation to Oversight

Artificial intelligence has moved from experimental capability to embedded infrastructure across insurance underwriting, pricing, claims, distribution, and risk management. The regulatory conversation has moved with it — from curiosity about what AI might do to structured expectations about how it must be governed.

For insurers, AI governance is no longer a technical compliance question. It is a strategic issue at the intersection of capital allocation, operational resilience, consumer protection, and the social license to operate.

The Regulatory Landscape Is Taking Shape

The clearest signal at the national level came in December 2023, when the NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. As of early 2025, nearly half of all states have adopted the bulletin with little or no modification — a rate of uptake that reflects both regulatory urgency and the bulletin's practical utility as a governance framework.

The Model Bulletin establishes several expectations that insurers should treat as baseline requirements. Carriers must develop and maintain a written AI governance program that addresses model risk, consumer outcomes, and internal controls across the full lifecycle of each AI system. Governance accountability must be traceable to senior management or a board-level committee. And critically, insurers cannot rely on vendor relationships to limit their own regulatory exposure: the bulletin is explicit that carriers bear responsibility for the AI systems and data sources they use, regardless of whether those tools are built in-house or acquired from third parties. Regulators may request documentation on vendor diligence as part of any investigation or market conduct examination.
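To make the documentation expectation concrete, here is a minimal sketch of what a machine-readable model inventory entry might look like, in Python. The record type, its field names, and the readiness check are illustrative assumptions for this article; the bulletin prescribes governance outcomes, not a schema.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import date

# Illustrative model-inventory record supporting a written AI governance
# program. Field names are hypothetical; the NAIC bulletin does not
# prescribe a schema, only the outcomes such a record would evidence.
@dataclass
class AIModelRecord:
    model_id: str                  # internal identifier
    business_use: str              # e.g. "auto underwriting tier assignment"
    owner: str                     # accountable senior manager or committee
    vendor: str | None             # third-party provider, if any
    data_sources: list[str]        # internal and external consumer data inputs
    last_bias_test: date | None    # most recent quantitative fairness review
    approved_for_production: bool = False
    notes: list[str] = field(default_factory=list)

    def examination_ready(self) -> bool:
        """Rough self-check that the record could support a regulatory
        request: an accountable owner, documented data sources, and a
        recorded bias test."""
        return bool(self.owner) and bool(self.data_sources) and self.last_bias_test is not None
```

In practice a carrier would maintain one such record per AI system, including vendor-supplied tools, so that vendor diligence and lifecycle controls are documented in one place rather than reconstructed under examination pressure.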

States are also moving independently. Colorado has codified prohibitions on the use of external consumer data and predictive models that produce unfair discrimination, and its implementing regulation places quantitative testing obligations on carriers. New York's Department of Financial Services issued Insurance Circular Letter No. 7 (2024), which requires insurers to demonstrate that AI systems and external data sources do not proxy for protected classes or generate disproportionate adverse effects, and that documentation supporting those demonstrations is available for regulatory review. California's Insurance Commissioner has signaled active scrutiny of algorithmic bias across lines of business.
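The quantitative testing these regimes contemplate often starts with simple disparity metrics. Below is a minimal sketch of one such metric, the adverse impact ratio, computed over hypothetical underwriting outcomes; the 0.8 screening threshold is a conventional rule of thumb borrowed from employment-law practice, not a figure drawn from the Colorado regulation or the New York circular.

```python
# Adverse impact ratio: the favorable-outcome rate for a protected group
# divided by the rate for the reference group. Data and threshold below
# are hypothetical illustrations, not regulatory standards.

def favorable_rate(outcomes: list[bool]) -> float:
    """Fraction of applicants receiving the favorable outcome
    (e.g. an offer at standard rates)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def adverse_impact_ratio(protected: list[bool], reference: list[bool]) -> float:
    ref_rate = favorable_rate(reference)
    return favorable_rate(protected) / ref_rate if ref_rate else float("inf")

# Hypothetical underwriting decisions: True = favorable outcome.
protected_group = [True] * 62 + [False] * 38   # 62% favorable
reference_group = [True] * 85 + [False] * 15   # 85% favorable

ratio = adverse_impact_ratio(protected_group, reference_group)
print(f"Adverse impact ratio: {ratio:.2f}")    # 0.73, below the 0.8 screen
if ratio < 0.8:
    print("Flag for review: disproportionate adverse effect may need explanation.")
```

A failing screen is the beginning of the analysis, not the end of it: the regulatory expectation is that carriers can explain or remediate the disparity, and document that they did.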

Internationally, the EU AI Act classifies certain AI applications in life and health insurance as high-risk systems, triggering conformity assessment requirements and human oversight obligations. For global carriers with European operations, this adds a cross-border governance dimension that must be reconciled with the state-based U.S. framework.

The direction of travel is consistent: AI in insurance will be governed as a material risk management issue, not as a technology initiative sitting outside traditional regulatory oversight.

Why This Is a Business Issue, Not Just a Compliance Issue

Regulatory pressure is focusing on the places where AI has the greatest operational leverage — and therefore the greatest potential for consumer harm. Underwriting and pricing models that use external consumer data face heightened scrutiny for proxy discrimination, particularly where granular segmentation produces disparate outcomes that cannot be explained or justified. Claims automation is under examination for procedural fairness: touchless denial processes are an emerging focus of market conduct examiners. Fraud detection models raise parallel concerns about accuracy and disparate impact.

Governance failures in these areas are not simply exposure to fines. They force the retraction or redesign of models that may represent significant competitive advantage. In some cases, they become market conduct matters with reputational consequences that extend well beyond the underlying regulatory proceeding.

The vendor liability question deserves particular attention. Many carriers rely on third-party data sources and model providers whose internal governance practices may not be aligned with emerging regulatory standards. The NAIC bulletin is unambiguous that this does not limit carrier responsibility — and regulators have signaled they will look through vendor relationships in examinations. Insurers that cannot demonstrate diligence over the AI tools they use, regardless of source, are exposed.

Strategic Implications

Governance is a board issue. Regulators in New York, Colorado, and the NAIC framework all contemplate board-level oversight of AI risk. Directors should understand their AI governance exposure with the same fluency they bring to catastrophe risk or investment strategy. "We didn't understand how the model worked" is not a defensible position in a market conduct examination — or in litigation.

Cross-functional alignment is operational, not aspirational. Effective AI governance requires data scientists, actuaries, underwriters, legal, compliance, and risk management to operate within a shared framework, not sequentially but in an integrated governance structure before models go into production. Many firms have the talent in place; what is often missing is the structure that connects these functions.

Proactive regulatory engagement shapes outcomes. Supervisory expectations are still forming. Regulators are actively learning what effective AI governance looks like in practice, and carriers that engage constructively — demonstrating robust governance to regulators rather than waiting to be examined — have a genuine opportunity to shape emerging standards. This is the window for proactive engagement, and it is not indefinitely open.

Cross-border consistency is a strategic requirement for global carriers. The U.S. state-based framework and the EU's risk-tiered approach reflect different regulatory architectures. Multinational insurers operating across both markets must reconcile these requirements in a coherent enterprise governance framework, not manage them as separate compliance programs.

Narrative and public trust matter. The political and public conversation about AI in underwriting and claims is intensifying — particularly around fairness, accessibility, and the use of data that consumers do not know is being used. How insurers communicate about their AI governance practices will influence both regulatory relationships and public perception. Framing this proactively, rather than defensively in response to criticism, is the more durable approach.

Looking Ahead

AI governance in insurance will not resolve through a single legislative moment. It will evolve incrementally through state bulletins, market conduct examinations, litigation, and the accumulating weight of regulatory precedent. Firms that build governance capability now — as an enterprise function rather than a compliance checkbox — will be better positioned to deploy AI at scale, adapt as expectations evolve, and engage regulators from a position of credibility rather than remediation.

Leadership teams should evaluate how AI governance connects to underwriting strategy, capital allocation, vendor oversight, and stakeholder engagement. The institutions that align governance, regulatory strategy, and public positioning early will have a meaningful structural advantage as oversight becomes more formalized.

Bushnell Mueller helps insurers and their distribution and risk partners navigate the policy, regulatory, and reputational dimensions of strategic business issues. For more on our work on AI governance, data policy, and regulatory strategy, contact us.