AI Governance and Insurance

From Innovation to Oversight


Insurance at the Intersection: The Business Case for Trust. This article is the first in a four-part series examining the strategic forces reshaping the insurance operating environment.

For insurers, AI governance is no longer a technical compliance question. It is a strategic issue at the intersection of capital allocation, operational resilience, consumer protection, and the social license to operate. And the governance landscape itself is now more complex, with state regulators pressing forward, a new federal executive order creating friction at the margins, and the NAIC accelerating its own oversight architecture.

The Regulatory Landscape

The clearest signal at the national level came in December 2023, when the NAIC adopted its Model Bulletin on the Use of Artificial Intelligence Systems by Insurers. The rate of adoption has been notable: the NAIC itself stated in December 2025 that over half of all states have now adopted the bulletin or similar guidance, and its Big Data and Artificial Intelligence Working Group continues to develop new tools for regulatory examination. In February 2026, the NAIC published a summary of its AI Systems Evaluation Tool pilot — a structured framework regulators will use to assess insurer AI governance during examinations. A model law on third-party data and models is anticipated later in 2026, potentially including licensing requirements for vendors.

The Model Bulletin establishes expectations insurers should treat as baseline requirements. Carriers must develop and maintain a written AI governance program addressing model risk, consumer outcomes, and internal controls across the full lifecycle of each AI system. Governance accountability must be traceable to senior management or a board-level committee. Critically, the bulletin is explicit that carriers bear responsibility for AI systems and data sources they use, regardless of whether those tools are built in-house or acquired from third parties — and regulators may request documentation on vendor diligence as part of any investigation or market conduct examination. A 2025 NAIC health insurer survey found that nearly a third of health insurers still do not regularly test their AI models for bias, a gap regulators have specifically flagged.

States are also moving independently. Colorado has codified prohibitions on the use of external consumer data and predictive models that produce unfair discrimination, and its implementing regulation places quantitative testing obligations on carriers. New York's DFS Insurance Circular Letter No. 7 (2024) requires insurers to demonstrate that AI systems and external data sources do not proxy for protected classes or generate disproportionate adverse effects. California's Insurance Commissioner has signaled active scrutiny of algorithmic bias across lines of business.
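To make the quantitative side of these obligations concrete, the sketch below shows one common screening statistic, the adverse impact ratio: each group's approval rate divided by the most-favored group's rate. This is an illustrative example only, run on synthetic data with hypothetical column names; it is not the methodology Colorado's regulation prescribes, and an actual testing program would be considerably more involved.

```python
# Illustrative sketch only: an adverse impact ratio (AIR) check on model
# approval decisions, grouped by an estimated protected attribute.
# Column names and the 0.80 threshold are assumptions for illustration,
# not the methodology any specific regulation prescribes.
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame,
                         decision_col: str = "approved",
                         group_col: str = "estimated_group") -> pd.Series:
    """Each group's approval rate divided by the highest group's rate."""
    rates = df.groupby(group_col)[decision_col].mean()
    return rates / rates.max()

# Toy synthetic data standing in for model decisions.
df = pd.DataFrame({
    "approved":        [1, 1, 0, 1, 0, 1, 1, 0, 0, 1],
    "estimated_group": ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"],
})

air = adverse_impact_ratio(df)
print(air)  # A: 1.00, B: 0.67

# Groups below a chosen threshold (0.80 echoes the traditional
# "four-fifths rule" from employment law) are flagged for review.
print("Flagged:", list(air[air < 0.80].index))
```

The 0.80 threshold here is borrowed from employment law convention; an insurer's own testing standard would be set with its actuaries, counsel, and the applicable regulation in view.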

Internationally, the EU AI Act classifies certain AI applications in life and health insurance as high-risk systems, triggering conformity assessment requirements and human oversight obligations. For global carriers with European operations, this adds a cross-border governance dimension that must be reconciled with the U.S. framework.

A New Federal Dimension

A significant development came in December 2025, when President Trump signed an executive order seeking to displace state AI regulation in favor of a uniform federal standard. The order directs the Department of Justice to establish an AI litigation task force to challenge state AI laws and instructs agencies to condition certain federal funding on states not enacting conflicting AI regulations. Congressional attempts to enact a statutory moratorium on state AI laws have failed twice on a bipartisan basis.

For insurers, the practical effect of the executive order is limited but not irrelevant. Under the McCarran-Ferguson Act, state laws regulating the business of insurance take precedence over federal directives absent explicit congressional action. The NAIC stated plainly that the order should have no direct effect on state insurance regulation of AI, while also noting that it creates uncertainty that could delay business decisions and defer consumer protections. State insurance legislators, through NCOIL, responded with sharp opposition. The state-based AI governance framework for insurance remains intact — but the federal tension is now a live dimension of the regulatory environment that insurers and their government affairs teams need to monitor closely, particularly if Congress revisits preemption legislation.

Why This Is a Business Issue, Not Just a Compliance Issue

Regulatory pressure is focusing on the places where AI has the greatest operational leverage — and therefore the greatest potential for consumer harm. Underwriting and pricing models that use external consumer data face heightened scrutiny for proxy discrimination, particularly where granular segmentation produces disparate outcomes that cannot be explained or justified. Claims automation is under examination for procedural fairness: touchless denial processes are an emerging focus of market conduct examiners. Fraud detection models raise parallel concerns about accuracy and disparate impact.
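As a concrete illustration of what proxy screening can look like at its simplest, the sketch below checks whether a single candidate rating feature predicts a protected attribute well above chance. It is a minimal example on synthetic data: the variable names are hypothetical, and production proxy analyses typically involve multivariate controls and estimated attributes (for example, BIFSG), not a single univariate AUC.

```python
# Illustrative sketch only: screening one candidate rating feature for
# proxy potential by testing how well it predicts a protected attribute.
# Variable names and data are synthetic assumptions; real proxy analyses
# use multivariate controls, not one univariate AUC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic data: a binary protected attribute and a candidate feature
# deliberately constructed to correlate with it.
protected = rng.integers(0, 2, size=2000)
feature = 0.8 * protected + rng.normal(size=2000)

# An AUC well above 0.5 means the feature carries information about the
# protected attribute and warrants scrutiny before use in rating.
X = feature.reshape(-1, 1)
model = LogisticRegression().fit(X, protected)
auc = roc_auc_score(protected, model.predict_proba(X)[:, 1])
print(f"Proxy-screening AUC: {auc:.2f}")  # roughly 0.71 in this setup
```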

Governance failures in these areas are not simply exposure to fines. They can force the retraction or redesign of models that may represent significant competitive advantage. In some cases, they become market conduct matters with reputational consequences that extend well beyond the underlying regulatory proceeding.

The vendor liability question deserves particular attention. Many carriers rely on third-party data sources and model providers whose internal governance practices may not be aligned with emerging regulatory standards. The NAIC bulletin is unambiguous that this does not limit carrier responsibility — and regulators have signaled they will look through vendor relationships in examinations. The anticipated 2026 model law on third-party data and models would extend this scrutiny further. Insurers that cannot demonstrate diligence over the AI tools they use, regardless of source, are exposed.

Strategic Implications

Governance is a board issue. Regulators in New York, Colorado, and the NAIC framework all contemplate board-level oversight of AI risk. Directors should understand their AI governance exposure with the same fluency they bring to catastrophe risk or investment strategy. "We didn't understand how the model worked" is not a defensible position in a market conduct examination — or in litigation.

Cross-functional alignment is operational, not aspirational. Effective AI governance requires data scientists, actuaries, underwriters, legal, compliance, and risk management to operate within a shared framework — not sequentially, but in an integrated governance structure before models go into production. Many firms have the talent in place; what is often missing is the architecture that connects those functions.

Proactive regulatory engagement shapes outcomes. The NAIC's AI Systems Evaluation Tool is being developed in active consultation with the industry, and carriers that engage constructively — demonstrating robust governance to regulators rather than waiting to be examined — have a genuine opportunity to influence how that tool is applied. This window is not indefinitely open.

Cross-border consistency is a strategic requirement for global carriers. The U.S. state-based framework and the EU's risk-tiered approach reflect different regulatory architectures. The new federal executive order adds a third layer of uncertainty for carriers navigating cross-border governance. Multinational insurers must reconcile these requirements in a coherent enterprise framework, not manage them as separate compliance programs.

Narrative and public trust matter. The political and public conversation about AI in underwriting and claims is intensifying — around fairness, accessibility, and the use of consumer data that individuals do not know is shaping decisions about them. How insurers communicate about their AI governance practices will influence both regulatory relationships and public perception, particularly as the federal-state tension over AI regulation plays out in public.

Looking Ahead

AI governance in insurance will not resolve through a single legislative moment. Supervisory expectations will continue to formalize through state bulletins, market conduct examinations, the deployment of the NAIC's examination tool, and the anticipated model law on third-party oversight. The federal preemption debate adds a layer of structural uncertainty, but the McCarran-Ferguson shield and the consistent bipartisan resistance to statutory preemption suggest the state-based framework will hold.

Firms that build governance capability now — as an enterprise function rather than a compliance checkbox — will be better positioned to deploy AI at scale, adapt as the regulatory architecture evolves, and engage from a position of credibility rather than remediation.

Bushnell Mueller helps insurers and their distribution and risk partners navigate the policy, regulatory, and reputational dimensions of strategic business issues. For more on our work on AI governance, data policy, and regulatory strategy, contact us.