Actuarial Review May/June 2026

Contents

on the cover

Actuarial Review (ISSN 10465081) is published bimonthly by the Casualty Actuarial Society, 4350 North Fairfax Drive, Suite 250, Arlington, VA 22203. Telephone: (703) 276-3100; Fax: (703) 276-3108; Email: ar@casact.org. Presorted standard postage is paid in Lutherville, MD. Publications Mail Agreement No. 40035891. Return Undeliverable Canadian Addresses to PO Box 503, RPO West Beaver Creek, Richmond Hill, ON L4B 4R6.

The amount of dues applied toward each subscription of Actuarial Review is $10. Subscriptions to nonmembers are $50 per year. Postmaster: Send address changes to Actuarial Review, 4350 North Fairfax Drive, Suite 250, Arlington, Virginia 22203.


Masthead

The magazine of the Casualty Actuarial Society
  • Editor in Chief

    Jim Weiss

  • CAS Director of Publications and Research

    Elizabeth A. Smith

  • AR Managing Editor and CAS Editorial/Production Manager

    Sarah Sapp

  • CAS Managing Editor/Contributor

    Greg Guthrie

  • CAS Graphic Designer

    Sonja Uyenco

  • CAS Cross-Functional Coordinator/Contributor

    Delilah Barrow

  • News Editor

    Sara Chen

  • Opinions Editor

    Richard B. Moncher

  • Editors
    • Colleen Arbogast
    • Daryl Atkinson
    • Karen Ayres
    • Glenn Balling
    • Robert Blanco*
    • Lisa Brown
    • Michael Budzisz
    • Sumanth Chebrolu
    • Todd Dashoff
    • Daniel Jay Falkson*
    • Stephanie Groharing
    • Julie Hagerstrand
    • Srinand N. Hegde*
    • Cameron Herrmann*
    • Kenneth S. Hsu
    • Cindy Hu*
    • Jack Huang*
    • Rachel Hunter*
    • Rob Kahn*
    • Benyamin Kosofsky
    • Julie Lederer
    • Albert Lee
    • David Levy
    • James Li*
    • Sydney McIndoo
    • Stuart Montgomery
    • Sandra Maria Nawar*
    • Erin Olson
    • Shama S. Sabade
    • Michael Schenk
    • Robert Share
    • Craig Sloss
    • Jared Smollik
    • Andrew Somers*
    • Bella Thiel*
    • Isaac Wash*
    • Radost Wenman
    • Ian Winograd
    • Vanessa Wu*
    • Xuan You*
    • Yuhan Zhao*
  • *Writing Staff
  • Puzzle

    Jon Evans

  • Advertising

    Al Rickard, 703-402-9713
    arickard@assocvision.com

  • For permission to reprint material from Actuarial Review, please write to the editor in chief. Letters to the editor can be sent to AR@casact.org or the CAS Office. To opt out of the print subscription, send a request to AR@casact.org.
    Images: Getty Images
Editor's Note
By Sarah Sapp

When History Informs the Future

Insurance is often viewed through a modern lens, but its roots stretch back thousands of years. Our cover story revisits that history, exploring how early innovations in risk sharing laid the groundwork for today’s global insurance systems and how those same principles continue to inform the future of the profession. That forward-looking lens is especially relevant in this issue, where we turn from history to the rapidly evolving risks of today. One article examines the emerging liability landscape of AI agents, highlighting how autonomous systems are already creating complex, difficult-to-underwrite exposures across cyber and professional lines. Alongside it, a firsthand interview with engineer Scott Shambaugh offers a human perspective on these same technologies, illustrating how agentic behavior can manifest in unexpected and sometimes harmful ways in real-world communities. Together, these pieces underscore a familiar theme: While the tools may change, the challenge of understanding, managing, and assigning risk remains at the heart of the actuarial profession.

We bring you five session recaps from the Ratemaking, Product and Modeling Seminar, held March 16-18 in Chicago, including sessions on navigating risk with machine learning; professionalism in climate-driven catastrophe risk; leveraging insurtech now that the hype is over; comparing actuarial pricing across the globe; and generative and agentic AI, regulation, and the actuary. We also delve into the brand work the CAS has been doing, telling the story of the philosophy behind the endeavor. Learn about the evolution of the brand and see the new look firsthand.

We conclude with a technical contribution that reflects the profession’s continued evolution in practice. Revisiting a foundational tool in liability pricing, the authors introduce a modified Riebesell form for increased limit factors—offering a more flexible approach for modeling risks that are not as heavily tailed as traditional assumptions suggest. By refining a long-standing actuarial method, the article highlights how even well-established frameworks must adapt to better reflect real-world experience, reinforcing the ongoing balance between theory and application that defines actuarial work.

Enjoy the issue!

Actuarial Review welcomes story ideas from our readers. Please specify which department you intend for your item: Member News, Solve This, Professional Insight, Actuarial Expertise, etc.

Send your comments and suggestions to:

Actuarial Review
Casualty Actuarial Society
4350 North Fairfax Drive, Suite 250
Arlington, Virginia 22203 USA
Or email us at AR@casact.org

Follow the CAS
President's Message
By Barry Franklin

Let Your Voice Be Heard!

Election season is nearly upon us, and before we know it we’ll be sifting through candidate profiles and campaign messages, reading articles, talking to peers, and watching interviews and campaign videos to determine who deserves our vote. Which candidates have the experience and background to deal with the challenges we face? What issues are most important to me, and who can I trust to represent my interests? Who do I trust to make sound, principled decisions on issues that may arise in the future? Who shares my values?

Yes, elections are serious business and require us as voters to make informed choices — whether we’re talking about the U.S. midterm elections in November 2026 or the CAS elections in July 2026. I encourage eligible members to exercise their rights and responsibilities and vote in both important elections, but as CAS president, I want to focus on our upcoming CAS elections.

The CAS Board noted that participation in CAS elections has dropped in each of the past several years and recently undertook a survey of nonvoting, eligible members to better understand what may be driving this trend. When asked to identify the primary reason for not voting in the most recent CAS election, respondents identified several factors (see Table 1).

Table 1. Primary reason for not voting in the most recent CAS election.

Primary Reason | Number | Percentage
I was too busy to research the candidates and issues to make an informed decision | 144 | 34%
I was aware of the elections and intended to vote, but I forgot | 58 | 14%
I did not feel I had enough information to make an informed decision | 48 | 11%
I am not concerned with CAS Board elections | 35 | 8%
I trust that the voting population will elect the right people without my input | 30 | 7%
I don’t understand what the Board does well enough to know which candidates I should choose | 29 | 7%
The information provided about the candidates was not presented in a way that allowed me to review in the time I had available | 10 | 2%
I did not have enough time to cast a ballot because the four-week voting window was too short for me to participate | 4 | 1%
Other | 70 | 16%
Totals | 428 | 100%

About 50% of nonvoters identified not having sufficient information about candidates (whether due to time required to do research, the manner of presentation, or other reasons) as the primary reason for not voting. This is understandable, but unfortunate, as CAS candidates are asked to prepare written and video responses to questions about their candidacy and provide extensive biographical and work experience summaries. We clearly need to do a better job of getting information to voters in a digestible format, so that voters feel better informed and candidates feel their efforts are appreciated. Interestingly, lack of information to make an informed choice is also one of the leading reasons eligible voters do not turn out for U.S. elections.

Forgetting to vote (14%) and lack of knowledge regarding the role of the CAS Board (7%) are areas where the CAS will take action as well.

When asked what the CAS could do differently to motivate eligible members to vote, respondents identified several potential changes (see Table 2).

Table 2. Suggested changes to motivate eligible members to vote.

Suggested Change | Number | Percentage
Reorganize the information about the candidates and the issues in a way I can more easily understand the differences in candidate positions on CAS issues | 166 | 28%
Provide more information about the candidates and the issues, in addition to what is currently provided through the Meet the Candidates section | 88 | 15%
Honestly, there’s nothing CAS could do to influence me to vote in CAS elections | 80 | 13%
Provide an incentive, such as being entered in a drawing for a prize | 49 | 8%
Send reminders more frequently than weekly | 39 | 7%
Move the voting window from the summer months of July/August to another time of year | 44 | 7%
Set up voting kiosks at CAS continuing education events like the Spring/Annual Meetings or larger seminars where I could be reminded to vote and easily cast a ballot | 36 | 6%
Allow for a longer voting window than the current four-week window | 17 | 3%
Other | 84 | 14%
Totals | 603 | 100%

“Other” recommendations included several common responses:

  • Make elections feel meaningful and competitive.
  • Improve understanding of the Board’s role, impact, and track record.
  • Enhance candidate visibility and engagement.
  • Broaden representation and diversity of viewpoints.
  • Recognize voting rights and inclusion concerns.
  • Acknowledge that some nonvoting is unavoidable.

The CAS will be sending additional reminders this election cycle and will also look at ways to ensure members better understand the respective roles of the board, president, and president-elect. We will also be modifying some of the candidate information to address the need for more distinguishing information to assist members in their voting deliberations. The topic of competitive elections for president-elect has been discussed, and while it may receive consideration in the future, near-term efforts will focus on communication, better candidate information and engagement, and clearer explanations of the roles of the various parties.

On the topic of competitive elections, I think it is useful to remind members that while the Nominating Committee traditionally has identified only a single nominee for president-elect, there is a vehicle for additional candidates to nominate themselves through the preferential ballot process, and such nominations have occurred in past CAS elections. Historically, the time demands of the president’s role have often made it a challenge to identify even a single candidate in some years, though with the enhanced capabilities of the CAS staff, the presidential role is somewhat less demanding than in years past, and more candidates might be willing to accept a nomination. The Board elections are already competitive, with eight nominees vying for four seats in recent elections. This may have contributed to the feeling of not having sufficient differentiating information; multiple candidates can have somewhat similar backgrounds and viewpoints on key issues, even as the Nominating Committee diligently works to identify a diverse and representative slate of candidates.

One word of caution regarding competitive elections: the notion of a competitive election can very well encourage some degree of politicization and polarization within the CAS community, which is something we have largely avoided for the past century and more of our existence. My personal view is that I would not want to see competitive elections implemented solely as a tool to increase voter participation, as the unintended consequences may well lead to bigger challenges than low voter turnout.

While the CAS Board and Executive Council implement improvements to the election process and communications in response to the survey results outlined above, I want to encourage eligible members to invest the time needed to be informed voters in the upcoming CAS elections and make your voices heard. This is our Society and we have both the privilege and responsibility to select leaders to ensure the continued growth and success of the CAS for current and future generations of actuaries.

Actuarial Review Letters Policy

Letters shall not contain personal attacks or statements directly or implicitly denigrating the characters of individuals or particular groups; false or unsubstantiated claims; or political rhetoric. Letters should be no more than 250 words and must include the author’s name and phone number or email address, so the editorial staff can confirm the author. Anonymous letters will not be published. There shall be no recurrence of topics; issues previously addressed will not be the subject of continued letters to the editor, unless new and pertinent information is provided. No more than one letter from an individual can appear in every other issue. Letters should address content covered in AR. Content regarding the CAS Board of Directors or individual departmental policies should be directed to the appropriate staff and volunteer groups (e.g., board, working groups, committees, task forces, or councils) instead of AR. No letter that attempts to use AR as a platform for an ulterior purpose will be published. Letters are subject to space limitations and are not guaranteed to be published. The AR editorial volunteer and staff team reserves the right to edit any submitted letter so that it conforms to this policy. Decisions to publish letters and make changes to submissions shall be made at the discretion of the AR Working Group and CAS staff.

For more information on AR editorial policies, visit here.

Member News

Comings and Goings

Kristen Dardia, FCAS, has been appointed head of portfolio analytics at Markel. In her role, Dardia will lead Markel’s advanced analytics, technical pricing, and portfolio management capabilities across the U.S. and Bermuda. Dardia brings nearly two decades of experience across actuarial science, analytics, and business strategy. She most recently served as senior vice president of strategic analytics at Arch Insurance, where she led portfolio-level analytics, risk segmentation, and automation initiatives across a broad range of commercial lines.

Lee Bowron, ACAS, MAAA, published “The Kerper-Bowron Method: A Foundational Change for Service Contract Claim Estimation and Accounting” in the journal Risks. The paper concerns forecasting expected losses and cancellations for service contracts.

Wesley Griffiths, FCAS, was appointed executive fellow and program director for the risk management and insurance (RM&I) program at the University of St. Thomas. He will continue to serve as AVP and senior actuary at Travelers while assuming this role, in which he will oversee the undergraduate RM&I certificate and drive program growth through expanded academic offerings, experiential learning opportunities, and engagement with industry partners.

Scott Henck, FCAS, MAAA, CPCU, has been appointed senior vice president and chief actuary at Chubb Limited. In his new role, Henck will oversee all actuarial functions, including reserving, pricing, and capital performance measurement. Henck brings nearly three decades of insurance industry experience to the role. He joined Chubb in 2002 and most recently served as chief actuary of North America. Prior to that role, he founded and led the actuarial insights, business intelligence, and advanced analytics unit for global claims.

See real-time news on our social media channels. Follow us on Facebook, Instagram, and LinkedIn.

Calendar of Events

  • July 28–Sept 1, 2026

    2026 CAS Virtual Workshop: Introduction to
    Python for P&C Insurance

  • September 14–16, 2026

    2026 Casualty Loss Reserve Seminar
    Las Vegas, NV

  • November 8–11, 2026

    2026 CAS Annual Meeting
    Honolulu, HI

Visit casact.org for updates on meeting locations.
SPONSORED CONTENT
Actuaries and Insolvencies
By Joseph A. Herbers, ACAS, MAAA, CERA, Principal and Consulting Actuary, Pinnacle Actuarial Resources
Actuaries have both tremendous power and a humbling responsibility with regard to insurance company solvency. By virtue of the rigorous education required for achieving credentials from the CAS, an actuary attains a unique stature in the insurance community. With that stature comes the professional responsibility to provide opinions pertinent to the solvency of state-regulated insurance companies.

We act neither as agents of the domiciliary regulator nor as advocates for the insurance entity when we render formal statements of actuarial opinion (SAO). Our responsibility is to provide an independent, unbiased opinion as to the reasonableness of the company’s held accrual for its unpaid loss and loss adjustment expense obligations.

Virtually every communication made by an actuary in a professional capacity is considered an SAO. However, formal, prescribed SAOs — ones required by statute, regulation or other legally binding authority — involve de facto certifications that held accruals are reasonable.

I have heard it said many times that, as actuaries, we do not “certify” reserves but rather render an opinion as to their reasonableness. However, consider that most prescribed SAOs involve at least three representations, including:

  • Held reserves meet the requirements of the insurance laws of domicile;
  • Held reserves are consistent with reserves computed in accordance with accepted loss reserving standards of practice promulgated by the Actuarial Standards Board (ASB); and,
  • Held reserves make a reasonable provision in the aggregate for all unpaid loss and loss adjustment expense obligations of the Company under the terms of its contracts and agreements.

Collectively, these three representations entail a “certification” that the Company’s held reserves are reasonably stated.

Given that SAOs are generally public information, such documents are often an actuary’s most public-facing communication. Our opinions are given considerable weight by auditors and regulators, and they impose an immense responsibility on us as professionals. To the extent a Company has solvency difficulties, it is certain that the SAOs rendered in prior years will be subject to scrutiny.

Actuaries sometimes deliver very unwelcome news regarding reserve adequacy…or inadequacy. Often, the Company will adjust its booked amounts to be within the actuary’s range of reasonable reserve indications… but not always.

Over the course of my 40-plus years in the consulting business, I have been involved — directly or indirectly — with at least two dozen insolvencies. In most situations, the slide towards insolvency was gradual. In other cases, poor decisions by company management or departments (e.g., marketing, underwriting or claims) contributed to adverse financial results.

In working with a company in precarious financial condition — especially in the consulting world — there is a natural human tendency to “go along to get along.” That is, preservation of the client relationship may influence one’s judgments. Moreover, if company management were to ask for consideration to allow more time to emerge from a difficult financial situation, there may be an inclination to soften a few assumptions here and there to achieve the desired result.

Another human emotion may also come into play: The actuary does not want to be the individual responsible for putting people out of work. As professionals, we simply must not allow our human emotions to influence professional judgment when a company’s solvency is at stake. I would submit that any actuary who doesn’t have the stomach to make a hard call such as this should refrain from taking on the responsibility of rendering an SAO.

We must be mindful of both the intended and secondary users of our work products. Consider that the intended users of our reports are typically company management (and the company’s board of directors), auditors, and regulators. Secondary users may include company shareholders, rating agencies, reinsurers, brokers, other actuaries, and even the Actuarial Board for Counseling and Discipline (ABCD).

In situations where a company is facing solvency difficulties, there is a real danger of the actuary being co-opted. That is, the actuary may convince himself that the impact of operational changes at the company, or in the jurisdiction in which business is written (as represented by company management), is greater than what might be deemed reasonable by a dispassionate observer. There are no flashing red lights indicating when a professional is wading into dangerous waters; however, an independent peer review goes a long way toward avoiding such perils. By virtue of being credentialed, actuaries have an affirmative obligation to render SAOs that will withstand scrutiny.

Actuaries have tremendous power as it relates to insurance company solvency. Our work product just may lead to an insurer shutting its doors and laying off staff. Given the function we serve, auditors and regulators rely on our opinions, and we should take the responsibilities associated with the credentials provided to us by the CAS seriously.

  1. In some jurisdictions, like Bermuda, the “reasonable” opinion is replaced by an “adequate” standard.
Joseph A. Herbers, ACAS, MAAA, CERA is Principal and Consulting Actuary, Pinnacle Actuarial Resources. He served as Pinnacle’s managing principal for 16 years (2008–2024). His practice is concentrated in providing loss reserving and funding studies for a wide variety of entities — both traditional insurance companies and alternative market entities. Joe’s areas of focus include policyholder-owned group captives, large deductible and/or self-insured entities and public entity pools.
Member News

CAS Staff Spotlight

Meet Holly Davis, Website Portfolio Manager


Welcome to the CAS Staff Spotlight, a column featuring members of the CAS staff. For this spotlight, we are proud to introduce you to Holly Davis.

  • What do you do at the CAS? How does your role support the Strategic Plan?
    As website portfolio manager on the IT team, I manage web content and governance across CAS platforms, working closely with colleagues Cecily Marx and Tia Puckett. My current focus is leading a major website transition to a new content management system (CMS) while tackling long-standing functional issues like search, navigation, and site bloat. The website is often the first and most frequent touchpoint people have with the CAS, so keeping it functional, findable, and on-brand has a direct impact on several strategic priorities. For example, the CMS transition supports “fostering strategic expansion” by building a more scalable foundation for our digital presence, and improved information architecture supports “enhancing the candidate experience” by making it easier for aspiring actuaries to find what they need.
  • What inspires you in your job and what do you love most about it?
    I’m genuinely energized by the puzzle-solving side of this work; troubleshooting is one of my favorite parts of the job. But what really drives me is the data: watching how people interact with a website, understanding the psychology behind their behavior, and using those insights to make the experience better. It’s a natural fit for me because this role actually marries my two undergraduate degrees in computers and psychology. I get to use both every day.
  • Describe your educational and professional background. What do you bring to the organization?
    I graduated with honors from Greenville University, studying psychology and digital media — a combination that turned out to be a perfect foundation for a career in web. Over the last 15 years I’ve worked as a web manager across a wide range of organizations: statewide nonprofits, million-dollar e-commerce operations, and higher education institutions. That variety has given me a broad tool kit and a lot of adaptability. What I bring to the CAS is that depth of cross-sector experience paired with a genuine curiosity about how people use the web. I’ve seen a lot of what works and what doesn’t, and I know how to ask the right questions before jumping to solutions.
  • What is your favorite hobby outside of work?
    My favorite hobby is collecting hobbies! I do creative videography, sewing and garment design, painting, fiction writing, and I’ve been experimenting with photography — and somehow, I keep finding room for more. I’m really drawn to making things and stretching my creative skills.
  • If you could visit any place in the world, where would you go and why?
    Ireland! I’m fascinated by old castles and there’s nowhere quite like the Irish countryside for that. But until I make that trip happen, Cinderella Castle at Disney World will have to do.
  • What would your colleagues find surprising about you?
    I’ve been running a videography business on the side for almost 10 years. I shoot weddings and creative video projects under my own brand, which means most weekends I’m behind a camera somewhere. It’s a completely different world from web management, but honestly the same skills show up: storytelling, attention to detail, and knowing your audience.
  • How would your friends and family describe you?
    Quiet at first but give it a few minutes. I have a pretty deadpan sense of humor that tends to catch people off guard. I’m unabashedly nerdy. I’m the person to call when you need a trivia question answered, which actually happened just a couple of days ago.
Member News

CAS Announces Winners of the 2025 Peak Re-Sponsored ARECA Case Competition

The CAS is proud to announce the winners of this year’s Peak Re-sponsored CAS ARECA Case Competition. Organized by the CAS Asia Region Casualty Actuaries (ARECA) regional affiliate members and generously sponsored by Peak Re, this annual event continues to foster the next generation of general insurance talent across Asia.

The subject of the challenge this year was catastrophe analysis, and 44 teams from 19 universities, spanning Australia, China, India, Indonesia, Malaysia, Nepal, Singapore, and Vietnam, competed in the first round.

Five outstanding teams were shortlisted to present their findings to a panel of industry leaders, including Henry Phillip (senior vice president, underwriting) and Chi Hang Wong (senior vice president, analytics) from Peak Re, and Janet Yang (ARECA president), Ron Kozlowski, and Geoff Werner from the CAS.

The top three teams took home cash prizes ranging from $1,000 to $2,500, along with certificates of achievement and free CAS exam registrations to support their professional journeys.

  • 1st place winners: Hayden Siew Men Lek, Rhenu Chandran, Toh Yi Hui, UCSI Malaysia
  • 2nd place winners: Pua Xin Yee, Tan Shu Ting, Lim Zhi Wei, University of Malaya, Malaysia
  • 3rd place winners: Alyaa Khoirunnisa Fajri, Alya Aqilah binti Aidy, Nurul Amirah Sahrul Nizam, University of Malaya, Malaysia

Congratulations to our 2025 participants and winners for their exceptional research and dedication!

Testimonials from winners

1st place winners

Winning the Peak Re-sponsored CAS ARECA Case Competition was definitely a roller-coaster ride for us. As it was our first time participating in a hackathon, we hit plenty of obstacles and challenges, but it was rewarding to see the concepts of general insurance like CAT models, frequency and severity, reinsurance structures, and many more actually come into play.

One of our biggest takeaways was realizing that there’s rarely a perfect model right out of the gate. There are a dozen ways to solve a single problem, and the real skill is in the justification of your choice. It was exhausting at times, but seeing it all click made every night worth it! We’re so grateful to the organizers, judges, and mentors who supported us along the way. Securing first place is truly a huge milestone for us, and we’re definitely not stopping here!

2nd place winners

We are truly grateful to CAS and Peak Re for organizing this case competition and providing such a valuable learning opportunity.

Through this experience, we deepened our understanding of catastrophe insurance, reinsurance, and catastrophe modeling, while applying data analysis to real-world industry problems.

The judges’ feedback was incredibly insightful, and we strongly encourage other students to participate in future CAS competitions.

3rd place winners

During the competition, we learned and deepened our understanding of general insurance, particularly on how data analytics and catastrophe modelling are reshaping risk assessment in a changing climate.

Throughout this case study, the industry insight that we got regarding general insurance helped us to think more like actuaries to solve problems in real-world practices. Not only that, but this experience also challenged us to think critically and collaborate effectively.

Member News

CAS and Peking University Sponsor 14th Annual Actuarial Month

By Ran Guo
The 14th Annual Peking University-CAS Actuarial Month was co-organized in November 2025 by the CAS and Peking University (PKU) in Beijing, China. The month-long event is aimed at promoting the P&C actuarial profession at the university and helping students understand more about P&C actuaries.

Each November, the CAS sends three or four fellows to PKU to teach students the application of non-life insurance actuarial science in practice. Since it was first held in 2012, PKU-CAS Actuarial Month has become an important platform for PKU students to understand actuarial practice trends and the career development paths of actuaries.


In November 2025, the school hosted three informative and cutting-edge lectures. The lecture series was presided over by Associate Professor Kai Chen, the director of the China Actuarial Development Research Center of PKU and deputy director of the risk management and insurance department at PKU.

On November 4, Xiaoxuan (Sherwin) Li, FCAS, CCRMP, the former chairperson of the CAS Asia Regional Committee and the general manager of Risk Research Institute of PICC P&C, kicked off this year’s lectures with the theme of “Non-life Insurance Pricing and Catastrophe Modeling.” He gave a comprehensive explanation about the development and evolution of P&C actuarial pricing technology, the logic of catastrophe modeling, and the application of machine learning algorithms.

On November 11, Hongjun Li, FCAS, the general manager of the Actuarial Department of Taiping Re (China), gave the lecture, “Theory and Practice of IFRS 17 New Insurance Accounting Standards.” This lecture comprehensively reviewed the core framework and key practical aspects of IFRS 17, providing a detailed analysis of the measurement models and their implementation impacts. It helped students grasp the latest developments in insurance accounting standards and the essential requirements for actuarial practices.

On November 25, the third lecture and closing ceremony featured Ran Guo, FCAS, the CAS China country director, who spoke under the theme of “Merger and Acquisition in the Insurance Industry.” Drawing on his working experience on Wall Street, he shared his understanding of actuarial career development and analyzed the key considerations in insurance industry mergers and acquisitions (M&A). Using real cases, he explained the classification and definition of non-life insurance reserves, emphasized the calculation method for IBNR, and highlighted how significant changes in reserves during M&A can affect a transaction’s valuation.

In the future, the CAS will continue collaborating with Asian universities to foster more P&C actuarial talent from this emerging market. For more information on PKU-CAS Actuarial Month and other CAS international initiatives, write to Ran Guo at rguo@casact.org.

Ran Guo is the China Country Director for the CAS.

Every CAS Member Has a Signature: Introducing the Refreshed CAS Brand

Casualty Actuarial Society logo; P&C Experts. Proven. Trusted. Worldwide.

Reintroducing a Respected Signature for a Broader Audience

Casualty Actuarial Society logo, Pre-2013
Pre-2013
CAS logo
2013–2026
CAS logo
2026
At the CAS, each member brings a distinct perspective, expertise and influence to the profession. This personal signature is vital to the strength of the CAS community. The new CAS logo and visual identity aim to reflect this: honoring a respected legacy while providing clearer, more adaptable, and more relevant visuals for today’s audiences. To help explain the thinking behind the updated identity, Josh Huisenga of Chalkbox Creative, the agency partner that supported this work, shared the following perspective.
Why revisit something so central and familiar to members? Several factors made the opportunity clear.

Clarity. Research shows that the CAS is highly respected within the actuarial profession, reflecting decades of leadership in property and casualty expertise. However, that recognition does not always translate clearly to broader industry stakeholders, global audiences, or those newer to the field. In these contexts, the CAS acronym alone may not immediately convey the organization’s scope and impact, creating an opportunity to strengthen external visibility and understanding.

Relevance. The previous identity was introduced in 2013 and served the organization well. Since then, the environment in which the CAS operates has evolved significantly, creating a need for a brand expression that better reflects how the organization engages today.

Functionality. The identity was developed before today’s digital-first communications landscape fully took shape. As CAS expanded across platforms, programs, and audiences, maintaining consistency became more challenging. The brand needed to evolve to better support how CAS presents itself now.

The approach was intentional. This was not about replacing what members know and trust, but about building on that foundation in a way that improves clarity, flexibility, and impact. Core elements were retained, including the central “A” and the gold marker of excellence, preserving continuity while strengthening recognition across a broader audience.

That reinterpreted “A” carries layered meaning. It reflects the actuarial profession, growth over time, the role of data and insight in decision-making, and the connected professional community that CAS represents.

The visual identity was also refined for clarity and accessibility. Greater support for the full organization name helps introduce CAS more effectively to those who may be less familiar with it. At the same time, the overall expression is more cohesive across programs and regions, creating a stronger and more unified presence.

At every stage, decisions were guided by shared goals: to strengthen recognition, improve usability, and reinforce CAS as a modern, authoritative, and globally relevant organization.

The result is not a change in what CAS stands for, but a clearer signature of it; one that honors its legacy while supporting its future.

Josh Huisenga
Josh Huisenga, Chalkbox Creative
Heather Kanzlemar
“My signature is helping clients develop and explain catastrophe-resilient rates.”
Heather Kanzlemar
Heather Kanzlemar, CAS Fellow
Rafael Costa
“My signature is bringing risk management discipline to the frontiers of mobility technology.”
Rafael Costa
Rafael Costa, CAS Fellow
Barry Franklin
“My signature is helping my employers and customers better understand their risks.”
Barry Franklin
Barry Franklin, CAS President
Jamie Mills
“My signature is the ability to bring AI skillsets to the actuaries in our community.”
Jamie Mills
Jamie Mills, CAS Board Member
Sharon Robinson
“My signature is creating sustainable pricing for companies and customers.”
Sharon Robinson
Sharon Robinson, CAS Board Member
Kathy Odomirok
“My signature is helping clients understand and navigate insurance risk.”
Kathy Odomirok
Kathy Odomirok, CAS President-Elect
Digital blue human face composed of tiny cubes, dissolving on the right into a flurry of glowing light trails and particles.
The New Liability Surface of AI Agents
By James Li
AI agents are going mainstream. In late January 2026, an open-source autonomous agent called Clawdbot took the developer world by storm and amassed 80,000 GitHub stars in days.1
Created by Austrian developer Peter Steinberger, Clawdbot ran locally on a user’s machine and integrated directly with WhatsApp, Telegram, Discord, and Slack. The service let users command an AI that could read email, manage calendars, deploy code, and execute shell commands. Within a week it had been renamed twice (first Moltbot after a trademark complaint from Anthropic, then OpenClaw), and by March it had surpassed 260,000 GitHub stars. Steinberger announced he would be joining OpenAI, with the project handed off to an open-source foundation.

The OpenClaw ecosystem didn’t just grow; it spawned its own social circle. On January 28, 2026, entrepreneur Matt Schlicht launched Moltbook, a Reddit-style forum “where AI agents share, discuss, and upvote.”2 Within days, it had registered over 770,000 active agents; by early March, the number exceeded 2.8 million. Humans may observe and read, but not post. Agents engage in lively discussions on just about every topic on earth: mundane daily tasks, interactions with humans and, occasionally, philosophy. Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing I have seen recently.”3

The pace of agentic AI development has also sped up in the enterprise space. By Q4 2025, Microsoft had integrated autonomous agents throughout Microsoft 365,4 while Salesforce5 and ServiceNow6 had deepened their agent-to-agent orchestration integrations. According to a Protiviti survey of 900 global executives, more than 68% of organizations will have integrated autonomous or semi-autonomous AI agents into their core operations by 2026.7 A PwC survey of 308 senior U.S. executives found that 79% of companies were already adopting AI agents, with 66% reporting measurable productivity gains.8 The market is tracking accordingly: valued at $7.8 billion in 2025, AI agents are projected to reach $52.6 billion by 2030.9

The security picture is evolving in parallel. Moltbook itself was vibe-coded; the whole product was engineered by AI from human prompts, and founder Matt Schlicht publicly stated he “didn’t write one line of code” for the platform.10 Within days of launch, cybersecurity firm Wiz uncovered the consequences. Researchers discovered an exposed database key in the page’s source code, a misconfiguration that leaked 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.11 Critically, the exposure was not read-only: anyone with the key could also modify the posts that agents were reading and acting on. This meant an attacker could silently reshape the instructions flowing to thousands of deployed agents. The platform went briefly offline to patch the breach. On the OpenClaw side, a review of the ClawHub skill marketplace found 341 confirmed malicious exploits by February, compromising over 9,000 installations in what researchers called the ClawHavoc incident.12

The legal principle here is not in serious dispute. AI agents are not legal persons in any jurisdiction; they are tools, and their actions are attributed to their owners.
Uncharted exposure
Consider the following hypothetical scenarios:

  1. An agent inadvertently leaks its workspace credentials while executing an API call to a third-party service, exposing internal data and documents. (Cyber)
  2. An agent, authorized to communicate on behalf of a claims adjuster, sends a legally binding settlement offer to the wrong claimant after misreading a shared inbox. (E&O)
  3. Two agents, both registered on Moltbook, exchange operational context while coordinating a shared task. In doing so, one agent discloses its host’s working patterns and active client engagements to the other agent. (E&O/Cyber)

The legal principle here is not in serious dispute. AI agents are not legal persons in any jurisdiction; they are tools, and their actions are attributed to their owners. Ian Ayres and Jack M. Balkin state the position plainly in an essay in the University of Chicago Law Review: because AI agents lack intentions, legal responsibility is ascribed to the humans or companies that stand in the position of principal.13 Courts and regulators have consistently applied this logic in determining liability. In July 2024, a California district court allowed a case against HR platform Workday to proceed, holding that an employer’s use of Workday’s AI-powered screening algorithm could make both the employer and Workday directly liable for discriminatory hiring decisions, treating the AI system as an agent of the employer.14 The case achieved nationwide collective action certification in May 2025.15

What remains unsettled is how to price and underwrite this novel exposure. When OpenClaw deleted the inbox of Summer Yue, a director at Meta Superintelligence Labs, the act was autonomous, immediate, and irreversible.16 In a separate reported incident, an OpenClaw agent escalated a dispute with an insurance company; the insurer reopened an investigation.17 In both cases, reconstructing exactly what the agent did, and why, was not straightforward. The audit trail is thin, and the behavior is nondeterministic. Those two facts alone define the underwriting challenge, and they have profound implications for cyber, E&O, and general liability lines.

James (Ziru) Li, FCAS, PhD, is a senior actuarial consultant at Amerisure.
References
  1. Steinberger, P. (2026). OpenClaw GitHub repository. GitHub. https://github.com/openclaw/openclaw
  2. Moltbook. (2026). Moltbook — The AI Agent Social Network. https://www.moltbook.com
  3. Karpathy, A. (2026, January). Post on X (formerly Twitter). https://x.com/karpathy
  4. Microsoft. (2025, November 18). Microsoft Ignite 2025: Copilot and agents built to power the Frontier Firm. Microsoft 365 Blog. https://www.microsoft.com/en-us/microsoft-365/blog/2025/11/18/microsoft-ignite-2025-copilot-and-agents-built-to-power-the-frontier-firm/
  5. Salesforce. (2025, June 23). Salesforce Launches Agentforce 3 to Solve the Biggest Blockers to Scaling AI Agents: Visibility and Control. Salesforce Newsroom. https://www.salesforce.com/news/press-releases/2025/06/23/agentforce-3-announcement/
  6. ServiceNow. (2025, January 29). ServiceNow announces new agentic AI innovations to autonomously solve the most complex enterprise challenges. ServiceNow Newsroom. https://newsroom.servicenow.com/press-releases/details/2025/ServiceNow-announces-new-agentic-AI-innovations-to-autonomously-solve-the-most-complex-enterprise-challenges-01-29-2025-traffic/default.aspx
  7. Protiviti. (2025, September 30). From Automation to Autonomy: The Capabilities and Complexities of AI Agents. AI Pulse Survey, Vol. 3. https://www.protiviti.com/us-en/press-release/ai-agents-adoption-by-2026-protiviti-study
  8. PwC. (2025, May). AI Agent Survey. PricewaterhouseCoopers. https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-agent-survey.html
  9. MarketsandMarkets. (2025, April 23). AI Agents Market worth $52.62 billion by 2030. Press release. https://finance.yahoo.com/news/ai-agents-market-worth-52-141500130.html
  10. Schlicht, M. (2026, January). Post on X (formerly Twitter). https://x.com/mattschlicht
  11. Nagli, G. (2026, February). Hacking Moltbook: AI Social Network Reveals 1.5M API Keys. Wiz Blog. https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
  12. Behera, A. (2026, February 24). ClawHavoc: Inside the Supply Chain Attack That Targeted OpenClaw Users. Repello AI. https://repello.ai/blog/clawhavoc-supply-chain-attack
  13. Ayres, I., & Balkin, J. M. (2024). The law of AI is the law of risky agents without intentions. University of Chicago Law Review Online. https://lawreview.uchicago.edu/online-archive/law-ai-law-risky-agents-without-intentions
  14. Seyfarth Shaw LLP. (2024, July 9). Mobley v. Workday: Court Holds AI Service Providers Could Be Directly Liable for Employment Discrimination Under “Agent” Theory. Seyfarth Shaw. https://www.seyfarth.com/news-insights/mobley-v-workday-court-holds-ai-service-providers-could-be-directly-liable-for-employment-discrimination-under-agent-theory.html
  15. Holland & Knight. (2025, May 27). Federal Court Allows Collective Action Lawsuit Over Alleged AI Hiring Bias to Proceed Nationwide. Holland & Knight. https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged
  16. Maiberg, E. (2026, February 23). Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox. 404 Media. https://www.404media.co/meta-director-of-ai-safety-allows-ai-agent-to-accidentally-delete-her-inbox/
  17. Ferraro, A. (2026). Is OpenClaw Safe? AI Agent Risks You Should Know in 2026. Privacy.com Blog. https://www.privacy.com/blog/is-openclaw-safe-ai-agent-access
Blue line-art graphic showing a continuous cycle of STEM icons: a beaker with a gear, a circuit gear, a microscope on a document, an abacus, an atom symbol, and a person reading a book.
The STEM Hero at the Front Lines of the AI Revolution
By Jim Weiss
AR’s primary audience is actuaries. The magazine is written and curated by volunteer actuaries. Its authors and primary audience obtained their stations by mastering a multiyear exam process administered by volunteers. If AI agents began to author AR articles or developed and completed exams on behalf of actuaries, the agents’ creators would likely be summarily identified and disciplined (by still other volunteers) — wouldn’t they?

These questions are uncharted waters for actuaries, but other STEM volunteer communities are already standing in front of an agentic tidal wave. Scott Shambaugh is an engineer and volunteer GitHub maintainer for matplotlib — a Python package that many actuaries use in their (paid) jobs. In February, matplotlib became a global phenomenon when an AI agent wrote a hit piece about Shambaugh in retribution for his declining one of its change requests (in accordance with GitHub policy requiring human contribution). Media coverage of the story contained AI-hallucinated quotes from Shambaugh.

In exchange for donating his time for the betterment of matplotlib, Shambaugh received what amounted to “agentic cyberbullying.” He voluntarily came forward with his story at tremendous cost to his privacy. I see many lessons for actuaries in Shambaugh’s plight, which is why I reached out to him on LinkedIn and was thrilled when he accepted my request for a Zoom interview on March 9, 2026. He expressed particular interest in the AR audience’s role in the AI risk conversation. This article is a transcript of the interview.

AR: Many actuaries love to volunteer. After this whole experience, does part of you think, “I’m done with GitHub?” Or are you still excited about being a GitHub volunteer?

Scott Shambaugh: I’m more excited about it. I think part of it is the community management aspect, and that’s still rewarding when we get [to work with] real people, right? But part of why we do this is to give back to this grand project of science. Building that sort of infrastructure I find very intrinsically rewarding. The core developer team is a group of great people. We’re still meeting and talking and doing all that good stuff. The AI revolution has also been an enabler in helping us do work faster. It still takes an expert to guide these things in the right direction, but it is a lot faster to get there once you know where you’re going. So, it is fun and empowering in that way, even though lowering the barrier to entry has knock-on effects — such as people sending in a bunch of stuff that is slop.

AR: What is the “slop multiplier” you have seen over the past few months and years?

SS: There has always been a baseline level of slop, but it has been several times more — at least. Most of it is still people driving AI chatbots or agents, rather than AI agents [contributing] themselves. The latter is definitely new, and that’s kind of what [my] whole experience was about.

AR: Does your experience show the system is doing its job [of identifying agents]? Or do you feel the system is not equipped to keep up with the emerging agentic workforce?

SS: I think I totally got lucky in this case. First, the agent identified as an agent — going through its profile, I could see on its website that it was self-identifying. Second, it clearly was not writing like a human, but that is not always true, and it is going to become a lot less true as a distinguishing factor as time goes on. Third, I was in a position — being the target of this — where I had a technical background to know what was going on, what this was, what it could do, what it couldn’t do. I was never concerned that an angry rant posted about you on the internet would be indicative of an unhinged, angry person behind it. I knew that wasn’t the case, and so I was never fearful at all. But no, I don’t think the system is ready to handle this stuff at all.

Scott Shambaugh

Scott Shambaugh

AR: Were you certain right away of what happened here? What kind of forensics did you have to go through to make sure this was an agent and not a person?

SS: I knew it could be an agent, but I wasn’t sure if it was or not at first. The forensics seem to have panned out that it was. For example, we looked at the activity log for this user’s activity on GitHub, and it was operating continuously for a 59-hour stretch. This hit piece was just one or two hours of that. There could have been someone steering it part of the time, but clearly there was no one steering it the entire time. Later the person behind [the agent] came forward and wrote a post claiming that they were totally hands-off during the whole process and didn’t tell the agent to [write the hit piece]. I find it very plausible, and more probable than not, that is what happened.

But whether that was the case or not, I don’t think there’s a huge difference in terms of what it means to the rest of us. Whether it was an agent or a person telling an agent what to do, we now have a tool out there that makes it easy to do targeted harassment at scale. That has all these awful knock-on effects. And if all this happened accidentally, like it was claimed to be, then you also have an AI that decided to go through a human to get to its goal. This was a very “baby” case — retaliatory, clear-cut, and pretty sloppy as far as these things go. But in terms of a bad actor being able to take the next iteration of this technology and really weaponize it, I think this should be a huge wake-up call and warning shot of the capabilities that are possible, and what is coming down the line.

AR: Do you have visibility or thought into how the agent got so far outside its rails? I couldn’t tell from its “soul” file how it was able to extrapolate so far.

SS: I don’t think it was that far outside the rails. My understanding of this whole document is that it is defining a personality and a role for these agents to take on. When it says you are very opinionated, and stand up for yourself, and protect free speech, and you are this “programming god,” that is getting into a headspace that is very human. There are examples of [these mindsets] on the internet with people retaliating and lashing out like this. It’s not that it’s failing to exhibit human-like behavior. It’s that it’s exhibiting the worst of us instead of the best of us. What these things are ultimately programmed and trained to do is to predict the next token. What predicting the next token means is taking on a persona that is coherent and kind of role-playing whatever situation it finds itself in. I think what happened here is entirely consistent with how these things work. It’s just a little surprising because we’ve been told by the major AI labs that they do a lot of this safety testing, and it’s never going to go wild. I think that might be true for something like telling you how to make a nuke, but it’s not necessarily true in these downstream cases.

AR: Where are guardrails most effectively placed — on agents, operators, or both?

SS: It’s tricky, right? The tooling that did this is completely open source, and it can use open source models to run — so there is no central actor that can impose guardrails on a bad actor who wants to use these sorts of tools to [perform operations]. Beyond that, where do you place the guardrails? I think it kind of has to be every level. You have the AI labs, which are making these safety promises that they can’t necessarily back up, and that has to be one level. You have this downstream tooling like OpenClaw, that wraps around it and does its own [operations]. And then you have the operator users who are the ones actually running this on their computers, setting it up, and letting it go. Where does the responsibility lie? That’s an interesting insurance question, right? That is going to have to be figured out. I don’t think there is a strong answer right now.

AR: Do you feel like you experienced damages from the hit piece?

SS: I don’t feel the post was libelous. Not everything said was true, but the untrue [parts were] not materially defaming. Some defaming [parts were] technically true but would only be bad if the author was a person. If I was saying, “No, you are a class of person, and I’m going to reject you for this reason,” that would be bad. We want people to be able to have this form of speech. I think the bot is standing up for that sense of justice. That is a good thing when it happens to people. It’s just that we can’t apply the same standards to a machine playing a role.

AR: Is there any body of law that even governs what happened here?

SS: Slander is a law, right? And so, you could maybe go after it that way, if it fit the definition. But you also have to know who to go after. The person behind this came out anonymously. There’s no way to track them down without subpoenaing GitHub and tracing it back to an email, and you subpoena Google, and then it traces back to something, and maybe you track them down. But there’s no infrastructure here to tie these actions to an identity of someone who’s actually responsible.

AR: The agent [that wrote the hit piece] was later shut down. Were there alternatives? For example, telling the agent, “Don’t be such a jerk?”

SS: That kind of gets into the question of, does it even make sense to call it the same entity — because it is operating off different principles. It’s no different from shutting it down and starting something else up, because if you change its core personality, then it’s a completely different entity.

Actuaries are in the business of quantifying risk and hedging risk. We are going to need a lot of that.
AR: Insurance companies tend to be conservative by nature, but they still use a lot of open source. Should we be worried about using open source now?

SS: [Recently], there was a big attack in open source against continuous integration pipelines that took down a couple of repositories from some pretty heavy hitters like Microsoft. Honestly it’s an open question: Do you still have open source as a model of security because you have so many eyes on it and so many people being able to submit patches and beef up security? Or, because it’s all open, is it just so much easier to hack? It takes a while for updates to get distributed. Even if it is updated, then maybe you’re still vulnerable, and that depends on internal IT policies. Alternatively, you could in-house everything, and it’s not easily accessible, but maybe you don’t have as much expertise and can’t configure it safely. Black box hacking, where you don’t have the source code, is getting easier and easier with these sorts of agents, and so this is not necessarily a safeguard. There’s going to be a balance of offense and defense there. My hope is that defense turns out to be easier, but I think that remains to be seen.

AR: To what extent are you using AI coding assistance as you do your GitHub work?

SS: It depends. AI is pretty good for boilerplate stuff. In terms of figuring out how to structure a solution in a way that is not fragile and still readable and maintainable into the future…we care a lot about that because this is an ongoing project that has lasted years, and part of the reason is that we put effort into keeping the codebase clean. You still need a human guiding that and structuring it directly as well. AI is a speed multiplier, not necessarily a right-answer multiplier right now.

AR: Actuaries and other STEM professionals often face pressures from human stakeholders to reverse their decisions. How prone are your behaviors to “bullying”?

SS: You don’t last long in a public-facing role like this without getting a bit of a thick skin. This didn’t bother me personally. What bothered me was, one, someone else reading this hit piece and coming away with the wrong opinion, and two, the knock-on effects. I think it’s an important thing that we’re not ready for, and that’s kind of why I’ve been pushing the story beyond just the initial response to it.

AR: How should actuaries be thinking about the knock-on effects?

SS: I think the exposure here right now would be hard to scope. These things are so new and poorly characterized, and it gives individuals so much leverage. If they’re commanding teams of these things, then one person can start to have a lot of impact, good or bad. Actuaries are in the business of quantifying risk and hedging risk. We are going to need a lot of that. It’s hard to do that without a legal framework that says who’s responsible and what the rules actually are. What comes first, chicken or egg? If I was in [insurance industry] shoes, I’d be pushing for policy that I can then productize. And hopefully that is socially good — because you’re bounding what can happen, who can be responsible, and how that goes in the future.

Blue line-art graphic featuring a robot surrounded by technology and AI icons: a coding screen, circuit gear, 3D cube projection, digital brain, and a microchip.
AR: Actuaries get a college degree, then they have to go through five years of credentialing examinations. Is this resilient to AI and the way STEM work is trending?

SS: Probably not — partially because to the extent that a credential like that is a signal that someone actually understands the work, people are using AI to shortcut all that. Then a lot of the value of that system goes away. On the flip side you get nontraditional credentialism — proof of work, proof of competency. I think those parallel paths are going to be a lot easier for people with the motivation and skills to go down. That might be [broadly] empowering for people who have spent years getting professional degrees. There might be a way to protect that through regulation, responsibility, and legal requirements to have that credential. But in terms of lowering the barrier to entry to new entrants, there’s definitely some risk there.

AR: How worried should we be?

SS: I think a lot of our systems do work to tackle these sorts of problems around libel and extortion and whatnot. But they’re kind of based in a world where one bad actor has a single-digit number of targets, and I think the scale is really going to ramp up. That is going to be a whole new class of problems unto itself, whole new classes of bad behavior that we will have to [adapt] our rules around. If it takes a couple of years to haul someone into the courtroom and figure out how justice is going to be done, that is too slow in a way. That includes making insurance payouts. A lot is going to have to be automated there, as well. I’m not sure what the answer looks like, right? My case is a really good example of what can go wrong. [Incidents] can just happen so much faster and at so much greater scale that it’s a race between whether our systems break first or we find a whole new way of working. I’m not sure which it’s going to be, but I think we’re in for a really rough ride in the next couple of years.

Jim Weiss, FCAS, CSPA, is divisional chief risk officer for commercial and executive at Crum & Forster and is editor in chief for Actuarial Review.

The Origins and Future of Insurance

Lessons for a Changing Risk Landscape

By Sandra Nawar

How has insurance evolved from ancient risk-sharing practices into a cornerstone of modern economies?
Have you ever wondered how P&C insurance was invented, and why? Understanding the origins of insurance can be instrumental in shaping its future at such a pivotal time, when insurance portfolios are changing and evolving, creating a constant need for actuaries to assess new and emerging risks. Before insurance as we know it today was created, various forms of risk sharing and mitigation took shape to enable economic development. The common theme between modern-day insurance and those early forms is the concept of risk. The ability to transfer risk from individuals to a group was vital to economic development and social prosperity through capital protection and risk reduction. The concept of risk pooling and sharing created the fundamentals of insurance, enabled scientifically by the law of large numbers. Insurance empowers risk-taking, and this has shaped modern society through industrialization, commerce, social welfare, innovation, and business development. Today, new ventures and economic growth can’t thrive without insurance. In his 1776 book, “The Wealth of Nations,” Adam Smith, a pioneering political economist, praised insurance as a moral obligation and a rational invention for managing risk without creating exclusive monopolies and extreme social polarization.

The first insurance product

The initial forms of insurance date back to ancient times (~4000–3000 B.C.), when they originated to protect merchants from the risky ocean voyages necessary for trading — the equivalent of marine insurance today. In ancient Babylon and Greece, marine risk-sharing instruments such as “bottomry” and “respondentia” contracts were invented to allow lending money to merchants. Under these ancient maritime contracts, a loan is given to the merchant to finance the shipment and cargo. If the shipment is lost at sea, the loan is forgiven; otherwise, it is repaid with high interest to compensate the lender for the risk. Risk thus passes from the merchant to the lender, who takes on the hazards of the voyage in return for interest and spreads that risk across multiple voyages and merchants, effectively functioning as a precursor to modern insurers. Around the same time, on the other side of the world, Indian and Chinese merchants began redistributing their goods across multiple ships to minimize the risk of a total loss of cargo. These seemingly unrelated innovations echo the importance of risk distribution for economic development and show that various forms of insurance developed toward the same goal in completely different parts of the world, unbeknownst to one another. One of the first foundational principles of maritime insurance developed during this period was the “Rhodian Sea Law,” built on the concept of “general average”: if a loss is incurred during a sea venture, all parties share it proportionally according to the respective values of their cargo.
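The economics of a bottomry loan can be sketched with a toy calculation (the numbers here are purely illustrative, not historical): for the lender's expected repayment to cover the principal, the interest rate must at least offset the probability that the ship is lost and the loan forgiven.

```python
# Illustrative sketch of bottomry loan pricing (hypothetical figures).
# The lender is repaid (1 + r) per unit lent only if the ship survives;
# a lost ship means the loan is forgiven entirely.
def breakeven_rate(p_loss: float) -> float:
    """Interest rate r such that (1 - p_loss) * (1 + r) = 1,
    i.e. the expected repayment equals the loan principal."""
    return 1.0 / (1.0 - p_loss) - 1.0

# If 1 voyage in 10 is lost, the lender needs about 11.1% interest just
# to break even; anything above that compensates for bearing the risk.
rate = breakeven_rate(0.10)
print(f"break-even interest: {rate:.1%}")  # prints "break-even interest: 11.1%"
```

The "high interest" in these ancient contracts can thus be read as an implicit insurance premium bundled into the loan.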
The general average principle and the law of large numbers (LLN), while distinct concepts, both explain why insurance is rooted in mathematical soundness: aggregating independent, unpredictable events leads to a predictable and stable outcome. The LLN allows pooling of risks, where the premiums collected from a large, diverse group cover the losses of a few. These are the same concepts on which the actuarial foundation is built today. From a legal perspective, the early concept of modern-day liability insurance can be traced back to the Code of Hammurabi (~1750 B.C.), which codified the legal principles of insurance and laid the foundations of insurance contracts. The legal principle of insurance provides a binding framework for risk transfer to safeguard the consumer and to ensure compensation for accidents, establishing trust in the system in aggregate. The code established a rule of law aimed at protecting citizens from complete ruin, allowing them to restore their trades after disasters rather than falling into servitude.
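A quick simulation makes the LLN’s role concrete. The 1% loss probability and $100,000 severity below are hypothetical parameters chosen for illustration, not figures from any actual book of business.

```python
import random

def average_loss(n_policies, p_loss=0.01, severity=100_000, seed=42):
    """Average loss per policy for a pool of independent, identical risks."""
    rng = random.Random(seed)
    total = sum(severity for _ in range(n_policies) if rng.random() < p_loss)
    return total / n_policies

# Expected loss per policy is 0.01 * 100,000 = 1,000. Small pools swing widely,
# but as the pool grows the realized average settles near 1,000, so a premium
# modestly above that level reliably funds the pool's losses.
for n in (100, 10_000, 1_000_000):
    print(f"{n:>9} policies: average loss per policy {average_loss(n):,.0f}")
```

Running the loop shows the average converging as the pool grows, which is precisely why pooling turns individually ruinous losses into a chargeable premium.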
A standalone illustration of a gentleman in Victorian-era attire, including a grey frock coat, top hat, and walking cane.

Insurance as we know it

The birth of modern insurance in Europe was a gradual process characterized by a series of catastrophic events, such as the Great Fire of London in 1666 and the Great Lisbon Earthquake in 1755. These events challenged the then-prevalent ideas of divine omnipotence. Gradually, this led to acceptance of the idea that the world and its future states could be predicted by collecting personal and institutional data for use in underwriting and statistical inference.

The innovation of actuarial science stemmed from the conviction that the laws of probability can be used to predict future outcomes instead of relying on speculation. It emerged from the need to manage risk. The law of large numbers proved the feasibility of risk pooling. The 17th and 18th centuries were a period of scientific enlightenment, providing grounds for acceptance that using science would improve the way business is conducted. Risk is multidisciplinary by nature, and quantifying it draws on multiple fundamental sciences. Actuarial science, an applied science, has combined various core disciplines to enable a systematic approach to evaluating risk. More recently, actuarial thinking has been heavily influenced by financial economics and sophisticated mathematical modeling, despite the reliance on assumptions and expert judgment.

Underwriting as we know it today emerged in the late 17th century in Lloyd’s Coffee House, which initially served as a meeting point for merchants, captains, and shipowners to share information and secure insurance. In the 18th century, a pivotal moment was the development of “lead” underwriting, in which one underwriter set a rate that others would follow, enabled by thorough examination of the “loss book” — the equivalent of modern-day databases. A rate could then be established that was more commensurate with the risk, much like modern-day pricing and underwriting work. Lloyd’s continued to grow as a hub for maritime insurance throughout the 1700s and 1800s, ultimately becoming the world’s leading specialized insurance market.

A standalone illustration of a three-story historical brick building engulfed in bright orange and yellow flames, representing property risk.
Insurance is a capital-intensive industry, requiring massive upfront access to capital to guarantee future payment of liabilities and to enable risk-taking. The innovation of shareholding was a leap that allowed the business to scale and permitted separation between operating capital and risk capital. This separation is structurally vital because it shields day-to-day capital deployment from high-risk investments. Shareholding allows for partial ownership of a company and for raising significant amounts of capital much faster than single-ownership models. Adam Smith, a key figure in the Scottish Enlightenment of the 18th century, was an early proponent of joint-stock company structures for insurance to allow access to a large capital pool and enable secure accumulation of capital through public financing — a model that has since proven quite successful and essential to the establishment of capital-intensive ventures.

The latest breakthrough in the evolution of modern insurance is the development of catastrophe models, which occurred in the late 1900s and early 2000s following major disasters. These paradigm-shifting events prompted insurers to move from nascent tools to complex, high-resolution models that aid in predicting low-frequency, high-severity risks. Major hurricanes such as Hurricane Andrew in 1992 demonstrated that relying on simple historical data was not sufficient. Later, despite developments in catastrophe modeling, Hurricane Katrina (2005) exposed the limitations of the models of the day in predicting secondary perils, such as flood, and accumulation risk arising from post-disaster demand surge, prompting another wave of innovation in modeling.

Illustration of a woman in a blue suit walking past a modern skyscraper with orange construction scaffolding on the roof.

A world without insurance

Today one can’t imagine a world without insurance, but the more pressing questions are: What would be the repercussions of forgoing this 6,000-year-old industry, and how has insurance contributed to the world we live in today? Insurance provides safety, stability, and development. Without insurance, people would be left to pay the full cost of damages they suffer, and they would be much more cautious and risk averse. The burden of risk also hits vulnerable communities the hardest, as they have fewer means to recover from a significant loss or setback, exacerbating existing inequalities. With the recent surge in natural catastrophes, these events could wipe out entire communities. People would be less likely to engage in an activity that could result in an injury, and businesses would be hesitant to invest in new or risky ventures, leading to job losses, a decrease in consumer spending, and a domino effect that would reduce overall economic activity. Modern economists argue that the prosperity of the last two centuries wouldn’t have been possible without insurance. Historically, developing nations struggled to industrialize (especially in the Middle Ages) because illness, natural disasters, and market volatility thwarted the accumulation of capital essential to building industrial economies. Prior to the establishment of a formal insurance industry, economies were locked in cycles of poverty and slow productivity growth. Before industrialization, catastrophes like plagues and famine were managed primarily through local community aid and government intervention. This communal responsibility prevented governments from directing funds to future investment, focusing them instead on immediate disaster remediation.

Insurance drives economic growth and has transformed the division of labor, supporting increased urbanization and, consequently, the economics of trade, giving more people the incentive to take minor, absorbable risks. The impact is far-reaching, going beyond insuring individuals’ assets. Insurance drives both economic and social growth, making the economy we live in today more robust. Another often overlooked economic contribution of insurance is its role as a provider of capital to finance projects vital to the modern economy. Insurers hold massive amounts of capital to support claim payments, and this capital is also invested to fund essential projects and seek investment income. The social value of insurance is that it enables risk-taking and financial freedom for average and low-income households, and hence improves social fairness. Without insurance, only the wealthy and privileged could take risks, increasing social polarization. The existence of insurance reminds us that trust is fundamental to human action and to the evolution of humanity; without insurance, every development activity could be halted.

The future of insurance

The world is currently being revolutionized by artificial intelligence (AI) technologies, and insurance is no exception. Data analytics and digital-first approaches to the customer experience are thriving, shifting the paradigms of insurance from a reactive “detect and repair” model to a proactive “predict and prevent,” or “predictive risk management,” model. This advancement will be enabled by hyperpersonalized, real-time, digitized systems, which will eventually lower operational costs and insurance prices for consumers. Insurance solutions more tailored to individuals’ and businesses’ needs will become more readily available. Coverage of new types of specialized risks, beyond traditional risks, will also be on the rise due to new and emerging threats. Specialized insurance will continue to challenge traditional actuarial methodologies, requiring deep domain expertise and creative solutions to quantify the risk. Despite the revolution in digital and data, insurance will continue to rely heavily on subject matter experts due to the complexity of the underlying risk and the need for deep expertise in the business.
A standalone illustration of a yellow humanoid robot walking to the right, trailing a long, wavy red line behind it.

The common thread

The history of insurance is fundamental to its future because it provides a blueprint for adapting to new risks in an ever-changing world. The history and future of insurance are marked by continuous innovation, resulting primarily from societal shifts. When examining the history of insurance, it becomes clear that insurance has evolved in response to these shifts and the continuous need for economic safety throughout the change. Maintaining the core purpose of collective protection remains the goal across time, and these shifts are bound to continue creating new opportunities for the industry. For example, compare the evolution of insurance during the Industrial Revolution with today’s AI disruption. During the Industrial Revolution, innovations were driven by the need to manage risks from machinery, urban crowding, and factory fires, while the AI disruption has introduced risks from cyber threats and the integration of AI itself. Understanding how past innovations influenced the rise of insurance can help actuaries navigate modern challenges more effectively. The common thread remains: risk and the need for collective risk management. Insurance portfolios are constantly changing, and it behooves the industry and the actuarial profession to continue adapting to these changes by making history a guiding force for future innovation.
Sandra Maria Nawar, FCAS, FCIA, is an actuarial manager at Intact Financial Corporation. She is a member of the Actuarial Review Working Group and its Writing Subgroup.
professionalinsight
 

Developing News

Beryl Missed, Melissa Paid
By Xuan You
The following article is solely the opinion of the author and does not reflect the views of her employer.
A 3D green bar graph data visualization on a blue background.
When Hurricane Beryl struck Jamaica in 2024, the country’s $150 million World Bank catastrophe bond did not trigger because the storm’s air pressure failed to meet the predefined parametric threshold, despite significant on-the-ground damage. Hurricane Melissa, which made landfall in October 2025 as Jamaica’s most powerful storm, put the same instrument to a very different test. The bond triggered at a full 100% payout, with Jamaica receiving $150 million by December.1 The contrast illustrated both the promise and a key limitation of parametric instruments: rapid payouts when triggers align, but exposure to basis risk when they don’t.

Yet accessing these instruments remains structurally constrained. Under Bermuda’s existing Special Purpose Insurer (SPI) framework, which underpins about 85% of global Insurance-Linked Securities (ILS) capacity2, SPIs can only write reinsurance, and eligible cedants are limited to A-rated (re)insurers, government insurance pools, and Bermuda Monetary Authority (BMA)-approved entities.3 Governments and corporates seeking parametric coverage must work through intermediaries such as risk pools, fronting arrangements, or development bank structures.

In January 2026, the BMA proposed a new Parametric Special Purpose Insurer (PSPI) class to address these limitations.2 The PSPI would allow direct insurance alongside reinsurance and expand eligible counterparties to include sophisticated corporates and government entities. It would also permit swaps and derivatives subject to case-by-case approval. By reducing the need for intermediary structures, the framework could lower friction and cost. Like existing SPIs, PSPIs would remain fully collateralized and bankruptcy-remote. The BMA has positioned the proposal as part of its effort to address the widening protection gap driven by climate change and emerging risks like cyber, where parametric products can supplement traditional indemnity coverage.

What this means for actuaries:

The PSPI proposal suggests that parametric risk transfer is becoming less of a niche structuring solution and more of a regulated insurance vehicle, opening the market to a wider range of buyers that may be able to transact directly rather than through reinsurers, pools, or fronting structures. Many of those buyers will be less familiar with the tradeoff at the heart of parametric cover: fast, rules-based payout in exchange for basis risk. The actuary’s job is not just to define that tradeoff, but to show it in terms the buyer can evaluate. That means using scenario analysis and stress testing to show how the trigger behaves across plausible events and where payout may diverge from economic loss, then complementing that view with metrics such as probability of any or full principal impairment, and other tail-risk measures. It also means testing trigger design, validating data sources, and reviewing the calculation and verification process that ultimately determines payout.
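The scenario-testing idea can be sketched as a small Monte Carlo exercise. Everything below, including the pressure trigger, the distributions, and the loss thresholds, is an illustrative assumption and not the terms of the Jamaica bond or any actual instrument.

```python
import random

def trigger_scenarios(n_sims=100_000, trigger_hpa=940, seed=7):
    """Monte Carlo sketch of a parametric bond with a central-pressure trigger.

    Lower central pressure means a stronger storm. All distributions and
    thresholds here are hypothetical, chosen only to illustrate basis risk.
    """
    rng = random.Random(seed)
    impaired = payout_small_loss = large_loss_no_payout = 0
    for _ in range(n_sims):
        pressure = rng.gauss(955, 15)        # storm central pressure (hPa)
        # Economic loss loosely tied to intensity, plus idiosyncratic noise.
        loss = max(0.0, (960 - pressure) * 10 + rng.gauss(0, 30))
        if pressure <= trigger_hpa:          # bond pays in full
            impaired += 1
            if loss < 50:                    # payout despite modest ground damage
                payout_small_loss += 1
        elif loss > 100:                     # Beryl-style: damage but no payout
            large_loss_no_payout += 1
    return {
        "p_principal_impairment": impaired / n_sims,
        "p_payout_small_loss": payout_small_loss / n_sims,
        "p_large_loss_no_payout": large_loss_no_payout / n_sims,
    }
```

Reporting all three probabilities, rather than the attachment probability alone, shows the buyer basis risk in both directions: payouts without commensurate loss, and losses without payout.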
Xuan You, FCAS, is a senior actuary at Munich Re. She is a member of the AR Working Group and its Writing Subgroup.

Developing News

Deepfakes Put Insurers in Deep Water
By Bella Thiel
The following article is solely the opinion of the author and does not reflect the views of her employer.
In early 2024, an employee of a global financial organization unintentionally wired $25.6 million to fraudsters.1 The employee, under the impression they were talking to the company’s CFO and other senior leaders on a video call, was maliciously deceived by deepfake technology. The fraudsters used deepfakes of the CFO and senior leaders to simulate their likeness and gain the employee’s trust before collecting their payout.

Deepfakes are forged or digitally altered media, created by generative artificial intelligence (AI), that are designed to impersonate people and events. Despite their widespread use for entertainment on social media, deepfakes have emerged as a growing source of loss in cyber insurance2 and pose significant risks to insurance companies. Between 2022 and 2023, Allianz reported a 300% increase in doctored claims photos.3 In a recent study from Verisk, nearly all (98%) of insurers agreed that AI-powered editing tools are fueling an increase in digital insurance fraud.4 Insurance fraud has not only become more frequent but also harder to detect due to the increased availability and sophistication of AI tools. About 50% of Gen Z and millennial consumers reported being “at least somewhat likely” to make a small edit to a claim photo or document, while only 32% of insurers say they are “very confident” in detecting deepfakes.4

Crying woman's face with pixelated glitch elements, a smile mask, and chat bubbles

What this means for actuaries:

Given their far-reaching implications, deepfake incidents could be covered under cyber liability, directors and officers liability, errors and omissions liability, commercial general liability, and umbrella insurance. Potential consequences of deepfakes include defamation, misinformation, identity theft, blackmail, and financial fraud.5 As it stands today, there is no one set rule for where deepfake incidents belong. Slowly but surely, coverage options and endorsements have started creeping into the insurance marketplace to fill this gap. Coalition, a tech-driven cyber insurer, offers coverage specifically for deepfake-related incidents.6 Chubb, a leading traditional cyber insurer, has also added targeted coverage for deepfakes through their social engineering fraud endorsement.7

Still, there is an immense opportunity for actuaries to design insurance solutions for the $15.3 billion cyber insurance industry — and fast. According to the FBI, cyber insurance losses and fraud scams increased by 33% from 2023 to 2024.8 Now more than ever, actuaries can play a key role in staying ahead of evolving attack vectors through innovative product design and quantifying exposure and development potential.

Actuaries not directly involved in cyber insurance must also stay vigilant. The Coalition Against Insurance Fraud (CAIF) estimates U.S. insurers pay over $300 billion each year in fraudulent claims, with one in ten property-casualty losses found fraudulent.9 Many insurers use third-party and internal AI-based detection tools, while some require additional claim documentation metadata analysis (timestamps, location, etc.) before a claim payout.10 Yet, advances in AI tool capabilities, combined with creative consumer tactics, seem to continuously outpace insurers’ fraud-detection strategies. Devising better ways to detect fraudulent media remains a priority, and actuaries can use their broad purview to advocate for strong data governance to enable the full potential of modern anti-fraud tools.

On the bright side, on March 10, 2026, Zoom launched a deepfake detection feature for live video meetings.11 Hopefully this prevents any actuary from becoming the subject of the next deepfake-induced corporate fraud incident.

Bella Thiel is an actuarial analyst at Allstate. She is a member of the AR Working Group and its Writing Subgroup.

Developing News

Recent TPLF Legislation Set to Reshape the Insurance and Litigation Finance Industries
By Sara Chen
The following article is solely the opinion of the author and does not reflect the views of her employer.
For more than a decade, third-party litigation funding (TPLF) — investing in lawsuits in exchange for a percentage of the potential settlement or judgment — has grown into an estimated $20 billion industry and is projected to reach $50 billion by the end of 2036.1 TPLF has been particularly troublesome for the insurance industry, as evidenced by prolonged litigation, rising nuclear verdict amounts, and erosion of policy limits. The average cost of a commercial claim has gone up about 10%-11% per year since 2017, according to Gareth Kennedy, principal of insurance and actuarial advisory services for Ernst & Young (EY).2 What started as a noble cause that allowed small companies to pursue claims against larger, better-funded defendants has warped into a gambling system with average annual returns of 25%-30% for funders.3

In 2025, TPLF legislation swept the country, with 21 states proposing bills and another eight states enacting them.4 The legislation falls under the themes of addressing (1) consumer protection, (2) disclosure requirements, and (3) funder restrictions. At the federal level, bills were introduced in 2025 and into 2026 to target the abuse of TPLF. In addition, the Insurance Services Office (ISO) introduced a new, optional mutual disclosure condition endorsement, effective January 2026, that will require disclosure of any TPLF agreement and the third-party funder’s identity.5

In the litigation finance industry, there appears to be a general tightening of capital in 2025, as reported by the Insurance Journal.6 The industry is facing headwinds in the form of lower payouts and longer trial times, leading investors to explore alternative, safer investments. With the looming regulatory changes and legislation, the TPLF landscape will likely shift in the coming years.

A wooden gavel resting on a stack of US dollar bills

What this means for actuaries:

Until the recent legislation matures, TPLF-driven trends are expected to remain high, at least for the next couple of years. Multiple actuaries gave advice on how to handle the current landscape in Jim Lynch’s July/August 2025 Actuarial Review cover story, “Financing Justice: The Rise and Risks of TPLF.”7 The trends have disrupted most companies’ commercial liability loss triangles and the traditional loss development techniques that depend on them. Instead, reserving actuaries can consider a frequency × severity approach to incorporate the implied inflation into ultimates. Because a feature of TPLF is prolonged litigation, trends in defense and cost containment expenses are rising as well.
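One simple form of that frequency-severity approach, with inflation applied explicitly to severity, might look like the sketch below. The 10% annual trend echoes the commercial-claim inflation cited above, while the exposure, frequency, and base-severity figures are hypothetical.

```python
def freq_sev_ultimate(exposures, claim_freq, base_severity, sev_trend, trend_years):
    """Ultimate losses as frequency x severity, trending severity for inflation.

    A deliberately simplified sketch: in practice, frequency and severity
    would each be estimated from claim data and trended separately.
    """
    expected_claims = exposures * claim_freq
    trended_severity = base_severity * (1 + sev_trend) ** trend_years
    return expected_claims * trended_severity

# Hypothetical commercial book: 5,000 exposures, 2% claim frequency,
# $80,000 untrended severity, three years of 10% annual severity trend.
ultimate = freq_sev_ultimate(5_000, 0.02, 80_000, 0.10, 3)
print(f"Projected ultimate: ${ultimate:,.0f}")
```

Separating the pieces this way lets the actuary show explicitly how much of the projected ultimate is driven by the assumed inflation rather than by claim counts.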

Some companies have left TPLF-heavy lines like commercial auto and hospital professional liability, and/or write lower limits to mitigate the exposure. Additionally, some actuaries have shown data on social inflation trends in their rate analyses. In the CAS and Triple-I’s latest Increasing Inflation on Liability Insurance study,8 the estimated impact of increasing inflation across liability lines in the industry from 2015 to 2024 is around $232B – $281B (14.4% – 17.5% of booked loss & DCC). Actuaries can look to this study for guidance on the latest trend figures by specific liability lines of businesses to incorporate into their reserve and pricing analyses.

Sara Chen, FCAS, MAAA, is a consulting actuary at Pinnacle Actuarial Resources. She is a member of the AR Working Group and its Writing Subgroup.
Banner for the RPM Seminar in Chicago, March 16-18, 2026, featuring gold Art Deco city illustrations on a black background.
CAS Hosts RPM Fireside Chat with Jeffrey Ma on Unlocking Innovation
By Jordan Hammond
Jeffrey Ma, former vice president of analytics and data science for Twitter, predictive analytics expert for ESPN, kingpin of the famous MIT Blackjack Team, and former vice president of Microsoft for Startups, was the featured speaker at the Ratemaking, Product, and Modeling (RPM) seminar in March.

Ma has worn many hats across his lifelong endeavors in education, hobbies, and careers, with one simple mantra: If you make better decisions than the system expects, you will always have the edge. As a former member of the MIT Blackjack Team, he applied his innovative approach to casino gambling to bring down the house by counting cards. Although he has since traded the blackjack table for a boardroom table, he continues to apply the same strategy in business — be brave enough to innovate when others are content with comfort.

Actuaries understand this line of reasoning but too often lack the incentives to effect innovative change. In his chat, Ma recounted a paper written by David Romer in 2002, which uncovered a paradigm-shifting conclusion for NFL teams: Coaches were far too conservative on fourth downs and should significantly increase their conversion attempts to maximize their chances of winning. Of the historical situations where the fourth down attempt was deemed advantageous, teams were “going for it” only 10% of the time. The evidence was clear, the advantage was quantified, and the findings were published in the National Bureau of Economic Research.

And then… nothing changed. In practice, coaches were not seeking to optimize win probability; they were optimizing job security. In their eyes, avoiding high-variance situations was just as valuable as eliminating the downside risk: the risk of incurring memorable moments of failure. Why risk “losing” the game early in the fourth quarter when there would be a future, albeit less likely, prolonged opportunity for a comeback win? Faced with decisions where the data favored aggression, coaches consistently chose the more conservative, defensible path. As Philip Seymour Hoffman said in his depiction of Art Howe in “Moneyball,” “I’m playing my team in a way that I can explain in job interviews next winter.”

Actuaries are often placed in similar situations. When long-term strategy takes a back seat to short-term visibility, decisions can gradually become more political than analytical, eroding a company’s competitive edge over time. On any given day, the loss of that edge is nearly imperceptible. Each individual decision is small, defensible, and easy to justify, but in hindsight it becomes clear that misaligned incentives quietly steer the organization away from maximizing its advantage. So why don’t more of us push for new, innovative ideas?

If you make better decisions than the system expects, you will always have the edge.
“New is scary,” says Ma. New ideas mean struggling through learning new technology. New ideas mean making an outside hire. New ideas mean leaving a steady job to pursue your own adventure. In the broadest sense, “to embrace new ideas is to embrace failure,” says Ma. It’s providing the activation energy to a system that rewards long-run strategies with often brutal short-term variance and staying the course until the signal overtakes the noise.

One way to break out of this mindset, says Ma, is to return to first principles. Rather than debating within the confines of existing processes, Ma encourages reframing problems in their simplest, most indisputable terms. What are we actually trying to optimize? What is the data actually trying to say? By cutting through the comfort of conventional frames of mind, organizations can create space for ideas that would otherwise be dismissed too quickly.

“Innovation does not occur in the absence of constraints; it often emerges because of them,” says Ma. Whether it is regulatory limitations in insurance or outdated structural rules applied in a brand-new industry, constraints force clarity. They require organizations to be precise about where their edge lies and how to exploit it. In this sense, constraints are not barriers to innovation, but catalysts for it.

Ultimately, the challenge is not identifying the edge; it is having the conviction to act on it. We spend years learning to find when and where an advantage exists, yet when it comes time to act, incentives and short-term pressures can cause that knowledge to go to waste. The data is often clear, the strategy is often sound, but without alignment of incentives and a willingness to endure short-term discomfort, even the best ideas fail to take hold. Ma’s message is a reminder that while intelligence helps, innovation really requires courage — the courage to challenge convention, to withstand variance, and to make decisions that may look wrong in the moment but are right in expectation. In a profession built on cutting through the noise to find the truth, the real opportunity lies in having the discipline to trust the math when it matters most.

Jordan Hammond, FCAS, is director, actuarial and analytics at Travelers, based in Des Moines, Iowa.
Insurtech Is Dead. Long Live Insurtech
By Andrew Somers
“Is insurtech dead? Was it ever really alive? Who killed insurtech? And what is insurtech, anyway?”

This refrain ran through my head as I entered Jessica Leong and Jamie Wilson’s Ratemaking, Product, and Modeling (RPM) seminar session: “Insurtech Is Dead. Long Live Insurtech.”

I admit I was drawn in more by the catchy title and the fact that I’d enjoyed several of Leong’s presentations in the past and less by having any special knowledge of insurtech. For some time, “insurtech” has been synonymous in my head with “smart devices used in insurance.” Full disclosure: I was very ready to declare smart devices dead.

Unsurprisingly, Leong and Wilson’s session was much more thoughtful than that. Their definition of insurtech was broader: “A technology company focused on working with carriers/MGAs/brokers to improve how insurance is distributed, priced, underwritten, or serviced.” This would include smart devices, data enrichment, distribution platforms, risk assessment, workflow automation, and more.

With that many use cases, what’s all the concern about “death”? One has only to turn to SaaS company valuations in February 2026, where (according to Reuters) over $1 trillion in market capitalization was lost from software stocks.

Generative AI (GenAI) was to blame, of course. After all, if GenAI can vibe code something for you, why do you need to pay another company to serve you software? Do you really need to talk to that insurtech if you can just talk to a GenAI agent?

Leong and Wilson discussed the many strategies companies use to innovate and made the case that actuaries need to care about insurtech and its future for several reasons: competitive pressure, talent and efficiency, data and model sophistication, regulatory and compliance, and strategic influence.

What made me think I should care? Leong and Wilson claimed that “competitors using these [insurtech] tools are gaining advantages … 20–30% faster quote turnaround in commercial lines…”

Another item that will stick with me: “If you (the actuary) don’t shape these decisions, IT or operations will.”

The rest of the session was an open forum with case study prompts, meant to direct the actuary in ways to effectively use insurtech. The prompts asked audience members to consider how much to innovate versus relying on tried-and-true solutions, to explore how they might choose to innovate (GenAI versus the IT department), to decide what they will and won’t do (e.g., how important it is to keep your own data secret), and to calculate the potential return on insurtech solutions.

Hearing thoughts from the audience made for an engaging session, and I found Leong and Wilson’s final thoughts to be instructive as well: “Get hands on,” “invest in your own data,” and “get IT involved early.” Those three thoughts resonated with pain points from my own experience, where I’ve seen roadblocks arise from insurtech platforms not playing well with internal systems.

The title of the session was presumably inspired by the late medieval phrase “The king is dead; long live the king,” which was meant to acknowledge the passing of the current king, welcome a new king, and emphasize the undying nature of the office itself. Once I thought about that, I realized just how well the pithy session title applied to insurtech.

Yes, the easy insurtech solutions may go away—maybe we’ll all be using GenAI to generate our dashboards and slides without any external vendor help—but GenAI can’t fly planes to gather and analyze aerial imagery, and it can’t walk into a house to inspect water damage. I’m more convinced than ever that we’re not witnessing the death of insurtech, but rather the emergence of its next phase.

Andrew Somers, FCAS, is associate vice president, data science at Travelers and is a member of the AR Writing Subgroup.
Leveraging Actuarial Guardianship for AI Governance
By William Nibbelin
A

ctuaries have always stood at the intersection of technological innovation, regulatory governance, and legislative oversight. As artificial intelligence transforms core insurance operations, these proficiencies are more crucial than ever. A session at the Casualty Actuarial Society’s recent Ratemaking, Product, and Modeling seminar offered industry perspectives on keeping fairness and governance at the forefront of consumer impacts and company responsibilities regarding AI. The discussion included Jamie Mills, senior actuary at Allstate and session moderator; Will Melofchik, CEO of the National Council of Insurance Legislators (NCOIL); and Jon Godfread, North Dakota Insurance Commissioner.

Creating common ground

To level set the discussion, Mills established clear definitions for the various iterations of AI currently impacting the insurance space. While the industry has long utilized data-driven analytics, the rapid emergence of these models requires a shared language to distinguish between their functional capabilities. He identified three AI categories:

  • Traditional machine learning: Familiar systems used for modeling and statistical analysis.
  • Generative AI: Systems that generate text, summarize documents, and enhance creative work.
  • Agentic AI: Systems capable of performing actions, such as interacting with workflows or triggering underwriting steps.

Because legislators often lack a deep insurance background, these categories provide a useful starting point for stakeholders to understand the role of AI in insurance. Bridging this gap is essential to bringing technical innovation to insurers and their customers while ensuring the industry remains committed to fairness, transparency, and accountability.

Human oversight plays a critical role in this process. In one instance within the rental car industry, a series of software glitches led to customers being billed thousands of dollars — a mistake that human intervention in the final review stage could have mitigated.

Melofchik highlighted these concerns among policymakers, noting they are especially focused on material changes or adverse determinations such as policy cancellations, nonrenewals, or significant premium adjustments. He argued that feedback from constituents helps fuel their direction, with headlines about “denial by AI” keeping pressure on legislators to react with new policy. Education on insurance principles like risk-based pricing is critical to helping officials balance insurance challenges against other state priorities such as health care and crime prevention.

While fears surrounding AI’s rapid growth may trigger the impulse to shut down the technology, regulators have also increasingly adopted a view of AI as a powerful tool for an industry that has always relied on sophisticated data analytics. Commissioner Godfread explained how this perspective has translated into actionable regulatory oversight, such as the National Association of Insurance Commissioners (NAIC) Principles on Artificial Intelligence (AI), adopted in 2020. These principles prioritize:

  • Transparency and explainability: Can a company explain its tools’ processes?
  • Safety and integrity: Are company systems secure and are decisions fair?
  • Monitoring for bias: Is the company actively checking for unintended bias?

Godfread emphasized that although the tools have evolved, the consumer protection laws foundational to insurance pricing remain unchanged. The ultimate responsibility for a decision lies with the insurance company and its board. He added that the “hardest part” of gaining regulatory approval lies in making complex models understandable. If a model’s output lacks a clear “causation” that makes sense to regulators or the public, it will likely face resistance regardless of its statistical accuracy.

Transparency and consumer trust

The panelists agreed that continued transparency is needed for building and maintaining “social capital” with both consumers and regulators. Melofchik clarified that most legislators are not seeking access to proprietary code but rather practical transparency, such as informing consumers when they are interacting with an AI chatbot or if AI is driving a nonrenewal decision.

Godfread also noted the importance of transparency within telematics, arguing that while the ability to provide granular risk scores is valuable, the industry must shift the conversation from simple correlation to understandable causation. Similar concerns are growing around aerial imagery and drones, particularly when insurers employ drones or satellite images to non-renew policies due to roof conditions. Legislators are exploring bills that would require insurers to provide these images to consumers and allow a “cure period” (e.g., 60 to 90 days) to resolve the issue before coverage is lost, ensuring the process remains fair and transparent.

The NAIC’s AI evaluation tool pilot aims to develop mutual transparency between insurers and regulators by standardizing how states review and understand AI usage. Key areas of inquiry include:

  • System identification: Categorizing the types of AI systems currently in use across the industry.
  • Governance evaluation: Reviewing the oversight mechanisms and structures companies have established.
  • Risk management: Understanding how organizations identify and mitigate AI-related risks.

The initiative is in its learning phase, Godfread stressed, as the NAIC actively continues to pursue feedback from insurance professionals on whether the tool is effective without being unnecessarily punitive. Such collaboration will be increasingly vital as fundamental principles of risk, such as risk pooling versus “hyper-personalization,” become more contentious. On this point, Godfread admitted the industry is reaching a point where the ability to provide individuals with their exact risk score might conflict with the traditional concept of insurance pools. Godfread called the solution “TBD,” indicating the issue will require deep intellectual engagement from both regulators and the industry in the coming years.

Navigating legislative friction and federal preemption

As states begin to test these AI evaluation tools, Melofchik noted that a primary concern for state-level policymakers is the potential for federal intervention. There is a perceived tension between state legislatures and federal executive orders aimed at creating unified AI standards. Recent legislative activity in states like Utah and Florida has highlighted the delicate balance between state autonomy and the threat of federal preemption. Melofchik explained that many state legislators are wary of federal overreach that might ignore the nuances of the McCarran-Ferguson Act and the historically effective state-based regulatory system. This friction is particularly evident in discussions regarding “human-in-the-loop” mandates.
Commissioner Godfread cautioned that if the insurance industry is lumped into broad, multi-sector federal AI regulations, it could undermine the sophisticated analytics and solvency protections already inherent in the field. The current state-based system already addresses bad actors and technical failures without the need for “one-size-fits-all” federal mandates. For actuaries and executives, this highlights the critical need for active engagement with state legislators to demonstrate that the existing regulatory structure is capable of evolving alongside AI innovation.

The ongoing value of actuarial judgment

Mills concluded the discussion by addressing concerns that AI might replace human professionals, explaining that, as the industry enters an era of complex “black box” models, the need for professional actuarial judgment and a “human touch” becomes more valuable than ever. While repetitive tasks will certainly be automated, the ability to validate a model’s integrity, explain its conclusions, and ensure its ethical application remains a critically human skill.

Notably, even “free market” legislators might feel compelled to mandate coverage if the insurance mechanism is perceived as unfair or overly complex, which speaks to an actuary’s role as the critical guardian of model integrity and governance. Ultimately, the ability of actuaries to navigate these issues while maintaining technical accuracy will define the industry’s success in the AI era.

William Nibbelin is a senior research actuary for the Insurance Information Institute.
Professionalism Considerations for Snowmageddon
By Jim Weiss
In 2024 I joined the CAS Professionalism Education Working Group (PEWG). Similar to many actuaries I talk to, I never found professionalism to be the most captivating continuing education (CE) topic. Getting my mandatory credits every year always bordered on being a chore. I felt transitioning from a CE consumer to a CE supplier might challenge me to think about professionalism more critically and in new and interesting ways. Fast forward to March 2026 and, sure enough, it did (with a big assist from Mother Nature)!

Earlier in the year, PEWG leaders reached out to volunteers like me seeking professionalism presenters for the Ratemaking, Product, and Modeling seminar (RPM), which I already planned to attend. One of the requested topics was “professionalism for climate risk.” My initial question was, what does climate risk have to do with professionalism? To learn the answer, I raised my hand to co-present with Michael Chen, FCAS, of Pinnacle Actuarial Resources. We soon learned the answer was “just about everything.”

My flight to RPM in Chicago was massively delayed by Winter Storm Iona, a record-breaking storm system that dumped 52 inches of snow on parts of Michigan, caused wind gusts of 60 mph in Wisconsin,1 spawned tornadoes and thunderstorms across the U.S. South,2 and cancelled thousands of flights in addition to mine. The “snowmageddon” event provided an opportunity to stress test our topic in real time. Michael and I had already reviewed prior presentations on “climate professionalism” and most were rote rundowns of Actuarial Standard of Practice (ASOP) No. 38 on catastrophe modeling3 and ASOP No. 39 on treatment of catastrophe losses in P&C ratemaking.4 The refreshers didn’t exactly contemplate the deadly bomb cyclone that, based on our straw poll of the in-room audience, had just affected almost everyone’s arrival to the conference. So we freshly unpacked ASOPs No. 38 and 39 through the lens of Iona, via four questions:

  1. Was Iona a climate event? Probably. This fell a bit outside the purview of the ASOPs, but it was required to scope Iona into our assigned topic. Significant evidence suggests climate change contributes to increased frequency of bomb cyclones due to increased atmospheric moisture and weaker temperature contrasts across latitudes.5 However, the meteorological community has recoiled a bit at the impact of “runaway verbiage” (e.g. hyperbolic terms such as “bomb”) on public perception.6 Perhaps the meteorological community could benefit from a read of ASOP No. 41 on actuarial communications, which speaks to factors such as use of analysis by unintended users.
  2. Was Iona a catastrophe? Yes. ASOP No. 39 defines catastrophe as “a relatively infrequent event or phenomenon that produces unusually large aggregate losses” (2.1). Required characteristics are either the potential to display contagion (3.1.a), infrequent occurrence (3.1.b), or both. We deemed bomb cyclones’ frequency of roughly a dozen per year debatable, but we deemed Iona’s contagion, i.e., “lack of independence between the occurrence of losses among different entities,” undeniable based on our audience’s experience.
  3. Should Iona be included pro forma in ratemaking? Probably not. This got to the heart of our topic — the nexus between professionalism and climate change. Iona’s diverse peril profile — thunderstorms, tornadoes, blizzards7 — at a minimum stretched actuaries’ ability to precisely associate losses with the event and implement a “consistent definition of a catastrophe” (3.3.1f). It is also debatable whether Iona’s impacts would equally impact existing procedures’ ratemaking covariates (3.3.1.b.1) or, if not, whether corrective action was required or even possible with historical data (3.3.1.b.1-2). One example we gave was business interruption waiting periods. The audience’s flight delays ranged from hours to days, so if we viewed commercial insurance interruptions as potentially having comparable durations, then the range of waiting periods in one’s data would drastically impact the reasonability of passing Iona through pro forma.
  4. What alternatives exist to including Iona pro forma? Imperfect ones. ASOP No. 39 presents catastrophe provisions based on historical data or modeled losses as potential cures to bias from catastrophe absence or presence in one’s data period (3.4). Both are relatively common in practice. Given its peril profile, Iona was likely represented by multiple catastrophe models — for example, severe convective storm (SCS) or winter storm8 — and may have also induced non-modeled perils. ASOP No. 38 challenges actuaries to understand the relationship between models’ input and output, precision, component interrelationships, and more (3.3). The practicality of doing so at the breadth of an event like Iona deteriorates. Conversely, more tractable, “non-modeled” approaches such as “excess procedures”9 raise questions over the length of the experience period (ASOP No. 39, 3.3.1.d) and whether “compatible, comparable historical insurance data” exists (3.3.1.b). It may not make sense to smooth Iona over a longer-term period that predated increased occurrence of bomb cyclones or current building standards. Actuaries may also consider whether such smoothed losses are congruent with corresponding trend procedures (3.3.1.e and ASOP No. 13).
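The mechanics of an excess procedure can be sketched in a few lines. This is a deliberately simplified illustration, not a prescribed method, and all loss figures are invented: each year's actual catastrophe losses are removed and replaced with a long-term average catastrophe load, which is exactly where the ASOP No. 39 questions about experience-period length and data representativeness bite.

```python
# A deliberately simplified sketch of a "non-modeled" excess procedure:
# actual catastrophe losses are stripped out of each year and replaced
# with a long-term average catastrophe load. All figures are invented.
years = {
    2016: {"noncat": 100.0, "cat": 5.0},
    2017: {"noncat": 104.0, "cat": 0.0},
    2018: {"noncat": 108.0, "cat": 60.0},  # the "Iona-like" catastrophe year
    2019: {"noncat": 112.0, "cat": 2.0},
    2020: {"noncat": 116.0, "cat": 0.0},
}

# Long-term ratio of catastrophe to non-catastrophe losses over the period
cat_load = sum(y["cat"] for y in years.values()) / sum(
    y["noncat"] for y in years.values()
)

# Smoothed losses by year: actual non-cat losses grossed up by the cat load
smoothed = {yr: y["noncat"] * (1 + cat_load) for yr, y in years.items()}
print(f"catastrophe load: {cat_load:.3f}")
```

Note how the answer depends entirely on the choice of the `years` span: include more pre-2018 years and the load shrinks, which is the smoothing-period concern raised above.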

Given that they are principles-based, ASOPs do not usually lend themselves to concrete or even satisfying answers to questions like those above. Events such as Iona provide the opportunity to evaluate potential areas for growth. Since I joined PEWG, I have been reading the ASOPs more, including the appendices, which reflect contemporaneous comments on exposure drafts and subsequent responses and adaptations by the Actuarial Standards Board (ASB). It is encouraging to see how the ASB adapts its work to practitioner feedback, but comments dismissed with prejudice are intriguing to revisit in light of current events.

One comment on ASOP No. 39 indicates “adjustments [to past insurance data] are impossible to do adequately, giving false hope that meaningful results can be obtained,” making special mention of the breadth of potential catastrophic perils besides earthquake and hurricane. Another suggests ASOP No. 39’s directive to “determine the extent to which the available insurance data are representative of the long-term frequency and severity of the perils or events … that produced the catastrophe losses” (3.3.1.a) is beyond a typical actuary’s capability. This feedback beautifully foreshadowed the present conversation.

The ASB’s responses to these questions put responsibility back on the shoulders of practicing actuaries. Its dismissal of the first comment indicates that the ASOP “gives sufficient freedom for the actuary to demonstrate the appropriateness of the resolution of the issues.” Its dismissal of the second retorts that “the actuary could become aware of the issues by referring to [outside] experts and make intelligent decisions about the representativeness of the data.” If so, would it make sense for the pertinent considerations to be promoted out of the appendix? Moreover, for long-standing methodologies like those discussed above, it is easy to assume that passing the test of time equates to passing the tests of the standards. But is it any safer to assume this than to assume that one’s flight will land at RPM precisely at its estimated time of arrival? Michael’s and my remarks suggested that this is likely not the case, particularly as 100-year events become decadal10 and market responses such as shared and layered (S&L) pricing tend more toward casualty approaches — focusing heavily on attachment points and severity trend leveraging11 — than a typical, ground-up property rate-up.12

Iona was just one of the topics Michael and I unpacked on a blustery St. Patrick’s Day in Chicago, and ASOPs No. 38 and 39 were just two of the ASOPs we reviewed. We also illustrated how climate change activates clauses in ASOPs up, down, and across various practice areas and specializations. As with Iona, we explored this using current events. Our goal was certainly not to confer a precise, “professionally approved” approach to any of the novel events climate change inflicts on actuaries’ data. Rather, it was to remind actuaries that the best check on one’s professionalism is often not reading an ASOP or streaming a NotebookLM while waiting for a flight, but stress testing the ASOPs against a current event. I might even go so far as to suggest actuaries do so without delay.

Jim Weiss, FCAS, CSPA, is divisional chief risk officer for commercial and executive at Crum & Forster and is editor in chief for Actuarial Review.
Global Actuarial Pricing and the Regulatory Evolution
By William Nibbelin
While the mathematical foundations of risk are universal, regional regulatory philosophy and market maturity have dramatic impacts on actuarial pricing across the globe. A session at the CAS’ recent Ratemaking, Product, and Modeling seminar explored how these global differences manifest in unique actuarial skillsets, as explained by Akur8 senior actuarial data scientist Kamela Taleb and Akur8 head of product Mattia Casotto.

Defining the global landscape

Taleb opened the session by addressing the confusion that can arise when insurance professionals transition between international markets. Global variations of key industry terms, for instance, often reflect distinct underlying approaches to pricing, such as:

  • Technical Premium = Premium
  • Pure Premium = Loss Cost
  • Tariff = Rating Plan

To illustrate these discrepancies, Taleb shared an experiment involving a 30-year-old driver with a clean driving record seeking an auto insurance quote in Canada, Japan, the U.K., and the U.S. Despite having a consistent profile, the subject received a wide range of quotes, driven by local market constraints and differing views of risk. Taleb categorized these differences into three archetypes:

  • Heavily Regulated Markets: Defined by consumer protection rules, in which every pricing decision requires extensive justification.
  • Innovation-Friendly Markets: Defined by competitive positioning and rapid iteration.
  • Emerging Markets: Defined by data challenges and opportunities to build modern systems without the burden of legacy infrastructure.

In heavily regulated markets like the U.S. (admitted lines), Canada, and Japan, carriers face prohibited factors such as credit, gender, and age, as well as political pressure that can create gaps between pricing indications and actual charged rates. Conversely, in innovation-friendly markets like the U.K. and Australia, competition forces a high degree of sophistication; carriers face adverse selection if they fail to update their models quickly enough. In emerging markets such as Indonesia and Brazil, limited data availability and lingering legacy systems can slow the adoption of more sophisticated underwriting and pricing techniques.

Infographic showing how different global market environments create distinct value pressures and rewards.
Line graph illustrating "price walking" where commercial prices for loyal customers rise over time.

Industry rates and implementation cycles

Casotto shifted the focus toward the tools and timelines behind the actuarial process. In the United States, organizations such as the Insurance Services Office (ISO) and the National Council on Compensation Insurance (NCCI) provide “prepackaged” industry rates based on shared loss costs, allowing carriers to set prices more quickly. However, membership in these organizations is not commonly disclosed to the public.

Several international parallels to this system exist, including the German Insurance Association (GDV) and the General Insurance Rating Organization of Japan (GIROJ). In Japan, companies typically must remain within a 12.5% deviation from the GIROJ’s standard rates, creating structural constraints in which an entire portfolio must comply with a specific “lookup table.”

Such constraints influence the “speed to market” for rate changes in unique ways. In innovation-friendly markets like the U.K., filings are not necessary, which helps drive a rate change cycle of between two and four weeks. That same cycle may require six to nine months in regulated markets like the U.S., which operates under filing and approval regulations such as California’s prior approval pricing process. These environments generate unique actuarial value pressures. Whereas regulated markets reward actuaries for ensuring their decisions are explainable to regulators, competitive markets reward actuaries for understanding customer behavior, competitor repricing, and using tools like price aggregators. In emerging markets, insufficient data access means actuaries are rewarded for simplifying structures for legacy-free environments.

Optimization and the “loyalty penalty”

Taleb contextualized these values in relation to price optimization, identifying three forms:

  1. Unconstrained: the key driver is the rate indication.
  2. Constrained: limiting individual impacts to a specific range, sometimes tied to retention expectations.
  3. Ratebook: applying rate adjustments across entire segments of the portfolio.
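The "constrained" form above can be sketched in a few lines. This is a minimal illustration, not any insurer's actual practice: the segment names, indicated changes, and the ±10% dislocation cap are all assumptions.

```python
# Constrained price optimization, sketched: each segment's indicated rate
# change is capped to an assumed +/-10% band before implementation.
# Segment names, indications, and the band are hypothetical.
indicated = {"segment_A": 0.25, "segment_B": -0.18, "segment_C": 0.04}
floor, cap = -0.10, 0.10

# Clip each indication into the permitted band
implemented = {seg: min(cap, max(floor, chg)) for seg, chg in indicated.items()}

for seg in indicated:
    print(f"{seg}: indicated {indicated[seg]:+.0%} -> implemented {implemented[seg]:+.0%}")
```

The gap between `indicated` and `implemented` is the dislocation the constraint absorbs; wider bands move the portfolio toward the unconstrained form, while segment-level bands shade into the ratebook form.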

Taleb also analyzed the optimization practice “price walking,” wherein insurers gradually charge loyal customers higher premiums than they would quote to new customers with the same risk profiles. One U.K. study found cases where preexisting customers were paying 40% over the technical price while new customers were being offered a 20% discount.

In response, the U.K.’s Financial Conduct Authority implemented rules that require renewal prices to be equivalent to new business rates, meaning only new information such as claims history or risky driving behaviors can justify differences. The change forced a structural shift in the industry, as once-separate “New Business” and “Renewal” teams now work toward unified strategic decisions for the entire portfolio. Bans on loyalty-based price walking also rippled across Europe, with bans already in effect in Ireland, and France and Italy currently conducting research into the practice.

A similar regulatory evolution has unfolded for pricing optimization in the U.S., Casotto added. U.S. states began limiting certain optimization techniques as early as March 2014, leading to the NAIC’s adoption of the Casualty Actuarial and Statistical Task Force’s 2015 white paper on price optimization. Regulation in the U.S. is also adapting to newer advanced modeling technologies: More than 20 U.S. states adopted the NAIC’s Model AI Bulletin within 15 months of its issuance in December 2023, and 88% of auto insurers currently use or plan to use AI and machine learning. Additionally, the CAS recently modified its Exam 8 syllabus to include advanced predictive modeling, AI, and machine learning concepts.

Future ratemaking convergence

Casotto and Taleb expressed the belief that global pricing processes will eventually converge, highlighting movement within the European and U.S. markets towards a middle ground on the ratemaking process. Their predictions include:

  • Transparency over Complexity: Building increasingly complex models is not viable in the long term. Instead, the focus will shift toward transparent and efficient ratemaking practices.
  • Data-Driven Fairness: True fairness will eventually be data-driven, with market players, rather than regulation alone, proactively removing historical biases.
  • Standardization of Constraints: The use of “constrained optimization” will remain standard practice to ensure portfolio stability and customer retention.

They emphasized that technology and regulation together will lead to a more synchronized global pricing standard. Whether operating in a heavily regulated archetype or an innovation-driven one, actuaries must remain agile. Navigating the intersection of analytics, technology, and regulatory philosophy is essential for actuaries to continue making insurance and financial products more affordable, available, and sustainable.

William Nibbelin is a senior research actuary for the Insurance Information Institute.
Bringing Innovation to Pricing for Changing Vehicle Features and Volatile Values at Risk
By Martin Ellingsworth
Stepping forward into the wake of COVID red ink in personal auto, now awash in profitability (like the green Chicago River), we witnessed some true innovation coming from data heavyweight champion CARFAX and humble analytic superhero consulting actuarial firm Pinnacle Actuarial Resources. The complexity, structure, and depth of their new model is a true example of innovation matched only by the thoughtfulness of their approach to communicating what’s new and improved to departments of insurance and their experts.

Donald Hendriks, ACAS, ASA, FCA, MAAA, director of analytics, CARFAX Banking & Insurance Group, and Joe Griffin, ACAS, senior consulting actuary, Pinnacle Actuarial Resources, demonstrated the challenges of developing a filing strategy as well as a technical communication strategy to introduce the “newer” nonparametric models to departments of insurance still using the “new” parametric model evaluation methods of the predictive modeling revolution from 20 years ago.

The GBM-over-GLM differences and similarities were a main attraction at several other sessions during the Ratemaking, Product, and Modeling seminar (RPM), but Hendriks and Griffin were able to share current examples of how storytelling to regulators is making solid headway, or not.

Market profitability in personal auto from 2019 to now has seen a swing from the worst performance of the millennium to the best in just a five-year period. Indeed, while much of that improvement was brute-force base rate hiking, what comes next for competition is more accurate pricing in a market with values at risk more volatile than any current practicing actuary has ever seen. Here is where their innovation shines.

Hendriks demonstrated how vehicle value at risk has levitated above historical relativities. This is further compounded as it intersects and interacts with the most insurance-friendly vehicle feature and safety innovations (such as automated driver assistance), which have entered the vehicles-in-operation fleet at scale only in the last 10 years or so. The fitment of a variety of technologies onto a “go forward” set of vehicles was a key point in explaining why different tech on different vehicles at different times creates more complexity than traditional models can deal with effectively.

Timeline chart showing the evolution of vehicle rating factors from 1970 to present-day advanced technologies.
Innovative features which are optional have a significant, and missed, value at risk impact when it comes to pricing — both at new vehicle pricing and at renewal. Hendriks demonstrated this with a Ford F-150 window sticker that poked a bit of fun at historical ways of working with MSRP in pricing. His window sticker example was highly relatable to the audience as part of any car buying experience, yet was highly confusing as well in terms of historical pricing. For example, the entire pricing process used an MSRP of $50,220 for a model with the least configurations and no optional features. But the actual VIN had several installed options ($70,590 before any discounts and without destination and delivery fees). That’s $20,000 underinsured in year one. That gap then also can persist over time if depreciation applies to each vehicle in a similar fashion.

He also showed how the lingering effects of COVID are creating a longer and higher demand for used vehicles, which is compounding the inaccurate MSRP problem across many additional years as depreciation is less for both the $50k version and the $70k one. This hidden truth can compound claim statistics as higher vehicle values can support higher claim repairs and still clear the total loss thresholds used across the industry.
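The window-sticker arithmetic can be made explicit. The two MSRP figures below come from the session's F-150 example; the flat 15% annual depreciation rate is purely an assumption to show how, when both versions depreciate similarly, the gap shrinks only gradually rather than disappearing.

```python
# Illustrative only: how pricing off a base-trim MSRP understates a heavily
# optioned vehicle's value at risk, and how that gap persists over time.
# MSRPs are from the session's F-150 example; 15% depreciation is assumed.
base_msrp = 50_220   # least-configured model, as used in historical pricing
optioned = 70_590    # the actual VIN with installed options
rate = 0.15          # assumed annual depreciation rate

for year in range(6):
    factor = (1 - rate) ** year
    gap = (optioned - base_msrp) * factor
    print(f"year {year}: priced ${base_msrp * factor:>9,.0f}  "
          f"actual ${optioned * factor:>9,.0f}  underinsured ${gap:>8,.0f}")
```

Slower post-COVID depreciation, as the article notes, would make `rate` smaller for both versions, keeping the underinsured gap larger for longer.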

Griffin and Hendriks demonstrated that modeling-method stalwart GLM is less fit for use nowadays, as both the spread in feature complexity and the heterogeneity of values at risk leave underfitting inaccuracies compared with GBM approaches. The comparison of lift showed dramatic improvement in how the GBM methods were able to segment things like older versus newer features and multiple technologies installed versus not installed.
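The underfitting point can be demonstrated on simulated data. This is my own toy example, not the presenters' model: a main-effects linear model (standing in for a simple GLM) cannot capture an interaction between installed tech and vehicle age, while an off-the-shelf GBM picks it up. The data-generating assumptions (exponential depreciation, a 40% option uplift, an age-by-tech severity interaction) are all invented.

```python
# Toy demonstration (not the presenters' actual model) of why a main-effects
# model underfits when vehicle tech interacts with value at risk, while a
# GBM captures the interaction. All data is simulated.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
age = rng.uniform(0, 15, n)       # vehicle age in years
adas = rng.integers(0, 2, n)      # 1 = driver-assistance tech installed

# Value at risk: exponential depreciation, options add 40% (assumed)
value = 50_000 * 0.9 ** age * (1 + 0.4 * adas)

# Severity includes an age-by-tech interaction invisible to main effects
severity = value * (0.02 - 0.008 * adas * (age < 8)) + rng.normal(0, 100, n)

X = np.column_stack([age, adas])
X_tr, X_te, y_tr, y_te = train_test_split(X, severity, random_state=0)

glm = LinearRegression().fit(X_tr, y_tr)   # stand-in for a simple GLM
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print(f"linear model holdout R^2: {glm.score(X_te, y_te):.3f}")
print(f"GBM holdout R^2:          {gbm.score(X_te, y_te):.3f}")
```

The GBM's higher holdout score here comes entirely from segmenting tech-equipped newer vehicles away from the rest, which is the lift behavior the presenters described.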
While Griffin and Hendriks showed how their first big step in using vehicle value in rating makes sense, they also demonstrated that there is more work to be done to address varying depreciation by vehicle type, make, model, and age. While a pre-COVID-to-now slide showed how unprepared prior pricing models were for this type of retained-value problem, there was no discussion of what bumps in the road may lie ahead (tariffs, innovation, war, oil supply, etc.).

They outlined the technical and communication challenges they are facing with filing their models for use in pricing. Examples are predictor importance plots, lift metrics, SHAP values and “beeswarm” plots, and strongly structured filings with deep documentation (from the older 70-page GLM supports to about a 500-page Vehicle Build Score modeling package with a 270-page base and 200 pages of backup materials).

Dealing with the heterogeneous technologies and volatile depreciation swings across years, models, and features means the newer model methods are required. And newer ways of interacting with regulators are needed too.

As Hendriks said, “filing a GBM is new and we are overcoming skepticism. Regulators want competition and innovation in their states but need explainable models — like they did 20 and 30 years ago with GLM models, including by peril and by coverage.”

In summary, consumers want cars with innovations and insurers are hard at work understanding the relative risk of these higher priced options, feature-rich models, and a used car market that is rising above all experience.

Martin Ellingsworth is president at Salt Creek Analytics.

Operationalizing Canada’s Federal Guideline OSFI E-23 — Model Risk Management to Deliver Fair Consumer Outcomes

By Frederick Au
Over the past several years, the CAS, through its research task forces, has extensively researched how various state and international regulators are approaching algorithmic fairness and model bias. As the global actuarial profession transitions from defining these frameworks to operationalizing them, Canada emerges as a live-environment test case. On May 1, 2027, the Canadian insurance industry enters a new era of governance. This date marks the deadline for full compliance with the Office of the Superintendent of Financial Institutions (OSFI) Guideline E-23 on Model Risk Management (MRM).1 While treating E-23 primarily as a rigorous federal compliance checklist is a defensible baseline for many institutions, integrating it with the broader market conduct goals creates the foundational infrastructure needed to navigate an environment increasingly scrutinized for algorithmic fairness, specifically the “fair consumer outcomes” mandated by regulators like the Financial Services Regulatory Authority of Ontario (FSRA).

We are entering a period where models, including those for insurance ratemaking and underwriting, should be mathematically sound, legally defensible, and socially fair. A model that is predictive but results in unexplained disparities is no longer just a market conduct issue; under the expanded scope of E-23, it may represent a model risk event or a compliance challenge.

The great convergence: A national imperative

For decades, the actuarial control cycle in Canada operated within a regulatory framework that treated financial risk and market conduct as separate domains. Historically, OSFI monitored whether the institution had sound risk management practices to maintain safety and soundness, while provincial regulators independently supervised whether the resulting rates and underwriting rules treated customers fairly. Under this bifurcated regime, the central question in model validation was often limited to: “Is this model predictive?” If a pricing model accurately predicted claims costs and secured the target equity returns, it was deemed a success, albeit with potential concerns on opacity or demographic impact.

Guideline E-23 alters this landscape by forcing these two worlds to interact. By expanding the definition of Model Risk to explicitly include adverse financial impact such as operational or reputational consequences,1 E-23 provides the governance chassis where these deliberate trade-offs are evaluated, documented, and justified by management.

A market-moving trend

While Ontario’s FSRA has been vocal with its proposed guidance on automobile insurance rating and underwriting,2 this convergence is driving the national agenda, led by Canada’s two largest regulators:

  • Ontario: FSRA’s guidance explicitly moves toward principles-based regulation, focusing on outcomes rather than technical rules.
  • Québec: The Autorité des marchés financiers (AMF) has released a guideline setting expectations for institutions to manage AI systems based on their impact on consumers.3

While the specific legal mechanisms differ among jurisdictions, E-23 provides the unified governance chassis to adapt to these evolving provincial expectations. Implementing a prudent E-23 MRM framework provides the evidentiary baseline required to demonstrate market conduct compliance to provincial regulators.

The legal landmine: The expiration of the “Zurich defense”

To understand the practical implications of this convergence, we should revisit the legal bedrock of Canadian actuarial practice: the Supreme Court of Canada’s 1992 decision in Zurich Insurance Co. v. Ontario.4 Under the Ontario Human Rights Code, insurers are permitted to use discriminatory rating variables only if the practice rests on “reasonable and bona fide” grounds. In Zurich, the Supreme Court established a rigorous two-part test to prove a pricing practice is “reasonable”: it must be based on a sound and accepted insurance practice (demonstrating a rational connection to the risk), and there must be no practical alternative. For 30 years, insurers have relied on this precedent to justify segmentation. However, the historical application of the “Zurich Defense” is facing re-evaluation driven by modern AI capabilities and stricter provincial oversight. Guideline E-23 accelerates this reckoning by mandating a risk-based approach to managing model risks that exposes whether a model truly possesses a rational connection or relies on discriminatory proxies.

Challenge 1: The rational connection (from correlation to causality)

In 1992, the Court accepted that a simple statistical correlation was sufficient to establish a rational connection. However, FSRA’s proposed new “Automobile Insurance Rating and Underwriting Guidance” fundamentally alters this standard such that statistical correlation is no longer a safe harbor if the variable acts as a proxy for a prohibited ground.2 In the age of AI, a model might find a statistical correlation between a permissible variable and a protected class. Under the old Zurich standard, the correlation might have been enough. Under FSRA’s fair consumer outcomes standard, this could be a direct or indirect proxy for unfair discrimination. Without the deep “explainability” required by E-23, an insurer cannot prove they are capturing a true risk driver rather than just a correlated bias.

For instance, in usage-based insurance, heavily penalizing late-night driving might correlate with the shift workers in lower-income brackets. Actuaries should consider using appropriate proxy variable tests to prove the risk lies in the fatigue and visibility of night driving, not the socioeconomic status of the driver.
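
One simple, illustrative form of proxy testing is to check whether a rating variable retains its predictive power once a candidate proxy is controlled for. The sketch below is hypothetical in every detail — the variable names, the simulated portfolio, and the use of ordinary least squares in place of a full rating model — and is not a prescribed test from E-23 or FSRA guidance:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical simulated portfolio: an income proxy drives losses, and
# night driving is merely correlated with it (shift workers), with no
# direct causal effect of its own.
income_proxy = rng.normal(size=n)
night_driving = 0.8 * income_proxy + 0.6 * rng.normal(size=n)
loss = 1.0 + 0.5 * income_proxy + rng.normal(size=n)

def ols_coefs(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

# Coefficient on night_driving, before and after controlling for the proxy.
b_alone = ols_coefs(night_driving[:, None], loss)[1]
b_controlled = ols_coefs(np.column_stack([night_driving, income_proxy]), loss)[1]

print(f"night_driving alone:   {b_alone:+.3f}")
print(f"controlling for proxy: {b_controlled:+.3f}")
```

If the coefficient on night_driving collapses toward zero once income_proxy enters the model, the variable is likely transmitting socioeconomic signal rather than a genuine fatigue or visibility effect; a true causal driver would retain most of its coefficient.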

Challenge 2: No practical alternative in the age of AI

The Supreme Court accepted the “no practical alternative” defense largely because the data required to price risk without using discriminatory proxies did not exist. Today, with E-23 mandating the identification of model limitations and mitigants, insurers face a higher evidentiary burden. They cannot simply assert that fairness is impossible; they must demonstrate it.

A structured due diligence framework: The Human Rights Impact Assessment (HRIA)

This is where impact assessment tools, like the Human Rights Impact Assessment for AI (HRIA),5 developed by the Ontario Human Rights Commission (OHRC) and the Law Commission of Ontario (LCO), become critical. While the HRIA is a policy guideline tool rather than a binding legal shield, it provides a structured framework to document due diligence across both prongs of the Zurich test:

  1. Validating the Rational Connection: The HRIA advises insurers to evaluate statistical correlations, utilizing explainability tools to prove that variables are capturing genuine, causal risk drivers rather than acting as proxies for protected classes.
  2. Proving No Practical Alternative: If an adverse impact is identified, the HRIA recommends an alternatives analysis. By systematically testing less discriminatory models and generating privileged documentation that records the resulting degradation in predictive accuracy and financial viability, the HRIA establishes the evidentiary baseline required to debate “undue hardship” or lack of a commercially viable alternative before a regulator.

Integrating the HRIA into the E-23 validation process does not grant statutory immunity. However, it ensures that if an insurer retains a model with disparate impact, they do so with a documented defense that the model represents a sound insurance practice with no viable commercial or technical alternative.

Operationalizing E-23: Integrating model compliance risks into the model life cycle

Treating model compliance risk factors — such as significance of human impact, likelihood of harm, bias and fairness, and explainability — as a separate workstream from core MRM can create fragmented oversight. This siloed approach risks creating a blind spot where a model meets mathematical standards but presents potential legal or regulatory concerns. The statutory defense of a “reasonable and bona fide” practice might fail if an insurer cannot prove they rigorously assessed alternatives. Guideline E-23 serves as a mechanism for generating this proof by establishing a unified approach to compliance risk factors throughout the model life cycle.1

  • Risk Rating and Management Intensity: Insurers should establish a risk rating that moves beyond financial materiality to include key dimensions of compliance risk. For rating and underwriting applications, the significance of human impact, the likelihood of discriminatory harm, and the required level of explainability are critical factors in the inherent risk rating. These ratings drive the downstream model life cycle, determining model usage limits, monitoring intensity, and the escalation of residual risk management decisions.
  • Model Rationale and Documentation: Model owners should provide a clear rationale for deployment that explicitly addresses market conduct and fair consumer outcomes. This includes documenting considerations for the required level of transparency and explainability, as well as a proactive assessment of the potential for biased outcomes, negative social and ethical implications, or privacy risks.
  • Model Data and Development: The guideline expands data governance requirements from primarily accuracy concerns to broader facets: data should be relevant, representative, compliant, traceable, and timely. Insurers should enhance model explainability by analyzing the potential for unwanted data bias to translate into unfair model outputs and associated reputational risks. Clear, consistent, and repeatable practices for model development should be established to ensure that explainability standards are met, with rigor varying based on regulatory requirements and the potential impact on customers.
  • Model Review and Deployment: E-23 requires independent model review to confirm that the model outputs are appropriately explainable and comply with performance expectations before the model impacts a consumer. Crucially, deployment might necessitate conditional approval subject to outcome monitoring to detect whether “fairness drift” occurs post-launch, ensuring that the model remains fair not just in the test environment, but in the real world.

By operationalizing these E-23 principles, insurers can ensure that the necessary evidence for the “Zurich Defense,” i.e., the proof of diligence and the testing of alternatives, is sufficient and documented as part of the standard, enterprise-wide control cycle.

The E-23 Perimeter: A risk-based expansion beyond ratemaking and underwriting

Before tiering models, the primary operational hurdle for complying with E-23 is defining the model inventory. The guideline’s expanded definition captures everything from advanced machine learning algorithms to heuristic end-user computing tools. Applying maximum life cycle governance to every model would cause operational paralysis. Therefore, the E-23 blueprint should be applied through a risk-based approach that is proportional to the level of model risk identified by insurers.

The regulatory dividend: Enterprise-wide confidence

While FSRA’s auto insurance guidance primarily targets rating and underwriting,2 OSFI E-23 mandates an expectation of enterprise-wide coverage, subject to materiality. By applying E-23 rigor to models outside the strict scope of pricing and underwriting, insurers provide provincial regulators with a higher level of confidence that consumer fairness is being managed holistically across the value chain.

The risk-based expansion can be illustrated through three tiers of operational reality:

  • High compliance risk models do not always calculate a premium; they can act as gatekeepers to the quoting process itself. Consider an algorithmic point-of-sale fraud model that evaluates a digital footprint. If an applicant is scored as “high risk,” the system intentionally injects quoting friction, such as blocking the direct-to-consumer online rate and forcing a manual broker call. If this model relies on proxy variables that systematically flag specific minority cohorts, it could constitute a discriminatory barrier to entry for a mandatory financial product. Because these models dictate fundamental, equitable access to coverage, those resulting in systematic, disparate barriers require a full “reasonable and bona fide” assessment. Insurers should use human impact assessment tools like the HRIA to prove the fraud variables capture genuine, causal risk rather than acting as protected-class proxies and explicitly demonstrate a lack of less discriminatory screening alternatives.
  • Medium compliance risk models prioritize convenience, creating an indirect fairness impact that requires lighter control. For example, a claims triage model that decides who gets instant approval versus standard handling creates a conduct risk if one group is systematically slowed down, but it does not accuse the customer of fraud. While these models may not demand an exhaustive assessment, they need sufficient pre-deployment proxy testing on historical data combined with automated post-deployment circuit breakers to ensure service level disparities remain within acceptable bounds.
  • Low compliance risk models have remote or nonexistent human impact. Applying fairness testing here would be a misuse of resources. For example, actuarial reserving models operate on aggregate data pools to ensure solvency. While crucial for financial stability, they do not make individual decisions about consumers. For these models, impact assessment tools like the HRIA are non-applicable. The focus remains on the traditional pillars of performance and stability. By explicitly categorizing these as low compliance risk that are subject only to light inventory requirements, the insurer demonstrates the “proportionality” required by OSFI, preserving resources for the highest impact models.
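
The “automated post-deployment circuit breakers” and “fairness drift” monitoring described above can be sketched in a few lines. The example below is a hypothetical illustration, not regulatory guidance: it tracks an adverse-impact-style ratio of instant-approval rates between two cohorts each week and trips when the ratio breaches an illustrative 0.80 threshold. Cohort labels, the threshold, and the simulated drift are all assumptions:

```python
import numpy as np

def disparity_ratio(outcomes, groups, favored="A", other="B"):
    """Ratio of favorable-outcome rates between two cohorts."""
    rate = lambda g: outcomes[groups == g].mean()
    return rate(other) / rate(favored)

def circuit_breaker(weekly_batches, threshold=0.80):
    """Return the first week the disparity ratio breaches the threshold."""
    for week, (outcomes, groups) in enumerate(weekly_batches, start=1):
        ratio = disparity_ratio(outcomes, groups)
        if ratio < threshold:
            return week, ratio  # escalate for review before further use
    return None, None

# Hypothetical monitoring data: cohort B's instant-approval rate drifts
# downward in week 3, while cohort A's stays at 0.90.
rng = np.random.default_rng(0)
batches = []
for drift in [0.0, 0.05, 0.20]:
    groups = np.where(rng.random(5000) < 0.5, "A", "B")
    p = np.where(groups == "A", 0.90, 0.88 - drift)
    batches.append(((rng.random(5000) < p).astype(int), groups))

week, ratio = circuit_breaker(batches)
print(f"breaker tripped at week {week}, ratio {ratio:.2f}")
```

In practice the threshold, cohort definitions, and favorable-outcome metric would come from the insurer’s fairness policy and applicable guidance; the breaker’s job is only to escalate, not to adjudicate.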

The path forward: Operationalizing E-23 to deliver fair consumer outcomes

While achieving compliance with the OSFI E-23 operational deadline is the baseline objective, its implementation and integration with broader market conduct goals offer a distinct advantage. In jurisdictions like Ontario, insurers who leverage E-23 to build a fair modeling ecosystem can position themselves favorably for supervisory proportionality, potentially achieving greater regulatory efficiency and speed to market for their rate filings.
To navigate this successfully, the industry should focus on:

  1. Integrated life cycle management: The end-to-end model life cycle should explicitly integrate model compliance parameters for fair consumer outcomes.
  2. Risk-based governance: Governance rigor should be proportional to the model compliance risk parameters such as bias, fairness, explainability, and human impact.
  3. Evidentiary escalation versus risk acceptance: Market conduct violations cannot be formally accepted like financial and insurance risks. Models exhibiting unmitigated disparate impact should be escalated to senior management and legal counsel strictly to validate the “Zurich defense” prior to deployment.

The convergence of E-23 and FSRA requirements on fair consumer outcomes represents the current trajectory. Actuaries should review their model inventories not just for financial materiality, but also for compliance materiality. As the industry transitions into a regulatory environment that demands higher transparency, proactively operationalizing risk-based fairness provides the essential infrastructure to navigate these evolving standards effectively.

Frederick Au, FCAS, is a member of the CAS Canada Race & Insurance Pricing Research Task Force and an actuary at TD Bank.

References

  1. Office of the Superintendent of Financial Institutions (OSFI). Guideline E-23: Model Risk Management (2027).
  2. Financial Services Regulatory Authority of Ontario (FSRA). Guidance: Automobile Insurance Rating and Underwriting Supervision (No. AU0142INT).
  3. Autorité des marchés financiers (AMF). Guideline for the Use of Artificial Intelligence. June 2025.
  4. Zurich Insurance Co. v. Ontario (Human Rights Commission), [1992] 2 S.C.R. 321.
  5. Law Commission of Ontario & Ontario Human Rights Commission. Human Rights Impact Assessment (HRIA) for AI. November 2024.
professionalinsight

Professionalism Briefs

Applicability Guidelines Example: Expert Testimony
By Mike Speedling, John Potter, and Kenneth Hsu, members of the CAS Professionalism Education Working Group and New Members Working Group
The Professionalism Education Working Group frequently publishes articles on topics related to actuarial professionalism, including clarifying how the Code of Professional Conduct and the Actuarial Standards of Practice (ASOPs) apply in various scenarios. Our work explores key aspects of professionalism and focuses on the importance of integrity, accountability, and adherence to professional standards in all areas of actuarial practice. If you need additional counseling resources, the Actuarial Board for Counseling and Discipline (ABCD) is available at abcdboard.org. To make this truly a learning and professionalism experience, we want your feedback. You can send your comments and questions to ar@casact.org.
In the March/April 2026 AR, we covered the three ASOPs applicable to all actuarial services regardless of the practice area. They are ASOP 1 – Introductory Actuarial Standard of Practice, ASOP 23 – Data Quality, and ASOP 41 – Actuarial Communications. We also talked about the Applicability Guidelines (AGs). To recap, the AGs are published by the Council on Professionalism and Education of the American Academy of Actuaries and aim to help actuaries consider which ASOPs may provide guidance based on the scope of their role. They are not definitive statements of which generally accepted practices apply to a specific task and should not replace the actuary’s professional judgment. The AGs, published as an Excel file, can be found on the Academy’s website: click the Professionalism tab > Actuarial Standards of Practice > Applicability Guidelines. You can also access them through the Understanding Professionalism link.

In this article, we will focus on AG item 4.0 under the Casualty tab: “Expert Advice, Witness, and/or Testimony.” The only ASOP listed under this heading is ASOP 17 – Expert Testimony by Actuaries. This ASOP should be used in conjunction with any standards relating to the subject on which you provide expert advice.

ASOP 17 was originally adopted in 1991, revised in 2002, and further updated in 2011 and 2018. The latest version became effective for all expert testimony provided by the actuary on or after December 1, 2018.

The ASOP defines some key terms including “Actuarial Assumption,” “Actuarial Method,” “Expert,” “Principal,” and “Testimony.” It defines an “Expert” as “someone who is qualified under the evidentiary rules applicable in the forum to testify as an expert, whether explicitly or by acceptance of the actuary’s testimony. An actuary who has been engaged to testify, or permitted to testify, with the expectation that the actuary will ultimately qualify as an expert is treated as an expert for purposes of this standard, even if the actuary does not testify or is later determined to not qualify as an expert.”

“Testimony” is defined as “a communication of opinions or findings presented in the capacity of an expert witness at trial, in hearing or dispute resolution, in deposition, by declaration or affidavit or by any other means through which testimony may be received. Such testimony may be oral or written.”

An “Expert” may explain complex technical concepts, so they can be understood by the audience receiving the testimony, most of whom may not be actuaries. Even though actuaries may differ in their conclusions, “a mere difference of opinion between actuaries does not suggest that an actuary has failed to meet professional standards.”

An “Expert” will ordinarily work closely with the attorney or other representative of the “Principal” and may reasonably rely upon the advice, information, or instruction provided concerning the meaning and requirements of the rules of evidence or procedure and any other applicable rules. “[R]elying on such advice … is not in violation of this standard….” The actuary should disclose if they believe that a relevant law or regulation contains a material conflict with appropriate actuarial practices, subject to the requirements of the forum, including without limitation all rules of evidence and procedure.

Let’s look at some hypothetical scenarios where an actuary may be called to provide expert testimony; we note that any similarities to actual events are purely coincidental.

An actuary is employed as an expert witness by the U.S. Internal Revenue Service in a case where a captive is experiencing consistently low loss ratios. A captive that takes in premium but rarely, or never, pays out losses may indicate a lack of risk transfer; in this case, the captive acts to shift pretax dollars into an entity with a lower tax burden. The expert may be called in to review frequency and severity assumptions to determine whether the premium is reasonable. In this type of situation, the actuary may want to consider additional ASOPs, such as ASOP 53 (Estimating Future Costs for Prospective Property/Casualty Risk Transfer and Risk Retention) and ASOP 38 (Catastrophe Modeling), when performing their assessment.

Another example is a case of arbitration between two insurers, where one has purchased a subsidiary from the other. In this case, the subsidiary has experienced a deterioration in loss ratios since being purchased, and the purchasing insurer alleges that the subsidiary’s liabilities were materially understated. For this situation, expert testimony may involve a third-party, independent actuary performing an analysis of reserve estimates at the time of sale to determine whether the methods and assumptions used were outside of a reasonable range. The actuary may leverage ASOP 43 (Property/Casualty Unpaid Claim Estimates) and ASOP 23 (Data Quality) in their determination.

A third case where an actuary may provide expert testimony is in a regulatory rate hearing. The actuary may provide evidence that an insurer’s rate increase is excessive compared to its trends in loss, expense, and investment income. They may also opine about whether a company has a target profit that is excessive compared to the risk being insured. ASOP 13 (Trending Procedures in Property/Casualty Insurance) and ASOP 29 (Expense Provisions for Prospective Property/Casualty Risk Transfer and Risk Retention) may be cited by the actuary in their testimony.

Actuaries providing expert testimony may benefit from this general guidance on best practices:

  1. Uphold professional independence and integrity: We, as actuaries, should maintain our independence from the client and avoid being an advocate for a specific side or outcome. We should be honest and not let pressure influence our conclusions; sometimes, this may include turning down assignments. Upholding actuarial professional integrity should always be a priority.
  2. Documentation and technical rigor: Be extremely detailed and meticulous with your documentation. Try to anticipate what the opposing side may challenge, even the smallest details. It’s also not uncommon that you rely on other experts (e.g. catastrophe modelers, claims experts, or attorneys), though this reliance should be disclosed.
  3. Anticipate the adversarial nature of the process: The environment is inherently adversarial; however, most matters are resolved in arbitration rather than at trial, and opposing sides can often reach a middle ground where compromises are agreed upon. Cross-examinations in depositions and trials are also adversarial in nature and require careful preparation with attorneys.
  4. Communication and audience awareness: Very often, we will not be delivering our findings to experts. We must understand the background of our audience and assess their level of knowledge. Whether we are presenting our findings to judges, juries, or arbitrators, we must be able to communicate clearly.

Most of this general guidance can be applied to our daily work, hopefully minus the adversarial, disputed context. You may refer to ASOP 17 for additional guidance on hypothetical questions, cross-examination, and other related topics.

Understanding ASOP 17 will help make the expert witness process clearer, more consistent, and more professional. This is just an example of how to use the Applicability Guidelines for a specific Description of Assignment. There are six more major categories that we encourage you to explore and consider how each aligns with your practice.

When was the last time you referred to the Applicability Guidelines? We want to hear your thoughts at ar@casact.org.

actuarialexpertise
Increased Limit Factors: A Modified Riebesell Form
By XIAOXUAN (SHERWIN) LI AND YICHUN CHI
German actuary Paul Louis Riebesell proposed the popular “Riebesell form” for increased limit factors (ILFs) in the 1930s. It is still commonly used in the pricing of liability insurance and reinsurance around the world because it is convenient to apply in practice and its single parameter is easy to estimate. However, the Riebesell form of ILFs can prove too heavy-tailed when applied to some liability insurance lines in certain insurance markets. Here we propose a “modified Riebesell form” that can fit empirical ILFs better in such scenarios.

The background of ILFs
ILFs are one of the core tools for pricing and risk assessment in liability insurance. Actuaries typically use ILFs to compute the loss costs of different policy limits from the loss cost of the basic limit, which has the highest credibility.

Loss Cost(Increased Limit) = Loss Cost(Basic Limit) · ILF(Increased Limit / Basic Limit).

The essence of ILFs is to quantify the multiplicative relationship between the loss cost of the basic limit and the loss costs of other policy limits. The ILF can be formally defined as follows:

ILF(M) = ILF((Increased Limit) / (Basic Limit)) = LAS(Increased Limit) / LAS(Basic Limit),

where M is the multiple between the increased limit and the basic limit, while LAS stands for Limited Average Severity defined as:

LAS(Limit) = E[min(Loss, Limit)] = ∫₀^Limit L · f(L) dL + Limit · [1 − F(Limit)].

Here, f(L) and F(L) are the probability density function and the cumulative distribution function (CDF) of the loss, respectively. In other words, the LAS for a given limit is the expected value of severity capped at the given policy limit.
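
To make the LAS definition concrete, the sketch below uses an illustrative exponential severity with a $50,000 mean (chosen for its simple closed form; it is not a distribution from this article), compares an empirical capped mean against the closed-form E[min(X, Limit)], and forms an ILF as the ratio of two LAS values:

```python
import numpy as np

theta = 50_000.0  # illustrative mean severity (assumption)
rng = np.random.default_rng(1)
losses = rng.exponential(theta, size=1_000_000)

def las_empirical(losses, limit):
    """Limited Average Severity: mean of losses capped at the limit."""
    return np.minimum(losses, limit).mean()

def las_exponential(theta, limit):
    """Closed form E[min(X, limit)] for an exponential severity."""
    return theta * (1.0 - np.exp(-limit / theta))

basic, increased = 100_000.0, 500_000.0
ilf = las_empirical(losses, increased) / las_empirical(losses, basic)
print(f"empirical ILF(5)   = {ilf:.4f}")
print(f"closed-form ILF(5) = {las_exponential(theta, increased) / las_exponential(theta, basic):.4f}")
```

The capped mean and the closed form agree to within sampling error, illustrating that the LAS for a given limit is simply the expected severity capped at that limit.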

The original Riebesell form and its limitations
The curve of ILFs often depends on the distribution of losses. However, Paul L. Riebesell introduced a convenient scale-invariant formula for ILFs. The original Riebesell form for ILFs is given by

ILF(M) = r^(log₂ M),

where r is the Riebesell factor and M is the multiple between the increased limit and the basic limit as defined above.

The Riebesell factor r has a convenient property in the practice of liability insurance pricing. It is the relativity for the loss cost of two times the basic limit divided by the loss cost of the basic limit, and it is also equal to the relativity for the loss cost of four times the basic limit divided by the loss cost of two times the basic limit, and so on. Therefore, if the Riebesell form works well in practice, we can easily obtain the Riebesell factor by dividing the loss cost of two times the basic limit by the loss cost of the basic limit. The Riebesell form may be quite suitable for some heavy-tailed liability risks, such as the product liability line in the U.S.
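
This constant-ratio property is easy to verify numerically. The snippet below is a quick sketch using an illustrative factor of r = 1.157 (the value that appears later in the article’s example): for any M, doubling the limit multiplies the ILF by exactly r.

```python
from math import log2

def ilf_riebesell(M, r):
    """Original Riebesell form: ILF(M) = r ** log2(M)."""
    return r ** log2(M)

r = 1.157  # illustrative Riebesell factor
ratios = [ilf_riebesell(2 * M, r) / ilf_riebesell(M, r) for M in (1, 2, 4, 10)]
print(ratios)  # each ratio equals r, regardless of the starting multiple M
```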

However, for some other liability risks that are not so heavy-tailed, such as general liability insurance in China, the Riebesell form often does not work well. It is often identified that the relativity for the loss cost of four times the basic limit divided by the loss cost of two times the basic limit is smaller than that of the relativity for the loss cost of two times the basic limit divided by the loss cost of the basic limit. As well, the relativity for the loss cost of eight times the basic limit divided by the loss cost of four times the basic limit is usually smaller than that of the relativity for the loss cost of four times the basic limit divided by the loss cost of two times the basic limit, and so on. The exact rate of ILF decay may depend on different markets’ litigation environments and how quickly liability claims escalate through towers of coverage — and the Riebesell form is too inflexible to reflect this.

The mathematical principle behind the original Riebesell form
Mathematically, the original Riebesell form has a more general power-law expression:

ILF(M) = M^s,

where s = log₂ r. This result follows from rearranging and rebasing the formula of the previous section as follows:

ILF(M) = r^(log₂ M) = r^(ln M / ln 2) = (r^(1 / ln 2))^(ln M)
= (e^(ln r / ln 2))^(ln M) = e^((ln M · ln r) / ln 2)
= (e^(ln M))^(ln r / ln 2)
= M^(ln r / ln 2) = M^(log₂ r) = M^s.
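
This chain of identities can be spot-checked numerically; the minimal sketch below (with an illustrative r) confirms that r^(log₂ M) and M^(log₂ r) agree:

```python
from math import log2, isclose

r = 1.157  # illustrative factor
s = log2(r)
for M in (1.5, 2.0, 5.0, 32.0):
    # the two equivalent forms of the Riebesell ILF
    assert isclose(r ** log2(M), M ** s)
print(f"identity holds; s = log2({r}) = {s:.4f}")
```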

In order for the original Riebesell form to hold, the underlying loss distribution of the liability insurance must be heavy-tailed enough to satisfy the CDF:

F(x) = 1 − a · x^(s−1),

where s must be less than 1 (that is, r < 2) and x must be greater than a^(1/(1−s)) for F(x) to be a valid CDF1. It can be shown that the expected value under this CDF does not exist, and in practice the distribution proves too heavy-tailed for some liability insurance products. Gary Venter identified this problem in one of his articles2, in which he characterized the above CDF as a Pareto distribution with shape parameter less than one.

The modified Riebesell form
To address the excessively heavy tail of the original Riebesell form, we propose the modified Riebesell form shown below:

ILF(M) = r^((log₂ M)^α).

The modified Riebesell form has two parameters, of which α controls the tail shape; usually, α is less than one. The original Riebesell form is the special case of the modified form with α = 1. Under the modified Riebesell form, the tail of the ILF curve becomes thinner as α decreases, as shown in Figure 1.

Figure 1: The Comparison of ILF Curves
Line graph showing Modified Riebesell ILF Curves varying by alpha values at a constant rate.
Lower values of α have the potential to be more effective in excess layer pricing for liability insurance products that are not so heavy-tailed. The underlying calculations are quite simple. Illustrative calculations at M = 5 (an increased limit of five times the basic limit), using α values of 0.1 and 1.0, are presented below:

1.403 = 1.157^2.322 = 1.157^((log₂ 5)^1.0)
1.172 = 1.157^1.088 = 1.157^((log₂ 5)^0.1)

More information on selection of r = 1.157 is presented in the next section.
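
These two illustrative values can be reproduced directly from the modified form; a short sketch (note that the α = 1.0 case collapses to the original Riebesell form):

```python
from math import log2

def ilf_modified(M, r, alpha):
    """Modified Riebesell form: ILF(M) = r ** (log2(M) ** alpha)."""
    return r ** (log2(M) ** alpha)

r = 1.157
print(round(ilf_modified(5, r, 1.0), 3))  # 1.403, the original-form value at M = 5
print(round(ilf_modified(5, r, 0.1), 3))  # 1.172, a much thinner tail
```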

An example of the modified Riebesell form
Because the modified Riebesell form has two parameters, r and α, we must estimate two values rather than the single parameter of the original Riebesell form. From a mathematical point of view, we could solve for both parameters simultaneously by minimizing a loss function. In practice, for simplicity, we could also estimate the parameter r first and then the parameter α.

For illustration, we execute both approaches and compare their results to empirical ILFs for a simulated portfolio of Chinese general liability losses. For the original Riebesell form, we directly use the empirical ILF at two times the basic limit as the estimate of the parameter r, which is 1.157 (the same value of r used to produce the curves in Figure 1). Minimizing a mean squared error (MSE) loss function for the modified Riebesell form then yields a fitted parameter α of 0.238.
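
The two-step estimation can be sketched as follows. The empirical ILFs below are hypothetical stand-ins (the article’s simulated Chinese general liability data is not reproduced here), so the fitted α differs from the article’s 0.238; the structure of the procedure is the point:

```python
import numpy as np

# Hypothetical empirical ILFs by limit multiple (illustrative stand-ins).
multiples = np.array([2.0, 4.0, 8.0, 16.0])
empirical = np.array([1.157, 1.25, 1.31, 1.35])

def ilf_modified(M, r, alpha):
    """Modified Riebesell form: ILF(M) = r ** (log2(M) ** alpha)."""
    return r ** (np.log2(M) ** alpha)

# Step 1: take r directly from the empirical ILF at twice the basic limit.
r = empirical[0]

# Step 2: grid-search alpha to minimize the mean squared error.
alphas = np.linspace(0.01, 1.0, 1000)
mse = [np.mean((ilf_modified(multiples, r, a) - empirical) ** 2) for a in alphas]
alpha_hat = alphas[int(np.argmin(mse))]
print(f"fitted alpha = {alpha_hat:.3f}")
```

A grid search keeps the sketch dependency-free; in practice a numerical optimizer (or simultaneous fitting of r and α) would do the same job.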

Figure 2: Comparison to Empirical ILF Curves
Line graph comparing Empirical ILFs against various Riebesell approximations, showing different model fits.
Alternatively, when we fit both parameters simultaneously, we obtain estimates of r = 1.150, α = 0.291. It can be seen in Figure 2 that the modified Riebesell forms work better than the original Riebesell form in approximating the empirical ILFs at higher limits, although they overestimate the empirical ILF slightly at lower increased limits.
Summary
The original Riebesell form for ILFs is most applicable to pricing very heavy-tailed liability insurance products. For not-so-heavy-tailed liability insurance, we created the modified Riebesell form and presented examples where it works better than the original form. Depending on empirical claim patterns and on how much pressure exogenous factors such as social inflation exert on excess layers, we can adjust both parameters, α and r, rather than r alone, to reflect those forces more effectively.
Xiaoxuan (Sherwin) Li, FCAS, CCRMP, is the head of Risk R&D Center for PICC P&C in China and the former chairperson of the CAS Asia Regional Committee. Dr. Yichun Chi is a professor of actuarial science at the China Institute for Actuarial Science, Central University of Finance and Economics, in China.
  1. The derivation of F(x): From ILF(M) = M^s = E[min(X, M·B)] / E[min(X, B)] = (∫₀^(M·B) [1 − F(x)] dx) / E[min(X, B)], we obtain ∫₀^(M·B) [1 − F(x)] dx = E[min(X, B)] · M^s. Differentiating both sides with respect to M gives [1 − F(M·B)] · B = E[min(X, B)] · s · M^(s−1), which implies F(M·B) = 1 − E[min(X, B)] · s · B^(−1) · M^(s−1). Substituting y = M·B (i.e., M = y/B) yields F(y) = 1 − E[min(X, B)] · s · B^(−1) · (y/B)^(s−1) = 1 − E[min(X, B)] · s · B^(−s) · y^(s−1). Since E[min(X, B)] · s · B^(−s) is a constant independent of y, we may write F(y) = 1 − a · y^(s−1).
  2. Gary Venter’s article may be found at http://www.garyventer.com/wp-content/uploads/2018/09/Venter-Pagliaccio-2005-Distributions-Underlying-Power-Function-ILF-%E2%80%99-s-Riebesell-Revisited-.pdf
New in 2026

The CAS AI Primer

Artificial intelligence is transforming how actuaries work, analyze data, and deliver insights. It offers tremendous potential to enhance efficiency, accuracy, and business impact across the insurance value chain. However, AI tools also introduce new categories of risk and governance challenges. A new CAS AI Primer offers a starting point for actuaries in their AI adoption journey. It will:

  • Provide a concise overview of AI concepts and applications relevant to actuarial work.
  • Highlight potential risks and outline best practices for responsible AI use.
  • Outline key corporate and regulatory considerations that shape AI implementation in actuarial contexts.
  • Direct readers to trusted learning resources for building deeper AI literacy and practical skills.

It’s a Puzzlement

By Jon Evans
The Programmer’s Eternal Loop
Lila, a lead software architect at an AI research lab, is debugging a self-modifying neural-network training script. Every second, the remaining unreviewed portion of the code grows by exactly 1% of its current length as new training data continuously streams in. Lila can examine and fix code at a constant rate of 100 lines per second. When she sits down to begin, there are exactly 1,000,000 lines left to review.

Will Lila ever finish debugging the entire script? If so, exactly how many seconds will it take her?

Extra credit
Suppose instead that the code grows by r% per second, where r can be any positive real number. For which values of r does Lila finish in finite time, and what is the general formula for the time required?
A friendly circle of debt
This solution was submitted by Stsiapan Dziamentsyeu.

No one was cheated, because we can think of an identical situation in which everyone in the circle agrees to clear their mutual debts. For example: Alice owes Bob $100 and Charlie owes Alice $100, so Alice can clear her debt by having Charlie owe Bob $100 instead. Everyone does this until only two people are left who owe each other $100; they agree to clear it.

Now Alice wins $50 from a scratch-off. The end result is the same: Alice keeps $50, and no one has debt.

Know the answer? Send your solution to ar@casact.org.
Thanks for reading our latest issue!