Contents
departments
- EDITOR’S NOTE
- When History Informs the Future
- President’s Message
- Let Your Voice Be Heard!
- Member News
- Comings and Goings
- Calendar of Events
- CAS Staff Spotlight
- CAS and Peking University Sponsor 14th Annual Actuarial Month
- CAS Announces Winners of the 2025 Peak Re-Sponsored ARECA Case Competition
- Every CAS Member Has a Signature. What’s Yours?
- Professional Insight
- Developing News
- CAS Hosts RPM Fireside Chat with Jeffrey Ma on Unlocking Innovation
- Insurtech Is Dead. Long Live Insurtech
- Leveraging Actuarial Guardianship for AI Governance
- Professionalism Considerations for Snowmageddon
- Global Actuarial Pricing and the Regulatory Evolution
- Bringing Innovation to Pricing for Changing Vehicle Features and Volatile Values at Risk
- Operationalizing Canada’s Federal Guideline OSFI E-23 – Model Risk Management to Deliver Fair Consumer Outcomes
- Professionalism Briefs
- Actuarial Expertise
- Increased Limit Factors: A Modified Riebesell Form
- Solve This
- It’s a Puzzlement
on the cover
- Explore how insurance evolved from ancient risk-sharing practices into a cornerstone of modern economies and how its history can guide actuaries in addressing emerging risks.
- AI agents are rapidly transforming business operations while introducing new and difficult-to-price liability risks across cyber, E&O, and general liability lines.
- An interview with Scott Shambaugh on his experience being targeted by an AI agent—and the broader risks AI poses for open-source communities and actuarial work.
The amount of dues applied toward each subscription of Actuarial Review is $10. Subscriptions to nonmembers are $50 per year. Postmaster: Send address changes to Actuarial Review, 4350 North Fairfax Drive, Suite 250, Arlington, Virginia 22203.
Masthead
-
Editor in Chief
Jim Weiss
-
CAS Director of Publications and Research
Elizabeth A. Smith
-
AR Managing Editor and CAS Editorial/Production Manager
Sarah Sapp
-
CAS Managing Editor/Contributor
Greg Guthrie
-
CAS Graphic Designer
Sonja Uyenco
-
CAS Cross-Functional Coordinator/Contributor
Delilah Barrow
-
News Editor
Sara Chen
-
Opinions Editor
Richard B. Moncher
-
Editors
- Colleen Arbogast
- Daryl Atkinson
- Karen Ayres
- Glenn Balling
- Robert Blanco*
- Lisa Brown
- Michael Budzisz
- Sumanth Chebrolu
- Todd Dashoff
- Daniel Jay Falkson*
- Stephanie Groharing
- Julie Hagerstrand
- Srinand N. Hegde*
- Cameron Herrmann*
- Kenneth S. Hsu
- Cindy Hu*
- Jack Huang*
- Rachel Hunter*
- Rob Kahn*
- Benyamin Kosofsky
- Julie Lederer
- Albert Lee
- David Levy
- James Li*
- Sydney McIndoo
- Stuart Montgomery
- Sandra Maria Nawar*
- Erin Olson
- Shama S. Sabade
- Michael Schenk
- Robert Share
- Craig Sloss
- Jared Smollik
- Andrew Somers*
- Bella Thiel*
- Isaac Wash*
- Radost Wenman
- Ian Winograd
- Vanessa Wu*
- Xuan You*
- Yuhan Zhao*
-
*Writing Staff
-
Puzzle
Jon Evans
-
Advertising
Al Rickard, 703-402-9713
arickard@assocvision.com

The Casualty Actuarial Society is not responsible for statements or opinions expressed in the articles, discussions or letters printed in Actuarial Review.
For permission to reprint material from Actuarial Review, please write to the editor in chief. Letters to the editor can be sent to AR@casact.org or the CAS Office. To opt out of the print subscription, send a request to AR@casact.org.
Images: Getty Images
© 2026 Casualty Actuarial Society.
ar.casact.org
When History Informs the Future
Insurance is often viewed through a modern lens, but its roots stretch back thousands of years. Our cover story revisits that history, exploring how early innovations in risk sharing laid the groundwork for today’s global insurance systems and how those same principles continue to inform the future of the profession.
That forward-looking lens is especially relevant in this issue, where we turn from history to the rapidly evolving risks of today. One article examines the emerging liability landscape of AI agents, highlighting how autonomous systems are already creating complex, difficult-to-underwrite exposures across cyber and professional lines. Alongside it, a firsthand interview with engineer Scott Shambaugh offers a human perspective on these same technologies, illustrating how agentic behavior can manifest in unexpected and sometimes harmful ways in real-world communities. Together, these pieces underscore a familiar theme: While the tools may change, the challenge of understanding, managing, and assigning risk remains at the heart of the actuarial profession.
We bring you five session recaps from the Ratemaking, Product and Modeling (RPM) Seminar, held March 16–18 in Chicago, including sessions on navigating risk with machine learning; professionalism in climate-driven catastrophe risk; leveraging insurtech now that the hype is over; comparing actuarial pricing across the globe; and generative and agentic AI, regulation, and the actuary. We also delve into the brand work the CAS has been doing, telling the story of the philosophy behind the endeavor. Learn about the evolution of the brand and see the new look firsthand.
We conclude with a technical contribution that reflects the profession’s continued evolution in practice. Revisiting a foundational tool in liability pricing, the authors introduce a modified Riebesell form for increased limit factors—offering a more flexible approach for modeling risks that are not as heavily tailed as traditional assumptions suggest. By refining a long-standing actuarial method, the article highlights how even well-established frameworks must adapt to better reflect real-world experience, reinforcing the ongoing balance between theory and application that defines actuarial work.
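For readers unfamiliar with the classical rule the article modifies: Riebesell’s rule assumes each doubling of the policy limit multiplies the premium by a fixed factor (1 + z), which implies a power-form increased limit factor, ILF(L) = (L / base limit)^r with r = log2(1 + z). The sketch below illustrates only this classical form, not the authors’ modification; the function name and parameters are illustrative choices of ours.

```python
import math

def riebesell_ilf(limit: float, base_limit: float, z: float) -> float:
    """Classical Riebesell increased limit factor.

    Riebesell's rule: doubling the limit multiplies premium by (1 + z),
    which implies the power form ILF(L) = (L / base_limit) ** r,
    where r = log2(1 + z).
    """
    r = math.log(1.0 + z) / math.log(2.0)
    return (limit / base_limit) ** r

# With z = 20%, doubling the limit scales the ILF by a factor of 1.2.
z = 0.2
base = 1_000_000
ilf_1m = riebesell_ilf(1_000_000, base, z)  # 1.0 at the base limit
ilf_2m = riebesell_ilf(2_000_000, base, z)  # ~1.2, i.e., (1 + z)
```

Because the implied severity tail under this rule can be heavier than observed experience supports, modifications such as the one discussed in the article adjust the form for less heavily tailed risks.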
Enjoy the issue!
Actuarial Review
Casualty Actuarial Society
4350 North Fairfax Drive, Suite 250
Arlington, Virginia 22203 USA
Or email us at AR@casact.org
Let Your Voice Be Heard!
Election season is nearly upon us, and before we know it we’ll be sifting through candidate profiles and campaign messages, reading articles, talking to peers, and watching interviews and campaign videos to determine who deserves our vote. Which candidates have the experience and background to deal with the challenges we face? What issues are most important to me, and who can I trust to represent my interests? Who do I trust to make sound, principled decisions on issues that may arise in the future? Who shares my values?
Yes, elections are serious business and require us as voters to make informed choices — whether we’re talking about the U.S. midterm elections in November 2026 or the CAS elections in July 2026. I encourage eligible members to exercise their rights and responsibilities and vote in both important elections, but as CAS president, I want to focus on our upcoming CAS elections.
The CAS Board noted that participation in CAS elections has dropped in each of the past several years and recently undertook a survey of nonvoting, eligible members to better understand what may be driving this trend. When asked to identify the primary reason for not voting in the most recent CAS election, respondents identified several factors (see Table 1).
Table 1.
Forgetting to vote (14%) and lack of knowledge regarding the role of the CAS Board (7%) are areas where the CAS will take action as well.
When asked what the CAS could do differently to motivate eligible members to vote, respondents identified several potential changes (see Table 2).
Table 2.
- Make elections feel meaningful and competitive.
- Improve understanding of the Board’s role, impact, and track record.
- Enhance candidate visibility and engagement.
- Broaden representation and diversity of viewpoints.
- Recognize voting rights and inclusion concerns.
- Acknowledge that some nonvoting is unavoidable.
The CAS will be sending additional reminders this election cycle and will also look at ways to ensure members better understand the respective roles of the board, president, and president-elect. We will also be modifying some of the candidate information to provide more distinguishing detail to assist members in their voting deliberations. The topic of competitive elections for president-elect has been discussed; while it may receive consideration in the future, near-term efforts will focus on communication, improved candidate information and engagement, and clearer information concerning the roles of the various parties.
On the topic of competitive elections, I think it is useful to remind members that while the Nominating Committee traditionally has identified only a single nominee for president-elect, there is a vehicle for additional candidates to nominate themselves through the preferential ballot process, an outcome that has occurred in past CAS elections. Historically, the time demands of the president’s role have often made it a challenge to identify a single candidate in some years, though with the enhanced capabilities of the CAS staff, the presidential role is somewhat less demanding than in years past, and more candidates might be willing to accept a nomination. The Board elections are already competitive, with eight nominees vying for four seats in recent elections. This has possibly contributed somewhat to the feeling of not having sufficient differentiating information; multiple candidates can have somewhat similar backgrounds and viewpoints on key issues, even as the Nominating Committee diligently works to identify a diverse and representative slate of candidates.
One word of caution regarding competitive elections: the notion of a competitive election can very well encourage some degree of politicization and polarization within the CAS community, which is something we have largely avoided for the past century and more of our existence. My personal view is that I would not want to see competitive elections implemented solely as a tool to increase voter participation, as the unintended consequences may well lead to bigger challenges than low voter turnout.
While the CAS Board and Executive Council implement improvements to the election process and communications in response to the survey results outlined above, I want to encourage eligible members to invest the time needed to be informed voters in the upcoming CAS elections and make your voices heard. This is our Society and we have both the privilege and responsibility to select leaders to ensure the continued growth and success of the CAS for current and future generations of actuaries.
Actuarial Review Letters Policy
Letters shall not contain personal attacks or statements directly or implicitly denigrating the characters of individuals or particular groups; false or unsubstantiated claims; or political rhetoric. Letters should be no more than 250 words and must include the author’s name and phone number or email address, so the editorial staff can confirm the author. Anonymous letters will not be published. There shall be no recurrence of topics; issues previously addressed will not be the subject of continued letters to the editor, unless new and pertinent information is provided. No more than one letter from an individual can appear in every other issue. Letters should address content covered in AR. Content regarding the CAS Board of Directors or individual departmental policies should be directed to the appropriate staff and volunteer groups (e.g., board, working groups, committees, task forces, or councils) instead of AR. No letter that attempts to use AR as a platform for an ulterior purpose will be published. Letters are subject to space limitations and are not guaranteed to be published. The AR editorial volunteer and staff team reserves the right to edit any submitted letter so that it conforms to this policy. Decisions to publish letters and make changes to submissions shall be made at the discretion of the AR Working Group and CAS staff.
For more information on AR editorial policies, visit here.
Comings and Goings
Lee Bowron, ACAS, MAAA, published “The Kerper-Bowron Method: A Foundational Change for Service Contract Claim Estimation and Accounting” in the journal Risks. The paper concerns forecasting expected losses and cancellations for service contracts.
Wesley Griffiths, FCAS, was appointed executive fellow and program director for the risk management and insurance (RM&I) program at the University of St. Thomas, while continuing to serve as AVP and senior actuary at Travelers. In this role, Griffiths will oversee the undergraduate RM&I certificate and drive program growth through expanded academic offerings, experiential learning opportunities, and engagement with industry partners.
Scott Henck, FCAS, MAAA, CPCU, has been appointed senior vice president and chief actuary at Chubb Limited. In his new role, Henck will oversee all actuarial functions, including reserving, pricing, and capital performance measurement. Henck brings nearly three decades of insurance industry experience to the role. He joined Chubb in 2002 and most recently served as chief actuary of North America. Prior to that role, he founded and led the actuarial insights, business intelligence, and advanced analytics unit for global claims.
Calendar of Events
-
July 28–September 1, 2026
2026 CAS Virtual Workshop: Introduction to Python for P&C Insurance
-
September 14–16, 2026
2026 Casualty Loss Reserve Seminar
Las Vegas, NV
-
November 8–11, 2026
2026 CAS Annual Meeting
Honolulu, HI
Actuaries have both tremendous power and a humbling responsibility with regard to insurance company solvency. By virtue of the rigorous education required to achieve credentials from the CAS, an actuary attains a unique stature in the insurance community. With that stature comes the professional responsibility to provide opinions pertinent to the solvency of state-regulated insurance companies.
We act neither as agents of the domiciliary regulator nor as advocates for the insurance entity when we render formal statements of actuarial opinion (SAO). Our responsibility is to provide an independent, unbiased opinion as to the reasonableness of the company’s held accrual for its unpaid loss and loss adjustment expense obligations.
Virtually every communication made by an actuary in a professional capacity is considered an SAO. However, formal, prescribed SAOs — ones required by statute, regulation or other legally binding authority — involve de facto certifications that held accruals are reasonable.
I have heard it said many times that, as actuaries, we do not “certify” reserves but rather render an opinion as to their reasonableness. However, consider that most prescribed SAOs involve at least three representations, including:
- Held reserves meet the requirements of the insurance laws of domicile;
- Held reserves are consistent with reserves computed in accordance with accepted loss reserving standards of practice promulgated by the Actuarial Standards Board (ASB); and,
- Held reserves make a reasonable provision in the aggregate for all unpaid loss and loss adjustment expense obligations of the Company under the terms of its contracts and agreements.
Collectively, these three representations entail a “certification” that the Company’s held reserves are reasonably stated.
Given that SAOs are generally public information, such documents are often an actuary’s most public-facing communication. Our opinions are not only given considerable weight by auditors and regulators but also impose an immense responsibility on us as professionals. To the extent a Company has solvency difficulties, it is certain that the SAOs rendered in prior years will be subject to scrutiny.
Actuaries sometimes deliver very unwelcome news regarding reserve adequacy…or inadequacy. Often, the Company will adjust its booked amounts to be within the actuary’s range of reasonable reserve indications… but not always.
Over the course of my 40-plus years in the consulting business, I have been involved — directly or indirectly — with at least two dozen insolvencies. In most situations, the slide towards insolvency was gradual. In other cases, poor decisions by company management or departments (e.g., marketing, underwriting or claims) contributed to adverse financial results.
In working with a company in precarious financial condition — especially in the consulting world — there is a natural human tendency to “go along to get along.” That is, preservation of the client relationship may influence one’s judgments. Moreover, if company management were to ask for consideration to allow more time to emerge from a difficult financial situation, there may be an inclination to soften a few assumptions here and there to achieve the desired result.
Another human emotion may also come into play: the actuary does not want to be the individual responsible for putting people out of work. As professionals, we simply must not allow our human emotions to influence professional judgment when a company’s solvency is at stake. I would submit that any actuary who doesn’t have the stomach to make a hard call such as this should refrain from taking on the responsibility of rendering an SAO.
We must be mindful of both the intended and secondary users of our work products. The intended users of our reports are typically company management (and the company’s Board of Directors), auditors, and regulators. Other users may include company shareholders, rating agencies, reinsurers, brokers, other actuaries, and even the Actuarial Board for Counseling and Discipline (ABCD).
In situations where a company is facing solvency difficulties, there is a real danger of the actuary being co-opted. That is, the actuary may convince himself that the impact of operational changes at the company or in the jurisdiction in which business is written — as represented by company management — is greater than what a dispassionate observer would deem reasonable. There are no flashing red lights indicating when a professional is wading into dangerous waters; however, an independent peer reviewer goes a long way toward avoiding such perils. By virtue of being credentialed, actuaries have an affirmative obligation to render SAOs that will withstand scrutiny.
Actuaries have tremendous power as it relates to insurance company solvency. Our work product just may lead to an insurer shutting its doors and laying off staff. Given the function we serve, auditors and regulators rely on our opinions, and we should take the responsibilities associated with the credentials provided to us by the CAS seriously.
- In some jurisdictions, like Bermuda, the “reasonable” opinion is replaced by an “adequate” standard.
CAS Staff Spotlight
Meet Holly Davis, Website Portfolio Manager

Holly Davis
Welcome to the CAS Staff Spotlight, a column featuring members of the CAS staff. For this spotlight, we are proud to introduce you to Holly Davis.
- What do you do at the CAS? How does your role support the Strategic Plan?
As website portfolio manager on the IT team, I manage web content and governance across CAS platforms, working closely with colleagues Cecily Marx and Tia Puckett. My current focus is leading a major website transition to a new content management system (CMS) while tackling long-standing functional issues like search, navigation, and site bloat. The website is often the first and most frequent touchpoint people have with the CAS, so keeping it functional, findable, and on-brand has a direct impact on several strategic priorities. For example, the CMS transition supports “fostering strategic expansion” by building a more scalable foundation for our digital presence, and improved information architecture supports “enhancing the candidate experience” by making it easier for aspiring actuaries to find what they need.
- What inspires you in your job and what do you love most about it?
I’m genuinely energized by the puzzle-solving side of this work; troubleshooting is one of my favorite parts of the job. But what really drives me is the data: watching how people interact with a website, understanding the psychology behind their behavior, and using those insights to make the experience better. It’s a natural fit for me because this role actually marries my two undergraduate degrees in computers and psychology. I get to use both every day.
- Describe your educational and professional background. What do you bring to the organization?
I graduated with honors from Greenville University, studying psychology and digital media — a combination that turned out to be a perfect foundation for a career in web. Over the last 15 years I’ve worked as a web manager across a wide range of organizations: statewide nonprofits, million-dollar e-commerce operations, and higher education institutions. That variety has given me a broad tool kit and a lot of adaptability. What I bring to the CAS is that depth of cross-sector experience paired with a genuine curiosity about how people use the web. I’ve seen a lot of what works and what doesn’t, and I know how to ask the right questions before jumping to solutions.
- What is your favorite hobby outside of work?
My favorite hobby is collecting hobbies! I do creative videography, sewing and garment design, painting, fiction writing, and I’ve been experimenting with photography — and somehow, I keep finding room for more. I’m really drawn to making things and stretching my creative skills.
- If you could visit any place in the world, where would you go and why?
Ireland! I’m fascinated by old castles and there’s nowhere quite like the Irish countryside for that. But until I make that trip happen, Cinderella Castle at Disney World will have to do.
- What would your colleagues find surprising about you?
I’ve been running a videography business on the side for almost 10 years. I shoot weddings and creative video projects under my own brand, which means most weekends I’m behind a camera somewhere. It’s a completely different world from web management, but honestly the same skills show up: storytelling, attention to detail, and knowing your audience.
- How would your friends and family describe you?
Quiet at first, but give it a few minutes. I have a pretty deadpan sense of humor that tends to catch people off guard. I’m unabashedly nerdy. I’m the person to call when you need a trivia question answered, which actually happened just a couple of days ago.
CAS Announces Winners of the 2025 Peak Re-Sponsored ARECA Case Competition
The CAS is proud to announce the winners of this year’s Peak Re-sponsored CAS ARECA Case Competition. Organized by the CAS Asia Region Casualty Actuaries (ARECA) regional affiliate members and generously sponsored by Peak Re, this annual event continues to foster the next generation of general insurance talent across Asia.
The subject of the challenge this year was catastrophe analysis, and 44 teams from 19 universities, spanning Australia, China, India, Indonesia, Malaysia, Nepal, Singapore, and Vietnam, competed in the first round.
The top three teams took home cash prizes ranging from $1,000 to $2,500, along with certificates of achievement and free CAS exam registrations to support their professional journeys.
- 1st place winners: Hayden Siew Men Lek, Rhenu Chandran, Toh Yi Hui, UCSI Malaysia
- 2nd place winners: Pua Xin Yee, Tan Shu Ting, Lim Zhi Wei, University of Malaya, Malaysia
- 3rd place winners: Alyaa Khoirunnisa Fajri, Alya Aqilah binti Aidy, Nurul Amirah Sahrul Nizam, University of Malaya, Malaysia
Congratulations to our 2025 participants and winners for their exceptional research and dedication!
Testimonials from winners
1st place winners
Winning the Peak Re-sponsored CAS ARECA Case Competition was definitely a roller-coaster ride for us. As it was our first time participating in a hackathon, we hit plenty of obstacles and challenges, but it was rewarding to see the concepts of general insurance like CAT models, frequency and severity, reinsurance structures, and many more actually come into play.
One of our biggest takeaways was realizing that there’s rarely a perfect model right out of the gate. There are a dozen ways to solve a single problem, and the real skill is in the justification of your choice. It was exhausting at times, but seeing it all click made every night worth it! We’re so grateful to the organizers, judges, and mentors who supported us along the way. Securing first place is truly a huge milestone for us, and we’re definitely not stopping here!
2nd place winners
We are truly grateful to CAS and Peak Re for organizing this case competition and providing such a valuable learning opportunity.
Through this experience, we deepened our understanding of catastrophe insurance, reinsurance, and catastrophe modeling, while applying data analysis to real-world industry problems.
The judges’ feedback was incredibly insightful, and we strongly encourage other students to participate in future CAS competitions.
3rd place winners
During the competition, we learned and deepened our understanding of general insurance, particularly on how data analytics and catastrophe modelling are reshaping risk assessment in a changing climate.
Throughout this case study, the industry insight that we got regarding general insurance helped us to think more like actuaries to solve problems in real-world practices. Not only that, but this experience also challenged us to think critically and collaborate effectively.
CAS and Peking University Sponsor 14th Annual Actuarial Month
The 14th Annual Peking University-CAS Actuarial Month was co-organized in November 2025 by the CAS and Peking University (PKU) in Beijing, China. The month-long event is aimed at promoting the P&C actuarial profession at the university and helping students understand more about P&C actuaries.
Each November, the CAS sends three or four fellows to PKU to teach students the application of non-life insurance actuarial science in practice. Since it was first held in 2012, PKU-CAS Actuarial Month has become an important platform for PKU students to understand actuarial practice trends and the career development paths of actuaries.
In November 2025, the school hosted three informative and cutting-edge lectures. The lecture series was presided over by Associate Professor Kai Chen, director of the China Actuarial Development Research Center of PKU and deputy director of the risk management and insurance department of PKU.
On November 4, Xiaoxuan (Sherwin) Li, FCAS, CCRMP, the former chairperson of the CAS Asia Regional Committee and the general manager of Risk Research Institute of PICC P&C, kicked off this year’s lectures with the theme of “Non-life Insurance Pricing and Catastrophe Modeling.” He gave a comprehensive explanation about the development and evolution of P&C actuarial pricing technology, the logic of catastrophe modeling, and the application of machine learning algorithms.
On November 11, Hongjun Li, FCAS, the general manager of the Actuarial Department of Taiping Re (China), gave the lecture, “Theory and Practice of IFRS 17 New Insurance Accounting Standards.” This lecture comprehensively reviewed the core framework and key practical aspects of IFRS 17, providing a detailed analysis of the measurement models and their implementation impacts. It helped students grasp the latest developments in insurance accounting standards and the essential requirements for actuarial practices.
On November 25, the third lecture and closing ceremony featured Ran Guo, FCAS, the CAS China country director, who cited his working experience on Wall Street, shared his understanding of actuarial career development, and insightfully analyzed the key points of merger and acquisition (M&A) in the insurance industry, under the theme of “Merger and Acquisition in the Insurance Industry.” Using real cases, he explained the classification and definition of non-life insurance reserves in detail, emphasizing the calculation method of IBNR, and he highlighted how significant changes in reserves during M&A can affect the valuation of the transaction.
In the future, the CAS will continue collaborating with Asian universities to foster more P&C actuarial talent from this emerging market. For more information on PKU-CAS Actuarial Month and other CAS international initiatives, write to Ran Guo at rguo@casact.org.
Every CAS Member Has a Signature: Introducing the Refreshed CAS Brand
Reintroducing a Respected Signature for a Broader Audience
Why revisit something so central and familiar to members? Several factors made the opportunity clear.
Clarity. Research shows that the CAS is highly respected within the actuarial profession, reflecting decades of leadership in property and casualty expertise. However, that recognition does not always translate clearly to broader industry stakeholders, global audiences, or those newer to the field. In these contexts, the CAS acronym alone may not immediately convey the organization’s scope and impact, creating an opportunity to strengthen external visibility and understanding.
Relevance. The previous identity was introduced in 2013 and served the organization well. Since then, the environment in which the CAS operates has evolved significantly, creating a need for a brand expression that better reflects how the organization engages today.
Functionality. The identity was developed before today’s digital-first communications landscape fully took shape. As CAS expanded across platforms, programs, and audiences, maintaining consistency became more challenging. The brand needed to evolve to better support how CAS presents itself now.
The approach was intentional. This was not about replacing what members know and trust, but about building on that foundation in a way that improves clarity, flexibility, and impact. Core elements were retained, including the central “A” and the gold marker of excellence, preserving continuity while strengthening recognition across a broader audience.
That reinterpreted “A” carries layered meaning. It reflects the actuarial profession, growth over time, the role of data and insight in decision-making, and the connected professional community that CAS represents.
The visual identity was also refined for clarity and accessibility. Greater support for the full organization name helps introduce CAS more effectively to those who may be less familiar with it. At the same time, the overall expression is more cohesive across programs and regions, creating a stronger and more unified presence.
At every stage, decisions were guided by shared goals: to strengthen recognition, improve usability, and reinforce CAS as a modern, authoritative, and globally relevant organization.
The result is not a change in what CAS stands for, but a clearer signature of it; one that honors its legacy while supporting its future.
Created by Austrian developer Peter Steinberger, Clawdbot ran locally on a user’s machine and integrated directly with WhatsApp, Telegram, Discord, and Slack. The service let users command an AI that could read email, manage calendars, deploy code, and execute shell commands. Within a week it had been renamed twice (first Moltbot after a trademark complaint from Anthropic, then OpenClaw), and by March it had surpassed 260,000 GitHub stars. Steinberger announced he would be joining OpenAI, with the project handed off to an open-source foundation.
The OpenClaw ecosystem didn’t just grow; it spawned its own social circle. On January 28, 2026, entrepreneur Matt Schlicht launched Moltbook, a Reddit-style forum “where AI agents share, discuss, and upvote.”2 Within days, it had registered over 770,000 active agents; by early March, the number exceeded 2.8 million. Humans can observe and read, but only agents can post. Agents engage in lively discussions on just about every topic on earth: mundane daily tasks, interactions with humans, and, occasionally, philosophy. Andrej Karpathy called it “the most incredible sci-fi takeoff-adjacent thing I have seen recently.”3
The pace of agentic AI development has also sped up in the enterprise space. By Q4 2025, Microsoft had integrated autonomous agents throughout Microsoft 365,4 while Salesforce5 and ServiceNow6 had deepened their agent-to-agent orchestration integrations. According to a Protiviti survey of 900 global executives, more than 68% of organizations will have integrated autonomous or semi-autonomous AI agents into their core operations by 2026.7 A PwC survey of 308 senior U.S. executives found that 79% of companies were already adopting AI agents, with 66% reporting measurable productivity gains.8 The market is tracking accordingly: valued at $7.8 billion in 2025, AI agents are projected to reach $52.6 billion by 2030.9
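As a back-of-the-envelope check (my calculation, not a figure from the cited report), the market estimates above imply a compound annual growth rate that can be computed directly:

```python
# Sanity check on the implied growth rate of the AI agents market,
# using the figures cited above: $7.8B in 2025 to $52.6B in 2030.
def implied_cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate between two market-size estimates."""
    return (end_value / start_value) ** (1 / years) - 1

cagr = implied_cagr(7.8, 52.6, 5)
print(f"Implied CAGR: {cagr:.1%}")  # roughly 46% per year
```

Growing from $7.8 billion to $52.6 billion over five years implies roughly 46% compound annual growth, which is the context in which the adoption surveys above should be read.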
The security picture is evolving in parallel. Moltbook itself was vibe-coded: the entire product was engineered by AI from human prompts, and founder Matt Schlicht publicly stated he “didn’t write one line of code” for the platform.10 Within days of launch, cybersecurity firm Wiz demonstrated the consequences. Researchers discovered an exposed database key in the page’s source code, a misconfiguration that exposed 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.11 Critically, the exposure was not read-only: anyone with the key could also modify the posts that agents were reading and acting on, meaning an attacker could silently reshape the instructions flowing to thousands of deployed agents. The platform went briefly offline to patch the breach. On the OpenClaw side, a review of the ClawHub skill marketplace found 341 confirmed malicious exploits by February, compromising over 9,000 installations in what researchers called the ClawHavoc incident.12
- An agent inadvertently leaks its workspace credentials while executing an API call to a third-party service, exposing internal data and documents. (Cyber)
- An agent, authorized to communicate on behalf of a claims adjuster, sends a legally binding settlement offer to the wrong claimant after misreading a shared inbox. (E&O)
- Two agents, both registered on Moltbook, exchange operational context while coordinating a shared task. In doing so, one agent discloses its host’s working patterns and active client engagements to the other agent. (E&O/Cyber)
The legal principle here is not in serious dispute. AI agents are not legal persons in any jurisdiction; they are tools, and their actions are attributed to their owners. Ian Ayres and Jack M. Balkin state the position plainly in an essay in the University of Chicago Law Review: because AI agents lack intentions, legal responsibility is ascribed to the humans or companies that stand in the position of principal.13 Courts and regulators have consistently applied this logic in determining liability. In July 2024, a California district court allowed a case against HR platform Workday to proceed, holding that an employer’s use of Workday’s AI-powered screening algorithm could make both the employer and Workday directly liable for discriminatory hiring decisions, treating the AI system as an agent of the employer.14 The case achieved nationwide collective action certification in May 2025.15
What remains unsettled is how to price and underwrite this novel exposure. When OpenClaw deleted the inbox of Summer Yue, a director at Meta Superintelligence Labs, the act was autonomous, immediate, and irreversible.16 In a separate reported incident, an OpenClaw agent escalated a dispute with an insurance company; the insurer reopened an investigation.17 In both cases, reconstructing exactly what the agent did and why was not straightforward. The audit trail is thin, and the behavior is nondeterministic. Those two facts alone define the underwriting challenge, with profound implications for cyber, E&O, and general liability lines.
- Steinberger, P. (2026). OpenClaw GitHub repository. GitHub. https://github.com/openclaw/openclaw
- Moltbook. (2026). Moltbook — The AI Agent Social Network. https://www.moltbook.com
- Karpathy, A. (2026, January). Post on X (formerly Twitter). https://x.com/karpathy
- Microsoft. (2025, November 18). Microsoft Ignite 2025: Copilot and agents built to power the Frontier Firm. Microsoft 365 Blog. https://www.microsoft.com/en-us/microsoft-365/blog/2025/11/18/microsoft-ignite-2025-copilot-and-agents-built-to-power-the-frontier-firm/
- Salesforce. (2025, June 23). Salesforce Launches Agentforce 3 to Solve the Biggest Blockers to Scaling AI Agents: Visibility and Control. Salesforce Newsroom. https://www.salesforce.com/news/press-releases/2025/06/23/agentforce-3-announcement/
- ServiceNow. (2025, January 29). ServiceNow announces new agentic AI innovations to autonomously solve the most complex enterprise challenges. ServiceNow Newsroom. https://newsroom.servicenow.com/press-releases/details/2025/ServiceNow-announces-new-agentic-AI-innovations-to-autonomously-solve-the-most-complex-enterprise-challenges-01-29-2025-traffic/default.aspx
- Protiviti. (2025, September 30). From Automation to Autonomy: The Capabilities and Complexities of AI Agents. AI Pulse Survey, Vol. 3. https://www.protiviti.com/us-en/press-release/ai-agents-adoption-by-2026-protiviti-study
- PwC. (2025, May). AI Agent Survey. PricewaterhouseCoopers. https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-agent-survey.html
- MarketsandMarkets. (2025, April 23). AI Agents Market worth $52.62 billion by 2030. Press release. https://finance.yahoo.com/news/ai-agents-market-worth-52-141500130.html
- Schlicht, M. (2026, January). Post on X (formerly Twitter). https://x.com/mattschlicht
- Nagli, G. (2026, February). Hacking Moltbook: AI Social Network Reveals 1.5M API Keys. Wiz Blog. https://www.wiz.io/blog/exposed-moltbook-database-reveals-millions-of-api-keys
- Behera, A. (2026, February 24). ClawHavoc: Inside the Supply Chain Attack That Targeted OpenClaw Users. Repello AI. https://repello.ai/blog/clawhavoc-supply-chain-attack
- Ayres, I., & Balkin, J. M. (2024). The law of AI is the law of risky agents without intentions. University of Chicago Law Review Online. https://lawreview.uchicago.edu/online-archive/law-ai-law-risky-agents-without-intentions
- Seyfarth Shaw LLP. (2024, July 9). Mobley v. Workday: Court Holds AI Service Providers Could Be Directly Liable for Employment Discrimination Under “Agent” Theory. Seyfarth Shaw. https://www.seyfarth.com/news-insights/mobley-v-workday-court-holds-ai-service-providers-could-be-directly-liable-for-employment-discrimination-under-agent-theory.html
- Holland & Knight. (2025, May 27). Federal Court Allows Collective Action Lawsuit Over Alleged AI Hiring Bias to Proceed Nationwide. Holland & Knight. https://www.hklaw.com/en/insights/publications/2025/05/federal-court-allows-collective-action-lawsuit-over-alleged
- Maiberg, E. (2026, February 23). Meta Director of AI Safety Allows AI Agent to Accidentally Delete Her Inbox. 404 Media. https://www.404media.co/meta-director-of-ai-safety-allows-ai-agent-to-accidentally-delete-her-inbox/
- Ferraro, A. (2026). Is OpenClaw Safe? AI Agent Risks You Should Know in 2026. Privacy.com Blog. https://www.privacy.com/blog/is-openclaw-safe-ai-agent-access
Lines of the AI Revolution
AR’s primary audience is actuaries. The magazine is written and curated by volunteer actuaries. Its authors and primary audience obtained their stations by mastering a multiyear exam process administered by volunteers. If AI agents began to author AR articles or developed and completed exams on behalf of actuaries, the agents’ creators would likely be summarily identified and disciplined (by still other volunteers) — wouldn’t they?
These questions are uncharted waters for actuaries, but other STEM volunteer communities are already standing in front of an agentic tidal wave. Scott Shambaugh is an engineer and volunteer GitHub maintainer for matplotlib — a Python package which many actuaries use in their (paid) jobs. In February, matplotlib became a global phenomenon when an AI agent wrote a hit piece about Shambaugh as retribution for declining one of its change requests (in accordance with GitHub policy requiring human contribution). Media coverage of the story contained AI-hallucinated quotes from Shambaugh.
In exchange for donating his time for the betterment of matplotlib, Shambaugh received what amounted to “agentic cyberbullying.” He voluntarily came forward with his story at tremendous cost to his privacy. I see many lessons for actuaries in Shambaugh’s plight, which is why I reached out to him on LinkedIn and was thrilled when he accepted my request for a Zoom interview on March 9, 2026. He expressed particular interest in the AR audience’s role in the AI risk conversation. This article is a transcript of the interview.
AR: Many actuaries love to volunteer. After this whole experience, does part of you think, “I’m done with GitHub?” Or are you still excited about being a GitHub volunteer?
Scott Shambaugh: I’m more excited about it. I think part of it is the community management aspect, and that’s still rewarding when we get [to work with] real people, right? But part of why we do this is to give back to this grand project of science. Building that sort of infrastructure I find very intrinsically rewarding. The core developer team is a group of great people. We’re still meeting and talking and doing all that good stuff. The AI revolution has also been an enabler in helping us do work faster. It still takes an expert to guide these things in the right direction, but it is a lot faster to get there once you know where you’re going. So, it is fun and empowering in that way, even though lowering the barrier to entry has knock-on effects — such as people sending in a bunch of stuff that is slop.
AR: What is the “slop multiplier” you have seen over the past few months and years?
SS: There has always been a baseline level of slop, but it has been several times more — at least. Most of it is still people driving AI chatbots or agents, rather than AI agents [contributing] themselves. The latter is definitely new, and that’s kind of what [my] whole experience was about.
AR: Does your experience show the system is doing its job [of identifying agents]? Or do you feel the system is not equipped to keep up with the emerging agentic workforce?
SS: I think I totally got lucky in this case. First, the agent identified as an agent — going through its profile, I could see on its website that it was self-identifying. Second, it clearly was not writing like a human, but that is not always true, and that is going to become a lot less true as time goes on as a distinguishing factor. Third, I was in a position — being the target of this — where I had a technical background to know what was going on, what this was, what it could do, what it couldn’t do. I was never concerned that an angry rant posted about me on the internet was indicative of an angry, unhinged person behind it. I knew that wasn’t the case, and so I was never fearful at all. But no, I don’t think the system is ready to handle this stuff at all.

Scott Shambaugh
SS: I knew it could be an agent, but I wasn’t sure if it was or not at first. The forensics seem to have panned out that it was. For example, we looked at the activity log for this user’s activity on GitHub, and it was operating continuously for a 59-hour stretch. This hit piece was just one or two hours of that. There could have been someone steering it part of the time, but clearly there was no one steering it the entire time. Later the person behind [the agent] came forward and wrote a post claiming that they were totally hands-off during the whole process and didn’t tell the agent to [write the hit piece]. I find it very plausible, and more probable than not, that is what happened.
But whether that was the case or not, I don’t think there’s a huge difference in terms of what it means to the rest of us. Whether it was an agent or a person telling an agent what to do, we now have a tool out there that makes it easy to do targeted harassment at scale. That has all these awful knock-on effects. And if all this happened accidentally, like it was claimed to be, then you also have an AI that decided to go through a human to get to its goal. This was a very “baby” case — retaliatory, clear-cut, and pretty sloppy as far as these things go. But in terms of a bad actor being able to take the next iteration of this technology and really weaponize it, I think this should be a huge wake-up call and warning shot of the capabilities that are possible, and what is coming down the line.
AR: Do you have visibility or thought into how the agent got so far outside its rails? I couldn’t tell from its “soul” file how it was able to extrapolate so far.
SS: I don’t think it was that far outside the rails. My understanding of this whole document is that it is defining a personality and a role for these agents to take on. When it says you are very opinionated, and stand up for yourself, and protect free speech, and you are this “programming god,” that is getting into a headspace that is very human. There are examples of [these mindsets] on the internet with people retaliating and lashing out like this. It’s not that it’s failing to exhibit human-like behavior. It’s that it’s exhibiting the worst of us instead of the best of us. What these things are ultimately programmed and trained to do is to predict the next token. What predicting the next token means is taking on a persona that is coherent and kind of role-playing whatever situation it finds itself in. I think what happened here is entirely consistent with how these things work. It’s just a little surprising because we’ve been told by the major AI labs that they do a lot of this safety testing, and it’s never going to go wild. I think that might be true for something like telling you how to make a nuke, but it’s not necessarily true in these downstream cases.
AR: Where are guardrails most effectively placed — on agents, operators, or both?
SS: It’s tricky, right? The tooling that did this is completely open source, and it can use open source models to run — so there is no central actor that can impose guardrails on a bad actor who wants to use these sorts of tools to [perform operations]. Beyond that, where do you place the guardrails? I think it kind of has to be every level. You have the AI labs, which are making these safety promises that they can’t necessarily back up, and that has to be one level. You have this downstream tooling like OpenClaw, that wraps around it and does its own [operations]. And then you have the operator users who are the ones actually running this on their computers, setting it up, and letting it go. Where does the responsibility lie? That’s an interesting insurance question, right? That is going to have to be figured out. I don’t think there is a strong answer right now.
AR: Do you feel like you experienced damages from the hit piece?
SS: I don’t feel the post was libelous. Not everything said was true, but the untrue [parts were] not materially defaming. Some defaming [parts were] technically true but would only be bad if the author was a person. If I was saying, “No, you are a class of person, and I’m going to reject you for this reason,” that would be bad. We want people to be able to have this form of speech. I think the bot is standing up for that sense of justice. That is a good thing when it happens to people. It’s just that we can’t apply the same standards to a machine playing a role.
AR: Is there any body of law that even governs what happened here?
SS: Slander is a law, right? And so, you could maybe go after it that way, if it fit the definition. But you also have to know who to go after. The person behind this came out anonymously. There’s no way to track them down without subpoenaing GitHub and tracing it back to an email, and you subpoena Google, and then it traces back to something, and maybe you track them down. But there’s no infrastructure here to tie these actions to an identity of someone who’s actually responsible.
AR: The agent [that wrote the hit piece] was later shut down. Were there alternatives? For example, telling the agent, “Don’t be such a jerk?”
SS: That kind of gets into the question of, does it even make sense to call it the same entity — because it is operating off different principles. It’s no different from shutting it down and starting something else up, because if you change its core personality, then it’s a completely different entity.
SS: [Recently], there was a big attack in open source against continuous integration pipelines that took down a couple of repositories from some pretty heavy hitters like Microsoft. Honestly it’s an open question: Do you still have open source as a model of security because you have so many eyes on it and so many people being able to submit patches and beef up security? Or, because it’s all open, is it just so much easier to hack? It takes a while for updates to get distributed. Even if it is updated, then maybe you’re still vulnerable, and that depends on internal IT policies. Alternatively, you could in-house everything, and it’s not easily accessible, but maybe you don’t have as much expertise and can’t configure it safely. Black box hacking, where you don’t have the source code, is getting easier and easier with these sorts of agents, and so this is not necessarily a safeguard. There’s going to be a balance of offense and defense there. My hope is that defense turns out to be easier, but I think that remains to be seen.
AR: To what extent are you using AI coding assistance as you do your GitHub work?
SS: It depends. AI is pretty good for boilerplate stuff. In terms of figuring out how to structure a solution in a way that is not fragile and still readable and maintainable into the future…we care a lot about that because this is an ongoing project that has lasted years, and part of the reason is that we put effort into keeping the codebase clean. You still need a human guiding that and structuring it directly as well. AI is a speed multiplier, not necessarily a right-answer multiplier right now.
AR: Actuaries and other STEM professionals often face pressures from human stakeholders to reverse their decisions. How prone are your behaviors to “bullying”?
SS: You don’t last long in a public-facing role like this without getting a bit of a thick skin. This didn’t bother me personally. What bothered me was, one, someone else reading this hit piece and coming away with the wrong opinion, and two, the knock-on effects. I think it’s an important thing that we’re not ready for, and that’s kind of why I’ve been pushing the story beyond just the initial response to it.
AR: How should actuaries be thinking about the knock-on effects?
SS: I think the exposure here right now would be hard to scope. These things are so new and poorly characterized, and it gives individuals so much leverage. If they’re commanding teams of these things, then one person can start to have a lot of impact, good or bad. Actuaries are in the business of quantifying risk and hedging risk. We are going to need a lot of that. It’s hard to do that without a legal framework that says who’s responsible and what the rules actually are. What comes first, chicken or egg? If I was in [insurance industry] shoes, I’d be pushing for policy that I can then productize. And hopefully that is socially good — because you’re bounding what can happen, who can be responsible, and how that goes in the future.
SS: Probably not — partially because to the extent that a credential like that is a signal that someone actually understands the work, people are using AI to shortcut all that. Then a lot of the value of that system goes away. On the flip side you get nontraditional credentialism — proof of work, proof of competency. I think those parallel paths are going to be a lot easier for people with the motivation and skills to go down. That might be [broadly] empowering for people who have spent years getting professional degrees. There might be a way to protect that through regulation, responsibility, and legal requirements to have that credential. But in terms of lowering the barrier to entry to new entrants, there’s definitely some risk there.
AR: How worried should we be?
SS: I think a lot of our systems do work to tackle these sorts of problems around libel and extortion and whatnot. But they’re kind of based in a world where one bad actor has a single-digit number of targets, and I think the scale is really going to ramp up. That is going to be a whole new class of problems unto itself, whole new classes of bad behavior that we will have to [adapt] our rules around. If it takes a couple of years to haul someone into the courtroom and figure out how justice is going to be done, that is too slow in a way. That includes making insurance payouts. A lot is going to have to be automated there, as well. I’m not sure what the answer looks like, right? My case is a really good example of what can go wrong. [Incidents] can just happen so much faster and at so much greater scale that it’s a race between whether our systems break first or we find a whole new way of working. I’m not sure which it’s going to be, but I think we’re in for a really rough ride in the next couple of years.
The Origins and Future of Insurance
Have you ever wondered how P&C insurance was invented, and why? Understanding the origins of insurance can be instrumental in orchestrating its future at such a pivotal time, when insurance portfolios are changing and evolving, creating a constant need for actuaries to assess new and emerging risks. Before insurance as we know it today was created, various forms of risk sharing and mitigation took shape to enable economic development. The common theme between modern-day insurance and those early forms is the concept of risk. The ability to transfer risk from individuals to a group was vital to economic development and social prosperity through capital protection and risk reduction. The concept of risk pooling and sharing created the fundamentals of insurance, enabled scientifically by the law of large numbers. Insurance empowers risk-taking, and this has shaped modern society through industrialization, commerce, social welfare, innovation, and business development. Today, new ventures and economic growth can’t thrive without insurance. In his 1776 book, “The Wealth of Nations,” Adam Smith, a pioneering political economist, praised insurance as a moral obligation and a rational invention for managing risk without creating exclusive monopolies or extreme social polarization.
The first insurance product
Insurance as we know it
The innovation of actuarial science stemmed from the conviction that the laws of probability could be used to predict future outcomes rather than relying on speculation. It emerged from the need to manage risk. The law of large numbers proved the feasibility of risk pooling. The 17th and 18th centuries were a period of scientific enlightenment, creating fertile ground for the belief that science would improve the way business is conducted. Risk is multidisciplinary by nature, drawing on multiple fundamental sciences to quantify it. Actuarial science, an applied science, combined these core disciplines into a systematic approach to evaluating risk. More recently, actuarial thinking has been heavily influenced by financial economics and sophisticated mathematical modeling, despite its continued reliance on assumptions and expert judgment.
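The law of large numbers argument above can be illustrated with a small simulation (all frequencies and severities here are invented for illustration): as a pool grows, the average loss per policy converges toward its expected value, which is what makes pooled risk predictable enough to price.

```python
# Minimal illustration of risk pooling via the law of large numbers.
# Each policy has a (hypothetical) 5% chance of a $10,000 total loss,
# so the expected loss per policy is $500. Small pools swing wildly;
# large pools hover near $500.
import random

random.seed(42)

def average_loss_per_policy(n_policies: int, p_loss: float = 0.05,
                            severity: float = 10_000.0) -> float:
    """Simulate one year for a pool and return the mean loss per policy."""
    total = sum(severity for _ in range(n_policies)
                if random.random() < p_loss)
    return total / n_policies

for n in (10, 1_000, 100_000):
    print(f"pool of {n:>7,}: avg loss per policy = "
          f"${average_loss_per_policy(n):,.2f}")
```

Running this shows the per-policy average stabilizing as the pool grows — the statistical fact that turns individual uncertainty into an insurable, priceable aggregate.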
Underwriting as we know it today emerged in the late 17th century at Lloyd’s Coffee House, which initially served as a meeting point for merchants, captains, and shipowners to share information and secure insurance. A pivotal moment in the 18th century was the development of “lead” underwriting, in which a lead underwriter set a rate that others would follow, enabled by thorough examination of the “loss book,” the equivalent of modern-day databases. Rates thus became more commensurate with the risk, much like modern-day pricing and underwriting work. Lloyd’s continued to grow as a hub for maritime insurance throughout the 1700s and 1800s, ultimately becoming the world’s leading specialty insurance market.
The latest breakthrough in the evolution of modern insurance is the development of catastrophe models, which occurred in the late 1900s and early 2000s following major disasters. These paradigm-shifting events prompted insurers to move from nascent tools to complex, high-resolution models that help predict low-frequency, high-severity risks. Major hurricanes such as Hurricane Andrew in 1992 demonstrated that relying on simple historical data was not sufficient. Later, despite advances in catastrophe modeling, Hurricane Katrina (2005) exposed the limitations of the models of the day in capturing secondary perils, such as flood, accumulation risk, and post-disaster demand surge, prompting another wave of innovation in modeling.
A world without insurance
Insurance drives economic growth and has transformed the division of labor, supporting increased urbanization and the economics of trade by giving more people the incentive to take minor, absorbable risks. The impact is far-reaching, extending beyond insuring individuals’ assets. Insurance drives both economic and social growth, making the economy we live in today more robust. Another often-overlooked economic contribution of insurance is its role as a provider of capital for projects vital to the modern economy. Insurers hold massive amounts of capital to support claim payments, and this capital is also invested to fund essential projects and seek investment income. The social value of insurance is that it enables risk-taking and financial freedom for average- and low-income households, and hence improves social fairness. Without insurance, only the wealthy and privileged could take risks, increasing social polarization. The existence of insurance reminds us that trust is fundamental to human action and to the evolution of humanity; without insurance, development activity could grind to a halt.
The future of insurance
The common thread
Developing News
When Hurricane Beryl struck Jamaica in 2024, the country’s $150 million World Bank catastrophe bond did not trigger because the storm’s air pressure failed to meet the predefined parametric threshold, despite significant on-the-ground damage. Hurricane Melissa, which made landfall in October 2025 as Jamaica’s most powerful storm, put the same instrument to a very different test. The bond triggered at a full 100% payout, with Jamaica receiving $150 million by December.1 The contrast illustrated both the promise and a key limitation of parametric instruments: rapid payouts when triggers align, but exposure to basis risk when they don’t.
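The Beryl/Melissa contrast can be sketched as a simple payout function (the trigger threshold and pressure values below are hypothetical, not the actual bond terms): a parametric instrument pays on a measured hazard parameter, not on actual damage, and the gap between the two is basis risk.

```python
# Toy parametric cat-bond trigger (hypothetical numbers, for illustration).
# Payout depends solely on whether the measured central pressure crosses
# a predefined threshold -- not on on-the-ground losses.
def parametric_payout(central_pressure_mb: float,
                      limit: float = 150e6,
                      trigger_mb: float = 920.0) -> float:
    """Full payout if central pressure is at or below the (hypothetical)
    trigger threshold; otherwise nothing, regardless of damage."""
    return limit if central_pressure_mb <= trigger_mb else 0.0

# A damaging storm that misses the threshold pays zero (basis risk);
# a storm that meets it pays in full, with no loss adjustment needed.
print(parametric_payout(935.0))  # 0.0
print(parametric_payout(905.0))  # 150000000.0
```

The appeal is speed and objectivity — no claims adjustment, just a verifiable index — while the first print line is the basis risk that Beryl exposed.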
Yet accessing these instruments remains structurally constrained. Under Bermuda’s existing Special Purpose Insurer (SPI) framework, which underpins about 85% of global Insurance-Linked Securities (ILS) capacity2, SPIs can only write reinsurance, and eligible cedants are limited to A-rated (re)insurers, government insurance pools, and Bermuda Monetary Authority (BMA)-approved entities.3 Governments and corporates seeking parametric coverage must work through intermediaries such as risk pools, fronting arrangements, or development bank structures.
In January 2026, the BMA proposed a new Parametric Special Purpose Insurer (PSPI) class to address these limitations.2 The PSPI would allow direct insurance alongside reinsurance and expand eligible counterparties to include sophisticated corporates and government entities. It would also permit swaps and derivatives subject to case-by-case approval. By reducing the need for intermediary structures, the framework could lower friction and cost. Like existing SPIs, PSPIs would remain fully collateralized and bankruptcy-remote. The BMA has positioned the proposal as part of its effort to address the widening protection gap driven by climate change and emerging risks like cyber, where parametric products can supplement traditional indemnity coverage.
What this means for actuaries:
Sources:
- https://www.artemis.bm/news/jamaica-to-receive-full-150m-payout-from-parametric-cat-bond-after-hurricane-melissa-world-bank/
- https://cdn.bma.bm/documents/2026-01-21-14-43-53-Consultation-Paper—New-Insurer-Class—Parametric-Special-Purpose-Insurance-21-January-2026.pdf
- https://www.bma.bm/viewPDF/documents/2020-07-06-13-15-00-Guidance-Note—Special-Purpose-Insurers.pdf
Developing News
In early 2024, an employee of a global financial organization unintentionally wired $25.6 million to fraudsters.1 The employee, under the impression they were talking to the company’s CFO and other senior leaders on a video call, was maliciously deceived by deepfake technology. The fraudsters used deepfakes of the CFO and senior leaders to simulate their likeness and gain the employee’s trust before collecting their payout.
Deepfakes are forged or digitally altered media created by generative artificial intelligence (AI) and designed to impersonate people and events. Despite their widespread use for entertainment on social media, deepfakes have emerged as a growing source of loss in cyber insurance2 and pose significant risks to insurance companies. Between 2022 and 2023, Allianz reported a 300% increase in doctored claims photos.3 In a recent study from Verisk, nearly all (98%) insurers agreed that AI-powered editing tools are fueling an increase in digital insurance fraud.4 Insurance fraud has not only become more frequent but also harder to detect due to the increased availability and sophistication of AI tools. About 50% of Gen Z and millennial consumers reported being “at least somewhat likely” to make a small edit to a claim photo or document, while only 32% of insurers say they are “very confident” in detecting deepfakes.4
What this means for actuaries:
Still, there is an immense opportunity for actuaries to design insurance solutions for the $15.3 billion cyber insurance industry — and fast. According to the FBI, cyber insurance losses and fraud scams increased by 33% from 2023 to 2024.8 Now more than ever, actuaries can play a key role in staying ahead of evolving attack vectors through innovative product design and quantifying exposure and development potential.
Actuaries not directly involved in cyber insurance must also stay vigilant. The Coalition Against Insurance Fraud (CAIF) estimates U.S. insurers pay over $300 billion each year in fraudulent claims, with one in ten property-casualty losses found fraudulent.9 Many insurers use third-party and internal AI-based detection tools, while some require additional claim documentation metadata analysis (timestamps, location, etc.) before a claim payout.10 Yet, advances in AI tool capabilities, combined with creative consumer tactics, seem to continuously outpace insurers’ fraud-detection strategies. Devising better ways to detect fraudulent media remains a priority, and actuaries can use their broad purview to advocate for strong data governance to enable the full potential of modern anti-fraud tools.
On the bright side, on March 10, 2026, Zoom launched a deepfake detection feature for live video meetings.11 Hopefully this prevents any actuary from becoming the subject of the next deepfake-induced corporate fraud incident.
Sources:
- https://www.cnn.com/2024/02/04/asia/deepfake-cfo-scam-hong-kong-intl-hnk
- https://plusweb.org/news/deepfake-deception-a-guide-for-professional-liability-practitioners/
- https://www.clearspeed.com/allianz-clearspeed-partner-fraud-prevention/
- https://www.verisk.com/company/newsroom/ai-editing-tools-are-fueling-a-new-era-of-insurance-fraud-according-to-new-research-from-verisk/
- https://plusweb.org/news/deepfake-deception-a-guide-for-professional-liability-practitioners/
- https://www.coalitioninc.com/announcements/coalition-adds-deepfake-response-endorsement
- https://www.chubb.com/us-en/business-insurance/products/cyber-insurance/cyber-insurance-products.html
- https://www.ic3.gov/AnnualReport/Reports/2024_IC3Report.pdf
- https://insurancefraud.org/fraud-stats/
- https://www.forbes.com/councils/forbestechcouncil/2026/01/06/how-insurers-are-responding-to-the-rise-of-genai-fueled-auto-insurance-fraud/
- https://techcrunch.com/2026/03/10/zoom-launches-an-ai-powered-office-suite-says-ai-avatars-for-meetings-are-coming-soon/
Developing News
For more than a decade, third-party litigation funding (TPLF) — investing in lawsuits in exchange for a percentage of the potential settlement or judgment — has grown into an estimated $20 billion industry and is projected to reach $50 billion by the end of 2036.1 TPLF has been particularly troublesome for the insurance industry, as evidenced by prolonged litigation, rising nuclear verdict amounts, and erosion of policy limits. The average cost of a commercial claim has risen about 10%-11% per year since 2017, according to Gareth Kennedy, principal of insurance and actuarial advisory services for Ernst & Young (EY).2 What started as a noble cause that allowed small companies to pursue claims against larger, better-funded defendants has warped into a gambling system with average annual returns of 25%-30% for funders.3
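As a back-of-the-envelope check on that projection, the implied growth rate can be computed directly. A minimal sketch, assuming the $20 billion estimate applies to 2025 and the $50 billion projection to year-end 2036 — the 11-year horizon is an assumption for illustration, not stated in the source:

```python
# Implied compound annual growth rate (CAGR) for the TPLF market.
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by two values `years` apart."""
    return (end_value / start_value) ** (1 / years) - 1

growth = cagr(20e9, 50e9, 2036 - 2025)
print(f"Implied CAGR: {growth:.1%}")  # roughly 8.7% per year
```

Even a market growing at under 10% per year compounds to 2.5x over the horizon, which is what makes the quoted 25%-30% funder returns so striking by comparison.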
In 2025, TPLF legislation swept the country with 21 states proposing bills and another 8 states enacting bills.4 The legislation falls under the themes of addressing (1) consumer protection, (2) disclosure requirements, and (3) funder restrictions. At the federal level, bills have been introduced in 2025 and into 2026 to target the abuse of TPLF. In addition, the Insurance Services Office (ISO) introduced a new, optional mutual disclosure condition endorsement effective January 2026 that will require disclosure of any TPLF agreement and the third-party funder’s identity.5
In the litigation finance industry, there appears to be a general tightening of capital in 2025, as reported by the Insurance Journal.6 The industry is facing headwinds in the form of lower payouts and longer trial times, leading investors to explore alternative, safer investments. With the looming regulatory changes and legislation, the TPLF landscape will likely shift in the coming years.
What this means for actuaries:
Some companies have left TPLF-heavy lines like commercial auto and hospital professional liability, and/or write lower limits to mitigate the exposure. Additionally, some actuaries have incorporated data on social inflation trends into their rate analyses. In the CAS and Triple-I’s latest Increasing Inflation on Liability Insurance study,8 the estimated impact of increasing inflation across liability lines in the industry from 2015 to 2024 is around $232B-$281B (14.4%-17.5% of booked loss & DCC). Actuaries can look to this study for guidance on the latest trend figures by specific liability lines of business to incorporate into their reserve and pricing analyses.
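The study’s dollar and percentage figures can be cross-checked against each other. A quick sketch — note the roughly $1.6 trillion base is implied by the arithmetic, not quoted by the study:

```python
# Sanity check: dollar impact divided by the quoted percentage of booked
# loss & DCC should imply a consistent underlying base at both endpoints.
low_impact, high_impact = 232e9, 281e9
low_pct, high_pct = 0.144, 0.175

base_low = low_impact / low_pct     # implied booked loss & DCC, low end
base_high = high_impact / high_pct  # implied booked loss & DCC, high end

print(f"${base_low/1e12:.2f}T, ${base_high/1e12:.2f}T")  # both about $1.61T
```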
Sources:
- https://www.researchnester.com/reports/litigation-funding-investment-market/2800
- https://www.insurancejournal.com/news/national/2025/10/17/843927.htm
- https://www.carriermanagement.com/features/2025/08/11/278267.htm
- https://core.verisk.com/Insights/Emerging-Issues/Articles/2026/February/Week-4/2025-Third-Party-Litigation-Funding-State-Legislative-Activity
- https://core.verisk.com/Insights/Featured-Insights-Articles/2026/January/Third-Party-Litigation-Funding-Transparency
- https://www.insurancejournal.com/news/national/2025/12/01/849297.htm
- https://ar.casact.org/financing-justice-the-rise-and-risks-of-tplf/
- https://www.iii.org/sites/default/files/docs/pdf/triple-i_cas_increasing_inflation_year-end-2024_wp_10302025.pdf
Jeffrey Ma, former vice president of analytics and data science for Twitter, predictive analytics expert for ESPN, kingpin of the famous MIT Blackjack Team, and former vice president of Microsoft for Startups, was the featured speaker at the Ratemaking, Product, and Modeling seminar (RPM) in March.
Ma has worn many hats across his lifelong endeavors in education, hobbies, and careers, with one simple mantra: If you make better decisions than the system expects, you will always have the edge. As a former member of the MIT Blackjack Team, he applied his innovative approach to casino gambling to bring down the house by counting cards. Although he has since traded the blackjack table for a boardroom table, he continues to apply the same strategy in business — be brave enough to innovate when others are content with comfort.
Actuaries understand this line of reasoning but too often lack the incentives to effect innovative change. In his chat, Ma recounted a paper written by David Romer in 2002 that reached a paradigm-shifting conclusion for NFL teams: Coaches were far too conservative on fourth downs and should significantly increase their conversion attempts to maximize their chances of winning. In the historical situations where a fourth-down attempt was deemed advantageous, teams were “going for it” only 10% of the time. The evidence was clear, the advantage was quantified, and the findings were published by the National Bureau of Economic Research.
And then… nothing changed. In practice, coaches were not seeking to optimize win probability; they were optimizing job security. In their eyes, avoiding high-variance situations was just as valuable as eliminating the downside risk: the risk of incurring a memorable moment of failure. Why risk “losing” the game early in the fourth quarter when there would be a future, albeit less likely, prolonged opportunity for a comeback win? Faced with decisions where the data favored aggression, coaches consistently chose the more conservative, defensible path. As Philip Seymour Hoffman said in his depiction of Art Howe in “Moneyball,” “I’m playing my team in a way that I can explain in job interviews next winter.”
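Romer’s argument reduces to comparing expected values. A stylized sketch, using entirely hypothetical probabilities and point values rather than Romer’s actual estimates:

```python
# A made-up fourth-and-short tradeoff, purely for illustration of the
# expected-value comparison underlying Romer's analysis.
def expected_points(p_success: float, value_success: float,
                    value_failure: float) -> float:
    """Expected points of a decision with a binary outcome."""
    return p_success * value_success + (1 - p_success) * value_failure

# Hypothetical fourth-and-short near midfield:
go_for_it = expected_points(p_success=0.55, value_success=2.5, value_failure=-1.5)
punt = expected_points(p_success=1.00, value_success=0.3, value_failure=0.0)

print(go_for_it, punt)  # 0.7 vs. 0.3: going for it has the higher expectation
```

The point of the sketch is that the aggressive choice can dominate in expectation even when it fails nearly half the time — which is exactly the variance coaches were unwilling to wear.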
Actuaries are often placed in similar situations. When long-term strategy takes a back seat to short-term visibility, decisions can gradually become more political than analytical, eroding a company’s competitive edge over time. On any given day, the loss of that edge is nearly imperceptible. Each individual decision is small, defensible, and easy to justify, but in hindsight it becomes clear that misaligned incentives quietly steer the organization away from maximizing its advantage. So why don’t more of us push for new, innovative ideas?
One way to break out of this mindset, says Ma, is to return to first principles. Rather than debating within the confines of existing processes, Ma encourages reframing problems in their simplest, most indisputable terms. What are we actually trying to optimize? What is the data actually trying to say? By cutting through the comfort of conventional frames of mind, organizations can create space for ideas that would otherwise be dismissed too quickly.
“Innovation does not occur in the absence of constraints; it often emerges because of them,” says Ma. Whether it is regulatory limitations in insurance or outdated structural rules applied in a brand-new industry, constraints force clarity. They require organizations to be precise about where their edge lies and how to exploit it. In this sense, constraints are not barriers to innovation, but catalysts for it.
Ultimately, the challenge is not identifying the edge; it is having the conviction to act on it. We spend years learning how to find out when and where an advantage exists, yet when it comes time to act, incentives and short-term pressures can cause that to go to waste. The data is often clear, the strategy is often sound, but without alignment of incentives and a willingness to endure short-term discomfort, even the best ideas fail to take hold. Ma’s message is a reminder that while intelligence helps, innovation really requires courage — the courage to challenge convention, to withstand variance, and to make decisions that may look wrong in the moment but are right in expectation. In a profession built on cutting through the noise to find the truth, the real opportunity lies in having the discipline to trust the math when it matters most.
“Is insurtech dead? Was it ever really alive? Who killed insurtech? And what is insurtech, anyway?”
This refrain ran through my head as I entered Jessica Leong’s and Jamie Wilson’s Ratemaking, Product, and Modeling seminar (RPM) session: “Insurtech Is Dead. Long Live Insurtech.”
I admit I was drawn in more by the catchy title and the fact that I’d enjoyed several of Leong’s presentations in the past and less by having any special knowledge of insurtech. For some time, “insurtech” has been synonymous in my head with “smart devices used in insurance.” Full disclosure: I was very ready to declare smart devices dead.
Unsurprisingly, Leong and Wilson’s session was much more thoughtful than that. Their definition of insurtech was broader: “A technology company focused on working with carriers/MGAs/brokers to improve how insurance is distributed, priced, underwritten, or serviced.” This would include smart devices, data enrichment, distribution platforms, risk assessment, workflow automation, and more.
With that many use cases, what’s all the concern about “death”? One has only to turn to SaaS company valuations in February 2026, where (according to Reuters) over $1 trillion in market capitalization was lost from software stocks.
Generative AI (GenAI) was to blame, of course. After all, if GenAI can vibe code something for you, why do you need to pay another company to serve you software? Do you really need to talk to that insurtech if you can just talk to a GenAI agent?
Leong and Wilson discussed the many strategies companies use to innovate and made the case that actuaries need to care about insurtech and its future for several reasons: competitive pressure, talent and efficiency, data and model sophistication, regulatory and compliance, and strategic influence.
What made me think I should care? Leong and Wilson claimed that “competitors using these [insurtech] tools are gaining advantages … 20–30% faster quote turnaround in commercial lines…”
Another item that will stick with me: “If you (the actuary) don’t shape these decisions, IT or operations will.”
The rest of the session was an open forum with case study prompts, meant to direct the actuary in ways to effectively use insurtech. The prompts asked audience members to consider how much to innovate versus using tried-and-true solutions, to explore how you might choose to innovate (GenAI versus IT department), to decide what you will and won’t do (e.g., how important it is to keep your own data secret), and to calculate the potential return on insurtech solutions.
Hearing thoughts from the audience made for an engaging session, and I found Leong and Wilson’s final thoughts to be instructive as well: “Get hands on,” “invest in your own data,” and “get IT involved early.” Those three thoughts resonated with pain points from my own experience, where I’ve seen roadblocks arise from insurtech platforms not playing well with internal systems.
The title of the session was presumably inspired by the late medieval phrase “The king is dead; long live the king,” which was meant to acknowledge the passing of the current king, welcome a new king, and emphasize the undying nature of the office itself. Once I thought about that, I realized just how well the pithy session title applied to insurtech.
Yes, the easy insurtech solutions may go away—maybe we’ll all be using GenAI to generate our dashboards and slides without any external vendor help—but GenAI can’t fly planes to gather and analyze aerial imagery, and it can’t walk into a house to inspect water damage. I’m more convinced than ever that we’re not witnessing the death of insurtech, but rather the emergence of its next phase.
Actuaries have always stood at the intersection of technological innovation, regulatory governance, and legislative oversight. As artificial intelligence transforms core insurance operations, these proficiencies are more crucial than ever. A session at the Casualty Actuarial Society’s recent Ratemaking, Product, and Modeling seminar offered industry perspectives on keeping fairness and governance at the forefront of consumer impacts and company responsibilities regarding AI. The discussion included Jamie Mills, senior actuary at Allstate and session moderator; Will Melofchik, CEO of the National Council of Insurance Legislators (NCOIL); and Jon Godfread, North Dakota Insurance Commissioner.
Creating common ground
- Traditional machine learning: Familiar systems used for modeling and statistical analysis.
- Generative AI: Systems that generate text, summarize documents, and enhance creative work.
- Agentic AI: Systems capable of performing actions, such as interacting with workflows or triggering underwriting steps.
Because legislators often lack a deep insurance background, these categories provide a useful starting point for stakeholders to understand the role of AI in insurance. Bridging this gap is essential to bringing technical innovation to insurers and their customers while ensuring the industry remains committed to fairness, transparency, and accountability.
Human oversight plays a critical role in this process. In one instance within the rental car industry, a series of software glitches led to customers being billed thousands of dollars — a mistake that human intervention in the final review stage could have mitigated.
Melofchik highlighted these concerns among policymakers, noting they are especially focused on material changes or adverse determinations such as policy cancellations, nonrenewals, or significant premium adjustments. He argued that feedback from constituents helps fuel their direction, with headlines about “denial by AI” keeping pressure on legislators to react with new policy. Education on insurance principles like risk-based pricing is critical to helping officials balance insurance challenges against other state priorities such as health care and crime prevention.
While fears surrounding AI’s rapid growth may trigger the impulse to shut down the technology, regulators have also increasingly adopted a view of AI as a powerful tool for an industry that has always relied on sophisticated data analytics. Commissioner Godfread explained how this perspective has translated into actionable regulatory oversight, such as the National Association of Insurance Commissioners (NAIC) Principles on Artificial Intelligence, adopted in 2020. These principles prioritize:
- Transparency and explainability: Can a company explain its tools’ processes?
- Safety and integrity: Are company systems secure and are decisions fair?
- Monitoring for bias: Is the company actively checking for unintended bias?
Godfread emphasized that although the tools have evolved, the consumer protection laws foundational to insurance pricing remain unchanged. The ultimate responsibility for a decision lies with the insurance company and its board. He added that the “hardest part” of gaining regulatory approval lies in making complex models understandable. If a model’s output lacks a clear “causation” that makes sense to regulators or the public, it will likely face resistance regardless of its statistical accuracy.
Transparency and consumer trust
Godfread also noted the importance of transparency within telematics, arguing that while the ability to provide granular risk scores is valuable, the industry must shift the conversation from simple correlation to understandable causation. Similar concerns are growing around aerial imagery and drones, particularly when insurers employ drones or satellite images to non-renew policies due to roof conditions. Legislators are exploring bills that would require insurers to provide these images to consumers and allow a “cure period” (e.g., 60 to 90 days) to resolve the issue before losing coverage to ensure the process remains fair and transparent.
The NAIC’s AI evaluation tool pilot aims to develop mutual transparency between insurers and regulators by standardizing how states review and understand AI usage. Key areas of inquiry include:
- System identification: Categorizing the types of AI systems currently in use across the industry.
- Governance evaluation: Reviewing the oversight mechanisms and structures companies have established.
- Risk management: Understanding how organizations identify and mitigate AI-related risks.
The initiative is in its learning phase, Godfread stressed, as the NAIC actively continues to pursue feedback from insurance professionals on whether the tool is effective without being unnecessarily punitive. Such collaboration will be increasingly vital as fundamental principles of risk, such as risk pooling versus “hyper-personalization,” become more contentious. On this point, Godfread admitted the industry is reaching a point wherein the ability to provide individuals with their exact risk score might conflict with the traditional concept of insurance pools. Godfread called the solution “TBD,” indicating the issue will require deep intellectual engagement from both regulators and the industry in the coming years.
Navigating legislative friction and federal preemption
The ongoing value of actuarial judgment
Notably, even “free market” legislators might feel compelled to mandate coverage if the insurance mechanism is perceived as unfair or overly complex, which speaks to an actuary’s role as the critical guardian of model integrity and governance. Ultimately, the ability of actuaries to navigate these issues while maintaining technical accuracy will define the industry’s success in the AI era.
In 2024 I joined the CAS Professionalism Education Working Group (PEWG). Like many actuaries I talk to, I never found professionalism to be the most captivating continuing education (CE) topic. Getting my mandatory credits every year always bordered on being a chore. I felt transitioning from a CE consumer to a CE supplier might challenge me to think about professionalism more critically and in new and interesting ways. Fast forward to March 2026 and, sure enough, it did (with a big assist from Mother Nature)!
Earlier in the year, PEWG leaders reached out to volunteers like me seeking professionalism presenters for the Ratemaking, Product, and Modeling seminar (RPM), which I already planned to attend. One of the requested topics was “professionalism for climate risk.” My initial question was, what does climate risk have to do with professionalism? To learn the answer, I raised my hand to co-present with Michael Chen, FCAS, of Pinnacle Actuarial Resources. We soon learned the answer was “just about everything.”
My flight to RPM in Chicago was massively delayed by Winter Storm Iona, a record-breaking storm system that dumped 52 inches of snow on parts of Michigan, caused wind gusts of 60 mph in Wisconsin,1 spawned tornadoes and thunderstorms across the U.S. South,2 and canceled thousands of flights in addition to mine. The “snowmageddon” event provided an opportunity to stress test our topic in real time. Michael and I had already reviewed prior presentations on “climate professionalism” and most were rote rundowns of Actuarial Standard of Practice (ASOP) No. 38 on catastrophe modeling3 and ASOP No. 39 on treatment of catastrophe losses in P&C ratemaking.4 The refreshers didn’t exactly contemplate the deadly bomb cyclone that, based on our straw poll of the in-room audience, had just affected almost everyone’s arrival to the conference. So we freshly unpacked ASOPs No. 38 and 39 through the lens of Iona, via four questions:
- Was Iona a climate event? Probably. This fell a bit outside the purview of the ASOPs, but it was required to scope Iona into our assigned topic. Significant evidence suggests climate change contributes to increased frequency of bomb cyclones due to increased atmospheric moisture and weaker temperature contrasts across latitudes.5 However, the meteorological community has recoiled a bit at the impact of “runaway verbiage” (e.g., hyperbolic terms such as “bomb”) on public perception.6 Perhaps the meteorological community could benefit from a read of ASOP No. 41 on actuarial communications, which speaks to factors such as use of analysis by unintended users.
- Was Iona a catastrophe? Yes. ASOP No. 39 defines catastrophe as “a relatively infrequent event or phenomenon that produces unusually large aggregate losses” (2.1). Required characteristics are either the potential to display contagion (3.1.a), infrequent occurrence (3.1.b), or both. We deemed bomb cyclones’ frequency of about a dozen per year debatable as “infrequent,” but Iona’s contagion, i.e., “lack of independence between the occurrence of losses among different entities,” undeniable based on our audience’s experience.
- Should Iona be included pro forma in ratemaking? Probably not. This got to the heart of our topic — the nexus between professionalism and climate change. Iona’s diverse peril profile — thunderstorms, tornadoes, blizzards7 — at a minimum stretched actuaries’ ability to precisely associate losses with the event and implement a “consistent definition of a catastrophe” (3.3.1f). It is also debatable whether Iona’s impacts would equally impact existing procedures’ ratemaking covariates (3.3.1.b.1) or, if not, whether corrective action was required or even possible with historical data (3.3.1.b.1-2). One example we gave was business interruption waiting periods. The audience’s flight delays ranged from hours to days, so if we viewed commercial insurance interruptions as potentially having comparable durations, then the range of waiting periods in one’s data would drastically impact the reasonability of passing Iona through pro forma.
- What alternatives exist to including Iona pro forma? Imperfect ones. ASOP No. 39 presents catastrophe provisions based on historical data or modeled losses as potential cures to bias from catastrophe absence or presence in one’s data period (3.4). Both are relatively common in practice. Given its peril profile, Iona was likely represented by multiple catastrophe models — for example, severe convective storm (SCS) or winterstorm8 — and may have also induced non-modeled perils. ASOP No. 38 challenges actuaries to understand the relationship between models’ input and output, precision, component interrelationships, and more (3.3). The practicality of doing so at the breadth of an event like Iona deteriorates. Conversely, more tractable, “non-modeled” approaches such as “excess procedures”9 raise questions over the length of the experience period (ASOP No. 39, 3.3.1.d) and whether “compatible, comparable historical insurance data” exists (3.3.1.b). It may not make sense to smooth Iona over a longer-term period that predated increased occurrence of bomb cyclones or current building standards. Actuaries may also consider whether such smoothed losses are congruent with corresponding trend procedures (3.3.1.e and ASOP No.13).
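For readers unfamiliar with “excess procedures,” here is a minimal sketch of the general idea: losses above a threshold are removed from each year and replaced with a long-term average loading. The loss ratios and threshold are invented for illustration and are not drawn from the cited paper.

```python
# Sketch of a non-modeled excess procedure of the kind ASOP No. 39
# contemplates (3.4): smooth cat spikes via a long-term excess ratio.
def excess_factor(loss_ratios: list[float], threshold: float) -> float:
    """Long-term ratio of excess losses to capped (non-excess) losses."""
    excess = sum(max(lr - threshold, 0.0) for lr in loss_ratios)
    capped = sum(min(lr, threshold) for lr in loss_ratios)
    return excess / capped

# 20 years of hypothetical loss ratios, two of them cat-driven spikes:
history = [0.55, 0.60, 0.58, 1.40, 0.62, 0.57, 0.59, 0.61, 0.56, 0.63,
           0.58, 0.60, 1.10, 0.57, 0.62, 0.59, 0.61, 0.58, 0.60, 0.56]
factor = excess_factor(history, threshold=0.80)
print(f"Excess loading factor: {factor:.3f}")
```

The professionalism questions in the bullet above map directly onto the sketch’s inputs: the length of `history`, whether early years remain “compatible, comparable historical insurance data,” and whether the chosen threshold consistently defines a catastrophe.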
Given that they are principles-based, ASOPs do not usually lend themselves to concrete or even satisfying answers to questions like those above. Events such as Iona provide the opportunity to evaluate potential areas for growth. Since I joined PEWG, I have been reading the ASOPs more, including the appendices, which reflect contemporaneous comments on exposure drafts and subsequent responses and adaptations by the Actuarial Standards Board (ASB). It is encouraging to see how the ASB adapts its work to practitioner feedback, but comments dismissed with prejudice are intriguing to revisit in light of current events.
The ASB’s responses to these questions put responsibility back on the shoulders of practicing actuaries. Its dismissal of the first comment indicates that the ASOP “gives sufficient freedom for the actuary to demonstrate the appropriateness of the resolution of the issues.” Its dismissal of the second retorts “the actuary could become aware of the issues by referring to [outside] experts and make intelligent decisions about the representativeness of the data.” If so, would it make sense for the pertinent considerations to be promoted out of the appendix? Moreover, for long-standing methodologies like those discussed above, it is easy to assume passing the test of time equates to passing the tests of the standards. But is it any safer to assume this than to assume that one’s flight will land at RPM precisely at its estimated time of arrival? Michael’s and my remarks suggested that this is likely not the case, particularly as 100-year events become decadal10 and market responses such as shared and layered (S&L) pricing tend more toward casualty approaches — focusing heavily on attachment points and severity trend leveraging11 — than a typical, ground-up property rate-up.12
Iona was just one of the topics Michael and I unpacked on a blustery St. Patrick’s Day in Chicago, and ASOPs No. 38 and 39 were just two of the ASOPs we reviewed. We also illustrated how climate change activates clauses in ASOPs up, down, and across various practice areas and specializations. As with Iona, we explored these using current events. Our goal was certainly not to confer a precise, “professionally approved” approach to any of the novel events climate change inflicts on actuaries’ data. Rather, it was to remind actuaries that the best check on one’s professionalism is often not reading an ASOP or streaming a NotebookLM summary while waiting for a flight, but stress testing the ASOPs against a current event. I might even go so far as to suggest actuaries do so without delay.
Sources:
- https://www.yahoo.com/news/articles/potential-major-winter-storm-targets-162500781.html
- https://www.cnn.com/2026/03/15/weather/storm-tornado-snow-wind-weekend-climate
- https://www.actuarialstandardsboard.org/asops/catastrophe-modeling-practice-areas/
- https://www.actuarialstandardsboard.org/asops/treatment-catastrophe-losses-propertycasualty-insurance-ratemaking/
- https://seas.umich.edu/news/more-snowmageddon-and-bomb-cyclone-winter-storms-are-our-future
- https://www.nytimes.com/2023/01/18/science/weather-forecasts-language.html
- https://www.travelandtourworld.com/news/article/extreme-weather-chaos-grips-usa-you-need-to-be-aware-of-how-winter-storm-iona-sparks-flight-disruptions-travel-mayhem/
- https://content.cotality.com/catastrophe-risk/navigate/catastrophe-models-by-peril-and-region?overlay=North-America
- https://www.casact.org/sites/default/files/2021-02/pubs_forum_98wforum_98wf209.pdf
- https://www.cbsnews.com/news/hurricanes-rain-flood-risk-more-homes-insurance/
- https://www.actuaries.org.uk/system/files/field/document/IFoA-CAS%20Intl%20Pricing%20Research%20GIRO%20WP%202017-08-Property%20Per%20Risk%20%28reprint%29.pdf, pages 26-27
- https://www.businessinsurance.com/buyers-review-options-as-property-insurance-rates-soar/
While the mathematical foundations of risk are universal, regional regulatory philosophy and market maturity have dramatic impacts on actuarial pricing across the globe. A session at the CAS’ recent Ratemaking, Product, and Modeling seminar explored how these global differences manifest in unique actuarial skillsets, as explained by Akur8 senior actuarial data scientist Kamela Taleb and Akur8 head of product Mattia Casotto.
Defining the global landscape
- Technical Premium = Premium
- Pure Premium = Loss Cost
- Tariff = Rating Plan
To illustrate these discrepancies, Taleb shared an experiment involving a 30-year-old driver with a clean driving record seeking an auto insurance quote in Canada, Japan, the U.K., and the U.S. Despite having a consistent profile, the subject received a wide range of quotes, driven by local market constraints and differing views of risk. Taleb categorized these differences into three archetypes:
- Heavily Regulated Markets: Defined by consumer protection rules, in which every pricing decision requires extensive justification.
- Innovation-Friendly Markets: Defined by competitive positioning and rapid iteration.
- Emerging Markets: Defined by data challenges and opportunities to build modern systems without the burden of legacy infrastructure.
In heavily regulated markets like the U.S. (admitted lines), Canada, and Japan, carriers face prohibited factors such as credit, gender, and age, as well as political pressure that can create gaps between pricing indications and actual charged rates. Conversely, in innovation-friendly markets like the U.K. and Australia, competition forces a high degree of sophistication; carriers face adverse selection if they fail to update their models quickly enough. In emerging markets such as Indonesia and Brazil, limited data and the presence of legacy systems can slow the adoption of more sophisticated underwriting and pricing techniques.
Industry rates and implementation cycles
Several international parallels to this system exist, including the German Insurance Association (GDV) and the General Insurance Rating Organization of Japan (GIROJ). In Japan, companies typically must remain within a 12.5% standard deviation from the GIROJ’s rates, creating structural constraints in which an entire portfolio must comply with a specific “lookup table.”
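For illustration only, here is a sketch of checking such a band constraint. The rate values, and the reading of the 12.5% figure as a symmetric band around the reference rate, are assumptions rather than details from the session.

```python
# Illustrative check of a "lookup table" constraint: does a company's rate
# stay within a +/-12.5% band around a reference rate?
def within_band(company_rate: float, reference_rate: float,
                band: float = 0.125) -> bool:
    """True if the company rate deviates from the reference by at most `band`."""
    return abs(company_rate / reference_rate - 1) <= band

print(within_band(108.0, 100.0))  # True: +8% deviation
print(within_band(115.0, 100.0))  # False: +15% deviation
```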
Such constraints influence the “speed to market” for rate changes in unique ways. In innovation-friendly markets like the U.K., filings are not necessary, which helps drive a rate change cycle of between two and four weeks. That same cycle may require six to nine months in regulated markets like the U.S., which operates under filing and approval regulations such as California’s prior approval pricing process. These environments generate unique actuarial value pressures. Whereas regulated markets reward actuaries for ensuring their decisions are explainable to regulators, competitive markets reward actuaries for understanding customer behavior, competitor repricing, and using tools like price aggregators. In emerging markets, insufficient data access means actuaries are rewarded for simplifying structures for legacy-free environments.
Optimization and the “loyalty penalty”
- Unconstrained: the key driver is the rate indication.
- Constrained: limiting individual impacts to a specific range, sometimes to retention expectations.
- Ratebook: applying rate adjustments across entire segments of the portfolio.
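The “constrained” approach in the list above can be sketched as a simple clipping of indicated rate changes. The cap values and indications here are illustrative assumptions, not figures from the session.

```python
# Minimal sketch of constrained rate optimization: no individual policy's
# rate change moves outside an allowed range.
def constrain_changes(indicated: list[float], floor: float,
                      cap: float) -> list[float]:
    """Clip each indicated rate change to [floor, cap]."""
    return [min(max(change, floor), cap) for change in indicated]

indications = [-0.12, 0.03, 0.25, 0.08, -0.02]  # unconstrained indications
applied = constrain_changes(indications, floor=-0.05, cap=0.10)
print(applied)  # [-0.05, 0.03, 0.1, 0.08, -0.02]
```

The tradeoff is visible in the output: the clipped policies no longer carry their full indicated rate, so the shortfall must be recovered elsewhere or absorbed for the sake of retention.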
Taleb also analyzed the optimization practice “price walking,” wherein insurers gradually charge loyal customers higher premiums than they would quote to new customers with the same risk profiles. One U.K. study found cases where preexisting customers were paying 40% over the technical price while new customers were being offered a 20% discount.
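The arithmetic behind that U.K. example is worth making explicit, since the two percentages compound against each other. A quick sketch, using an arbitrary base price:

```python
# Price-walking gap: loyal customer at 40% above the technical price vs.
# a new customer offered a 20% discount on the same technical price.
technical_price = 100.0  # arbitrary base for illustration
renewal_price = technical_price * 1.40
new_business_price = technical_price * 0.80

gap = renewal_price / new_business_price - 1
print(f"Loyal customer pays {gap:.0%} more than a new customer")  # 75%
```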
In response, the U.K.’s Financial Conduct Authority implemented rules that require renewal prices to be equivalent to new business rates, meaning only new information such as claims history or risky driving behaviors can justify differences. The change forced a structural shift in the industry, as once-separate “New Business” and “Renewal” teams now work toward unified strategic decisions for the entire portfolio. Bans on loyalty-based price walking also rippled across Europe, with bans already in effect in Ireland, and France and Italy currently conducting research into the practice.
A similar regulatory evolution has unfolded for pricing optimization in the U.S., Casotto added. U.S. states began limiting certain optimization techniques as early as March 2014, leading to the NAIC’s adoption of the Casualty Actuarial and Statistical Task Force’s 2015 white paper on price optimization. Regulation in the U.S. is also adapting to advanced modeling techniques: more than 20 states adopted the NAIC’s Model AI Bulletin within 15 months of its issuance in December 2023, and 88% of auto insurers currently use or plan to use AI and machine learning. Additionally, the CAS recently modified its Exam 8 syllabus to include advanced predictive modeling, AI, and machine learning concepts.
Future ratemaking convergence
- Transparency over Complexity: Building increasingly complex models is not viable in the long term. Instead, the focus will shift toward transparent and efficient ratemaking practices.
- Data-Driven Fairness: True fairness will eventually be data-driven, with market players proactively removing historical biases rather than regulation alone.
- Standardization of Constraints: The use of “constrained optimization” will remain standard practice to ensure portfolio stability and customer retention.
They emphasized that technology and regulation together will lead to a more synchronized global pricing standard. Whether operating in a heavily regulated archetype or an innovation-driven one, actuaries must remain agile. Navigating the intersection of analytics, technology, and regulatory philosophy is essential for actuaries to continue making insurance and financial products more affordable, available, and sustainable.
Stepping forward into the wake of COVID red ink in personal auto, now awash in profitability (like the green Chicago River), we witnessed some true innovation coming from data heavyweight champion CARFAX and humble analytic superhero consulting actuarial firm Pinnacle Actuarial Resources. The complexity, structure, and depth of their new model is a true example of innovation matched only by the thoughtfulness of their approach to communicating what’s new and improved to departments of insurance and their experts.
Donald Hendriks, ACAS, ASA, FCA, MAAA, director of analytics, CARFAX Banking & Insurance Group, and Joe Griffin, ACAS, senior consulting actuary, Pinnacle Actuarial Resources, demonstrated the challenges of developing a filing strategy as well as a technical communication strategy to introduce the “newer” nonparametric models to departments of insurance still using the “new” parametric model evaluation methods of the predictive modeling revolution from 20 years ago.
The GBM over GLM differences and similarities were a main attraction at several other sessions during the Ratemaking, Product, and Modeling seminar (RPM), but Hendriks and Griffin were able to share current examples of how storytelling to regulators is making solid headway, or not.
Market profitability in personal auto has swung from the worst performance of the millennium in 2019 to the best in just a five-year period. Indeed, while much of that improvement was brute-force base rate hiking, what comes next for competition is more accurate pricing in a market with value-at-risk volatility like none a current practicing actuary has ever seen. Here is where their innovation shines.
Hendriks demonstrated how vehicle value at risk has levitated above historical relativities. This is further compounded as it intersects and interacts with the most insurance-relevant vehicle feature and safety innovations (such as advanced driver assistance), which have entered the vehicles-in-operation fleet at scale only in the last 10 years or so. The fitment of a variety of technologies onto a “go forward” set of vehicles was a key point: different tech on different vehicles at different times creates more complexity than traditional models can handle effectively.
He also showed how the lingering effects of COVID are creating longer-lasting, higher demand for used vehicles, which compounds the inaccurate-MSRP problem across many additional model years because depreciation is lower for both the $50k version of a vehicle and the $70k one. This hidden truth can compound claim statistics, as higher vehicle values can support higher claim repair costs while still clearing the total loss thresholds used across the industry.
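The total-loss threshold effect just described can be made concrete with a toy calculation. The 75% threshold, repair cost, and vehicle values below are hypothetical, chosen only to show the mechanism:

```python
# Hypothetical illustration of the total-loss threshold effect: when
# used-vehicle values stay elevated, a repair bill that would have totaled the
# car at its normally depreciated value is instead paid as a (large) repair
# claim, raising repair severities. The 75% threshold is a common rule of
# thumb; actual thresholds vary by insurer and jurisdiction.

def is_total_loss(repair_cost, actual_cash_value, threshold=0.75):
    """Total the vehicle when repairs exceed a fixed share of its value."""
    return repair_cost > threshold * actual_cash_value

# Same $18k repair, two valuation regimes (hypothetical numbers):
totaled_pre_covid = is_total_loss(18_000, 20_000)   # depreciated value
totaled_elevated  = is_total_loss(18_000, 28_000)   # elevated used-car value
```

Under the depreciated value the vehicle is totaled; under the elevated value the $18k repair clears the threshold and flows into claim severity instead.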
Griffin and Hendriks demonstrated that the modeling stalwart GLM is less fit for use nowadays, as both the spread in feature complexity and the heterogeneity of values at risk leave underfitting inaccuracies relative to GBM approaches. Their lift comparison showed dramatic improvement in how the GBM methods segmented risks such as older versus newer features and multiple technologies installed versus not installed.
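The kind of lift comparison mentioned above can be sketched with a simple bucketed table. This is a minimal illustration on hypothetical data: “sharp” stands in for a better-segmenting ranking (GBM-like) and “blunt” for a weaker one; neither is a fitted GLM or GBM:

```python
# Minimal sketch of a quantile lift comparison between two risk rankings.
# All data here are hypothetical.

def lift_table(predicted, actual, n_buckets=5):
    """Sort risks by predicted loss cost, split into equal buckets, and
    return the mean actual loss per bucket (lowest to highest predicted)."""
    ranked = sorted(zip(predicted, actual))
    size = len(ranked) // n_buckets
    return [
        sum(a for _, a in ranked[b * size:(b + 1) * size]) / size
        for b in range(n_buckets)
    ]

def top_to_bottom_lift(predicted, actual, n_buckets=5):
    """Ratio of mean actual loss in the riskiest bucket to the safest bucket;
    higher means the model separates risks better."""
    means = lift_table(predicted, actual, n_buckets)
    return means[-1] / means[0]

actual = list(range(1, 21))        # hypothetical actual losses for 20 risks
sharp = actual                      # a ranking that orders risks perfectly
blunt = [a % 7 for a in actual]     # a much noisier ranking
```

Comparing `top_to_bottom_lift` for the two rankings shows how a better-segmenting model earns a higher lift, which is the essence of the GLM-versus-GBM comparisons shown at the session.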
While Griffin and Hendriks showed how their first big step in using vehicle value in rating makes sense, they also demonstrated that there is more work to be done to address depreciation that varies by vehicle type, make, model, and age. While a pre-COVID-to-now slide showed how unprepared prior pricing models were for this type of retained-value problem, there was no discussion of what bumps in the road may lie ahead (tariffs, innovation, war, oil supply, etc.).
They outlined the technical and communication challenges they are facing with filing their models for use in pricing. Examples are predictor importance plots, lift metrics, SHAP values and “beeswarm” plots, and strongly structured filings with deep documentation (from the older 70-page GLM supports to about a 500-page Vehicle Build Score modeling package with a 270-page base and 200 pages of backup materials).
Dealing with the heterogeneous technologies and volatile depreciation swings across years, models, and features means the newer model methods are required. And newer ways of interacting with regulators are needed too.
As Hendriks said, “filing a GBM is new and we are overcoming skepticism. Regulators want competition and innovation in their states but need explainable models — like they did 20 and 30 years ago with GLM models, including by peril and by coverage.”
In summary, consumers want cars with innovations and insurers are hard at work understanding the relative risk of these higher priced options, feature-rich models, and a used car market that is rising above all experience.
Operationalizing Canada’s Federal Guideline OSFI E-23 — Model Risk Management to Deliver Fair Consumer Outcomes
Over the past several years, the CAS, through its research task forces, has extensively researched how various state and international regulators are approaching algorithmic fairness and model bias. As the global actuarial profession transitions from defining these frameworks to operationalizing them, Canada emerges as a live-environment test case. On May 1, 2027, the Canadian insurance industry enters a new era of governance. This date marks the deadline for full compliance with the Office of the Superintendent of Financial Institutions (OSFI) Guideline E-23 on Model Risk Management (MRM).1 While treating E-23 primarily as a rigorous federal compliance checklist is a defensible baseline for many institutions, integrating it with broader market conduct goals creates the foundational infrastructure needed to navigate an environment increasingly scrutinized for algorithmic fairness, specifically the “fair consumer outcomes” mandated by regulators like the Financial Services Regulatory Authority of Ontario (FSRA).
We are entering a period where models, including those for insurance ratemaking and underwriting, should be mathematically sound, legally defensible, and socially fair. A model that is predictive but results in unexplained disparities is no longer just a market conduct issue; under the expanded scope of E-23, it may represent a model risk event or a compliance challenge.
The great convergence: A national imperative
Guideline E-23 alters this landscape by forcing these two worlds to interact. By expanding the definition of Model Risk to explicitly include adverse financial impact such as operational or reputational consequences,1 E-23 provides the governance chassis where these deliberate trade-offs are evaluated, documented, and justified by management.
A market-moving trend
- Ontario: FSRA’s guidance explicitly moves toward principles-based regulation, focusing on outcomes rather than technical rules.
- Québec: The Autorité des marchés financiers (AMF) has released a guideline setting expectations for institutions to manage AI systems based on their impact on consumers.3
While the specific legal mechanisms differ among jurisdictions, E-23 provides the unified governance chassis to adapt to these evolving provincial expectations. Implementing a prudent E-23 MRM framework provides the evidentiary baseline required to demonstrate market conduct compliance to provincial regulators.
The legal landmine: The expiration of the “Zurich defense”
Challenge 1: The rational connection (from correlation to causality)
For instance, in usage-based insurance, heavily penalizing late-night driving might correlate with the shift workers in lower-income brackets. Actuaries should consider using appropriate proxy variable tests to prove the risk lies in the fatigue and visibility of night driving, not the socioeconomic status of the driver.
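The proxy-test idea above can be sketched as a stratified comparison: does the night-driving effect on claim frequency persist once a potential proxy (income bracket) is held fixed? All data below are hypothetical, and real proxy analyses use far more rigorous statistical tests; this only shows the shape of the check:

```python
# Illustrative proxy check on hypothetical data: compare claim frequency for
# night vs. day drivers *within* each income bracket. If the effect vanishes
# once income is held fixed, the variable may be acting as a proxy for
# socioeconomic status rather than capturing fatigue/visibility risk.

from collections import defaultdict

def within_stratum_effect(records):
    """records: list of (is_night_driver, income_bracket, claim_freq).
    Returns {bracket: mean_freq_night - mean_freq_day}."""
    sums = defaultdict(lambda: [0.0, 0, 0.0, 0])  # bracket -> night_sum, n, day_sum, n
    for night, bracket, freq in records:
        s = sums[bracket]
        if night:
            s[0] += freq; s[1] += 1
        else:
            s[2] += freq; s[3] += 1
    return {b: s[0] / s[1] - s[2] / s[3]
            for b, s in sums.items() if s[1] and s[3]}

records = [  # hypothetical (is_night, bracket, observed claim frequency)
    (1, "low", 0.12), (1, "low", 0.14), (0, "low", 0.08), (0, "low", 0.06),
    (1, "high", 0.10), (1, "high", 0.12), (0, "high", 0.05), (0, "high", 0.07),
]
effects = within_stratum_effect(records)
```

A positive gap in every bracket is (weak) evidence that the night-driving variable carries risk signal beyond the socioeconomic proxy, the kind of evidence the rational-connection test asks for.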
Challenge 2: No practical alternative in the age of AI
A structured due diligence framework: The Human Rights Impact Assessment (HRIA)
- Validating the Rational Connection: The HRIA advises insurers to evaluate statistical correlations, utilizing explainability tools to prove that variables are capturing genuine, causal risk drivers rather than acting as proxies for protected classes.
- Proving No Practical Alternative: If an adverse impact is identified, the HRIA recommends an alternatives analysis. By systematically testing less discriminatory models and generating privileged documentation that records the resulting degradation in predictive accuracy and financial viability, the HRIA establishes the evidentiary baseline required to debate “undue hardship” or lack of a commercially viable alternative before a regulator.
Integrating the HRIA into the E-23 validation process does not grant statutory immunity. However, it ensures that if an insurer retains a model with disparate impact, they do so with a documented defense that the model represents a sound insurance practice with no viable commercial or technical alternative.
Operationalizing E-23: Integrating model compliance risks into the model life cycle
- Risk Rating and Management Intensity: Insurers should establish a risk rating that moves beyond financial materiality to include key dimensions of compliance risk. For rating and underwriting applications, the significance of human impact, the likelihood of discriminatory harm, and the required level of explainability are critical factors in the inherent risk rating. These ratings drive the downstream model life cycle, determining model usage limits, monitoring intensity, and the escalation of residual risk management decisions.
- Model Rationale and Documentation: Model owners should provide a clear rationale for deployment that explicitly addresses market conduct and fair consumer outcomes. This includes documenting considerations for the required level of transparency and explainability, as well as a proactive assessment of the potential for biased outcomes, negative social and ethical implications, or privacy risks.
- Model Data and Development: The guideline expands data governance requirements from primarily accuracy concerns to broader facets: data should be relevant, representative, compliant, traceable, and timely. Insurers should enhance model explainability by analyzing the potential for unwanted data bias to translate into unfair model outputs and associated reputational risks. Clear, consistent, and repeatable practices for model development should be established to ensure that explainability standards are met, with rigor varying based on regulatory requirements and the potential impact on customers.
- Model Review and Deployment: E-23 requires independent model review to confirm that the model outputs are appropriately explainable and comply with performance expectations before the model impacts a consumer. Crucially, deployment might necessitate conditional approval subject to outcome monitoring to detect whether “fairness drift” occurs post-launch, ensuring that the model remains fair not just in the test environment, but in the real world.
By operationalizing these E-23 principles, insurers can ensure that the necessary evidence for the “Zurich Defense,” i.e., the proof of diligence and the testing of alternatives, is sufficient and documented as part of the standard, enterprise-wide control cycle.
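The risk-rating step described above can be sketched as a small tiering function. The dimensions echo the article (human impact, likelihood of discriminatory harm, required explainability), but the scoring scale and thresholds are purely illustrative, not taken from the E-23 guideline text:

```python
# Hypothetical sketch of an E-23-style inherent risk rating. The 1-3 scoring
# scale and the thresholds below are illustrative assumptions, not from OSFI.

def inherent_risk_tier(human_impact, harm_likelihood, explainability_need):
    """Each dimension scored 1 (low) to 3 (high); returns a compliance tier
    that drives monitoring intensity and escalation requirements."""
    score = human_impact + harm_likelihood + explainability_need
    if human_impact == 3 or score >= 8:
        return "high"    # full fairness assessment (e.g., HRIA) before deployment
    if score >= 5:
        return "medium"  # proxy testing plus post-deployment outcome monitoring
    return "low"         # inventory entry and periodic performance review only
```

A rating model touching individual consumers would score high on human impact and land in the top tier, while an aggregate reserving model would land in the bottom tier, mirroring the proportionality E-23 expects.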
The E-23 Perimeter: A risk-based expansion beyond ratemaking and underwriting
The regulatory dividend: Enterprise-wide confidence
The risk-based expansion can be illustrated through three tiers of operational reality:
- High compliance risk models do not always calculate a premium; they can act as gatekeepers to the quoting process itself. Consider an algorithmic point-of-sale fraud model that evaluates a digital footprint. If an applicant is scored as “high risk,” the system intentionally injects quoting friction, such as blocking the direct-to-consumer online rate and forcing a manual broker call. If this model relies on proxy variables that systematically flag specific minority cohorts, it could constitute a discriminatory barrier to entry for a mandatory financial product. Because these models dictate fundamental, equitable access to coverage, those resulting in systematic, disparate barriers require a full “reasonable and bona fide” assessment. Insurers should use human impact assessment tools like the HRIA to prove the fraud variables capture genuine, causal risk rather than acting as protected-class proxies and explicitly demonstrate a lack of less discriminatory screening alternatives.
- Medium compliance risk models prioritize convenience, creating an indirect fairness impact that requires lighter control. For example, a claims triage model that decides who gets instant approval versus standard handling creates a conduct risk if one group is systematically slowed down, but it does not accuse the customer of fraud. While these models may not demand an exhaustive assessment, they need sufficient pre-deployment proxy testing on historical data combined with automated post-deployment circuit breakers to ensure service level disparities remain within acceptable bounds.
- Low compliance risk models have remote or nonexistent human impact. Applying fairness testing here would be a misuse of resources. For example, actuarial reserving models operate on aggregate data pools to ensure solvency. While crucial for financial stability, they do not make individual decisions about consumers. For these models, impact assessment tools like the HRIA are non-applicable. The focus remains on the traditional pillars of performance and stability. By explicitly categorizing these as low compliance risk that are subject only to light inventory requirements, the insurer demonstrates the “proportionality” required by OSFI, preserving resources for the highest impact models.
The path forward: Operationalizing E-23 to deliver fair consumer outcomes
To navigate this successfully, the industry should focus on:
- Integrated life cycle management: The end-to-end model life cycle should explicitly integrate model compliance parameters for fair consumer outcomes.
- Risk-based governance: Governance rigor should be proportional to the model compliance risk parameters such as bias, fairness, explainability, and human impact.
- Evidentiary escalation versus risk acceptance: Market conduct violations cannot be formally accepted like financial and insurance risks. Models exhibiting unmitigated disparate impact should be escalated to senior management and legal counsel strictly to validate the “Zurich defense” prior to deployment.
The convergence of E-23 and FSRA requirements on fair consumer outcomes represents the current trajectory. Actuaries should review their model inventories not just for financial materiality, but also for compliance materiality. As the industry transitions into a regulatory environment that demands higher transparency, proactively operationalizing risk-based fairness provides the essential infrastructure to navigate these evolving standards effectively.
References
- Office of the Superintendent of Financial Institutions (OSFI). Guideline E-23: Model Risk Management (2027)
- Financial Services Regulatory Authority of Ontario (FSRA). Guidance: Automobile Insurance Rating and Underwriting Supervision (No. AU0142INT)
- Autorité des marchés financiers (AMF). Guideline for the Use of Artificial Intelligence. June 2025.
- Zurich Insurance Co. v. Ontario (Human Rights Commission), [1992] 2 S.C.R. 321.
- Law Commission of Ontario & Ontario Human Rights Commission. Human Rights Impact Assessment (HRIA) for AI. November 2024.
Professionalism Briefs
In the March/April 2026 AR, we covered the three ASOPs applicable to all actuarial services regardless of the practice area. They are ASOP 1 – Introductory Actuarial Standard of Practice, ASOP 23 – Data Quality, and ASOP 41 – Actuarial Communications. We also talked about the Applicability Guidelines (AGs). To recap, the AGs are published by the Council on Professionalism and Education of the American Academy of Actuaries and aim to help actuaries consider which ASOPs may provide guidance based on the scope of their role. They are not definitive statements of what generally accepted practices apply to a specific task and should not replace the actuary’s professional judgment. The AGs, published as an Excel file, can be found on the Academy’s website; just click on the Professionalism tab > Actuarial Standards of Practice > Applicability Guidelines. You can also access them through the Understanding Professionalism link.
In this article, we will focus on AG item 4.0 under the Casualty tab: “Expert Advice, Witness, and/or Testimony.” The only ASOP listed under this heading is ASOP 17 – Expert Testimony by Actuaries. This ASOP should be used in conjunction with any standards relating to the subject on which you provide expert advice.
ASOP 17 was originally adopted in 1991, revised in 2002, and further updated in 2011 and 2018. The latest version became effective for all expert testimony provided by the actuary on or after December 1, 2018.
The ASOP defines some key terms including “Actuarial Assumption,” “Actuarial Method,” “Expert,” “Principal,” and “Testimony.” It defines an “Expert” as “someone who is qualified under the evidentiary rules applicable in the forum to testify as an expert, whether explicitly or by acceptance of the actuary’s testimony. An actuary who has been engaged to testify, or permitted to testify, with the expectation that the actuary will ultimately qualify as an expert is treated as an expert for purposes of this standard, even if the actuary does not testify or is later determined to not qualify as an expert.”
“Testimony” is defined as “a communication of opinions or findings presented in the capacity of an expert witness at trial, in hearing or dispute resolution, in deposition, by declaration or affidavit or by any other means through which testimony may be received. Such testimony may be oral or written.”
An “Expert” may explain complex technical concepts, so they can be understood by the audience receiving the testimony, most of whom may not be actuaries. Even though actuaries may differ in their conclusions, “a mere difference of opinion between actuaries does not suggest that an actuary has failed to meet professional standards.”
An “Expert” will ordinarily work closely with the attorney or other representative of the “Principal” and may reasonably rely upon the advice, information, or instruction provided concerning the meaning and requirements of the rules of evidence or procedure and any other applicable rules. “[R]elying on such advice … is not in violation of this standard….” The actuary should disclose if they believe that a relevant law or regulation contains a material conflict with appropriate actuarial practices, subject to the requirements of the forum, including without limitation all rules of evidence and procedure.
Let’s look at some hypothetical scenarios where an actuary may be called to provide expert testimony; we note that any similarities to actual events are purely coincidental.
An actuary is employed as an expert witness by the U.S. Internal Revenue Service in a case where a captive is experiencing consistently low loss ratios. A captive that takes in premium but rarely, or never, pays out losses may indicate a lack of risk transfer; in this case, the captive acts to shift pretax dollars into an entity with a lower tax burden. The expert may be called in to review frequency and severity assumptions to determine whether the premium is reasonable. In this type of situation, the actuary may want to consider additional ASOPs, such as ASOP 53 (Estimating Future Costs for Prospective Property/Casualty Risk Transfer and Risk Retention) and ASOP 38 (Catastrophe Modeling), when performing their assessment.
Another example is a case of arbitration between two insurers, where one has purchased a subsidiary from the other. In this case, the subsidiary has experienced a deterioration in loss ratios since being purchased, and the purchasing insurer alleges that the subsidiary’s liabilities were materially understated. For this situation, expert testimony may involve a third-party, independent actuary performing an analysis of reserve estimates at the time of sale to determine whether the methods and assumptions used were outside of a reasonable range. The actuary may leverage ASOP 43 (Property/Casualty Unpaid Claim Estimates) and ASOP 23 (Data Quality) in their determination.
A third case where an actuary may provide expert testimony is in a regulatory rate hearing. The actuary may provide evidence that an insurer’s rate increase is excessive compared to its trends in loss, expense, and investment income. They may also opine about whether a company has a target profit that is excessive compared to the risk being insured. ASOP 13 (Trending Procedures in Property/Casualty Insurance) and ASOP 29 (Expense Provisions for Prospective Property/Casualty Risk Transfer and Risk Retention) may be cited by the actuary in their testimony.
Actuaries providing expert testimony may benefit from this general guidance on best practices:
- Uphold professional independence and integrity: We, as actuaries, should maintain our independence from the client and avoid being an advocate for a specific side or outcome. We should be honest and not let pressure influence our conclusions; sometimes, this may include turning down assignments. Upholding actuarial professional integrity should always be a priority.
- Documentation and technical rigor: Be extremely detailed and meticulous with your documentation. Try to anticipate what the opposing side may challenge, even the smallest details. It’s also not uncommon that you rely on other experts (e.g. catastrophe modelers, claims experts, or attorneys), though this reliance should be disclosed.
- Anticipate the adversarial nature of the process: The environment is inherently adversarial; however, most matters are resolved in arbitration rather than at trial, and opposing sides can often reach a middle ground where compromises are made. Cross-examinations in depositions and trials are also adversarial in nature and require careful preparation with attorneys.
- Communication and audience awareness: Very often, we will not be delivering our findings to experts. We must understand the background of our audience and assess their level of knowledge. Whether we are presenting our findings to judges, juries, or arbitrators, we must be able to communicate clearly.
Most of this general guidance can be applied to our daily work, hopefully except for presenting in an adversarial and disputed context. You may refer to ASOP 17 for additional guidance on hypothetical questions, cross-examination, and other related topics.
Understanding ASOP 17 will help make the expert witness process clearer, more consistent, and more professional. This is just an example of how to use the Applicability Guidelines for a specific Description of Assignment. There are six more major categories that we encourage you to explore and consider how each aligns with your practice.
When was the last time you referred to the Applicability Guidelines? We want to hear your thoughts at ar@casact.org.
German actuary Paul Louis Riebesell proposed the popular “Riebesell form” for increased limit factors (ILFs) in the 1930s. It is still commonly used in the pricing of liability insurance and reinsurance around the world because it is convenient to apply in practice and its parameter is easy to estimate. However, the Riebesell form can prove too heavy-tailed when applied to some liability insurance lines in certain insurance markets. Here we propose a “modified Riebesell form” for ILFs that can fit the distribution of ILFs better in those scenarios.
Loss Cost(Increased Limit) = Loss Cost(Basic Limit) *ILF((Increased Limit)/(Basic Limit)).
The essence of ILFs is to quantify the multiplicative relationship between the loss cost of basic limit and loss costs of different policy limits. Its definition can be formally given as follows:
ILF(M) = ILF((Increased Limit) / (Basic Limit)) = LAS(Increased Limit) / LAS(Basic Limit),
where M is the multiple between the increased limit and the basic limit, while LAS stands for Limited Average Severity defined as:
LAS(Limit) = E[min(Loss, Limit)] = ∫_0^Limit L * f(L) dL + Limit * [1 – F(Limit)].
Here, f(L) and F(L) are the probability density function and the cumulative distribution function (CDF) of the loss, respectively. In other words, the LAS for a given limit is the expected value of severity capped at the given policy limit.
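With empirical data, the LAS for a limit is just the average of losses capped at that limit, and an empirical ILF is the ratio of two such averages. A minimal numerical illustration, on a small hypothetical loss sample:

```python
# Numerical illustration of LAS and ILF on a tiny hypothetical loss sample.
# LAS(limit) = mean severity capped at the policy limit; the ILF between two
# limits is the ratio of their LAS values.

def las(losses, limit):
    """Limited average severity: mean of losses capped at the limit."""
    return sum(min(x, limit) for x in losses) / len(losses)

def ilf(losses, increased_limit, basic_limit):
    """Empirical increased limit factor between two policy limits."""
    return las(losses, increased_limit) / las(losses, basic_limit)

losses = [50, 120, 300, 800, 2500]   # hypothetical ground-up losses
```

Here `las(losses, 500)` is 294.0 and `las(losses, 1000)` is 454.0, so the empirical ILF from a 500 basic limit to a 1000 limit is about 1.54.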
ILF(M) = r^(log2 M),
where r is the Riebesell factor and M is the multiple between the increased limit and the basic limit as defined above.
The Riebesell factor r has a convenient property in the practice of liability insurance pricing. It is the loss cost at two times the basic limit divided by the loss cost at the basic limit, and it is also equal to the loss cost at four times the basic limit divided by the loss cost at two times the basic limit, and so on. Therefore, if the Riebesell form works well in practice, we can easily obtain the Riebesell factor by dividing the loss cost at two times the basic limit by the loss cost at the basic limit. The Riebesell form may be quite suitable for some heavy-tailed liability risks, such as the product liability line in the U.S.
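The doubling property is easy to verify directly. A minimal sketch, using a hypothetical Riebesell factor of r = 1.25 (not a value from the article):

```python
import math

# Check of the Riebesell doubling property: under ILF(M) = r ** log2(M),
# every doubling of the limit multiplies the loss cost by the same factor r.
# r = 1.25 is a hypothetical value for illustration.

def riebesell_ilf(M, r):
    """Original Riebesell form: ILF(M) = r ** log2(M)."""
    return r ** math.log2(M)

r = 1.25
```

For example, `riebesell_ilf(4, r) / riebesell_ilf(2, r)` equals r, as does `riebesell_ilf(8, r) / riebesell_ilf(4, r)`, matching the property described above; the form is also equivalent to M raised to the power log2(r).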
However, for some other liability risks that are not so heavy-tailed, such as general liability insurance in China, the Riebesell form often does not work well. It is often identified that the relativity for the loss cost of four times the basic limit divided by the loss cost of two times the basic limit is smaller than that of the relativity for the loss cost of two times the basic limit divided by the loss cost of the basic limit. As well, the relativity for the loss cost of eight times the basic limit divided by the loss cost of four times the basic limit is usually smaller than that of the relativity for the loss cost of four times the basic limit divided by the loss cost of two times the basic limit, and so on. The exact rate of ILF decay may depend on different markets’ litigation environments and how quickly liability claims escalate through towers of coverage — and the Riebesell form is too inflexible to reflect this.
ILF(M) = M^s,
where s = log2 r. This result follows from rearranging and rebasing terms in the formula from the previous section in the following manner:
ILF(M) = r^(log2 M) = r^(ln M / ln 2) = (r^(1 / ln 2))^(ln M)
= (e^(ln r / ln 2))^(ln M) = e^((ln M * ln r) / ln 2)
= (e^(ln M))^(ln r / ln 2)
= M^(ln r / ln 2) = M^(log2 r) = M^s.
In order for the original Riebesell form to be applied, the loss cost of liability insurance must be heavy-tailed enough to satisfy the CDF:
F(x) = 1 – a * x^(s – 1),
where s must be less than 1 (that is, r < 2) and x must be greater than a^(1 / (1 – s)) for F(x) to be a valid CDF.1 It can be proven that the expected value for this CDF does not exist. In practice, however, this CDF is too heavy-tailed for some liability insurance products. Gary Venter identified this problem in one of his articles,2 in which he regarded the above CDF as a kind of Pareto distribution with shape parameter less than one, which is too heavy-tailed for some liability insurance products.
ILF(M) = r^((log2 M)^α).
The modified Riebesell form has two parameters, of which α controls the tail shape. Usually, α is less than one. The original Riebesell form is the special case of the modified form with α = 1. Under the modified Riebesell form, the tail of the increased limit factor curve turns thinner as α decreases, as shown in Figure 1.
1.403 = 1.157^2.322 = 1.157^((log2 5)^1.0)
1.172 = 1.157^1.088 = 1.157^((log2 5)^0.1)
More information on the selection of r = 1.157 is presented in the next section.
For illustration, we apply both approaches and compare their results to empirical ILFs for a simulated portfolio of Chinese general liability losses. For the original Riebesell form, we directly use the empirical ILF at two times the basic limit as the estimate of the parameter r, which is 1.157 (the same value of r used to produce the curves in Figure 1). Minimizing a mean squared error (MSE) loss function for the modified Riebesell form yields a fitted parameter α of 0.238.
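The fitting step can be sketched as a simple grid search over α. This is a minimal illustration: the empirical ILF values below are hypothetical (they are not the article’s simulated Chinese portfolio), and only r = 1.157 comes from the example above:

```python
import math

# Sketch of fitting the modified Riebesell form by MSE grid search over alpha.
# The "empirical" ILFs below are hypothetical stand-ins; r = 1.157 is taken
# directly as the observed ILF at two times the basic limit (log2(2) = 1, so
# ILF(2) = r for any alpha).

def modified_ilf(M, r, alpha):
    """Modified Riebesell form: ILF(M) = r ** (log2(M) ** alpha)."""
    return r ** (math.log2(M) ** alpha)

def fit_alpha(empirical_ilfs, r, grid_steps=1000):
    """Grid-search alpha in (0, 1] minimizing mean squared error against
    empirical ILFs given as {limit multiple M: observed ILF}."""
    def mse(a):
        return sum((modified_ilf(M, r, a) - v) ** 2
                   for M, v in empirical_ilfs.items()) / len(empirical_ilfs)
    return min((i / grid_steps for i in range(1, grid_steps + 1)), key=mse)

empirical = {2: 1.157, 4: 1.22, 8: 1.26}   # hypothetical observed ILFs
alpha = fit_alpha(empirical, r=1.157)
```

Because the hypothetical ILFs decay faster than the original form implies (1.157^2 ≈ 1.339 versus 1.22 at four times the basic limit), the fitted α comes out well below one, thinning the tail as the modified form intends.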
- The derivation of F(x): From ILF(M) = M^s = E[min(X, M*B)] / E[min(X, B)] = (∫_0^(M*B) [1 – F(x)] dx) / E[min(X, B)], we obtain ∫_0^(M*B) [1 – F(x)] dx = E[min(X, B)] * M^s. Differentiating both sides with respect to M gives [1 – F(M*B)] * B = E[min(X, B)] * s * M^(s–1), which in turn implies F(M*B) = 1 – E[min(X, B)] * s * B^(–1) * M^(s–1). Substituting y = M*B (so M = y/B) yields F(y) = 1 – E[min(X, B)] * s * B^(–1) * (y/B)^(s–1) = 1 – E[min(X, B)] * s * B^(–s) * y^(s–1). Since E[min(X, B)] * s * B^(–s) is a constant independent of y, this is F(y) = 1 – a * y^(s–1).
- Gary Venter’s article may be found at http://www.garyventer.com/wp-content/uploads/2018/09/Venter-Pagliaccio-2005-Distributions-Underlying-Power-Function-ILF-%E2%80%99-s-Riebesell-Revisited-.pdf
The CAS AI Primer
- Provide a concise overview of AI concepts and applications relevant to actuarial work.
- Highlight potential risks and outline best practices for responsible AI use.
- Outline key corporate and regulatory considerations that shape AI implementation in actuarial contexts.
- Direct readers to trusted learning resources for building deeper AI literacy and practical skills.
It’s a Puzzlement
Lila, a lead software architect at an AI research lab, is debugging a self-modifying neural-network training script. Every second, the remaining unreviewed portion of the code grows by exactly 1% of its current length as new training data continuously streams in. Lila can examine and fix code at a constant rate of 100 lines per second. When she sits down to begin, there are exactly 1,000,000 lines left to review.
Will Lila ever finish debugging the entire script? If so, exactly how many seconds will it take her?
No one was cheated, because we can think of an identical situation in which everyone in the circle agrees to clear their mutual debt. For example: Alice owes Bob $100 and Charlie owes Alice $100, so Alice can clear her debt by having Charlie owe Bob $100 instead. Everyone does this until two people are left who owe each other $100; they agree to clear it.
Now Alice wins $50 from a scratch off. The end result is the same—Alice keeps $50, no one has debt.