This presentation explores the evolving intersection of social media and insurance coverage, emphasizing the rapidly growing risks and claims stemming from platforms like TikTok. Legal, regulatory, and market-based concerns are brought to light, especially as technologies such as AI integrate into social platforms. Peter Calleo and Troy Shuman of Enstar, Grace Seigel of Guy Carpenter, and Jonathan D. Henry of Dentons shared their insights into these expanding risks and the coverage issues they raise. A video replay of the presentation is available to AIRROC members via the AIRROC On Demand library.
Current Claims Landscape
We have all become familiar with the major players in the social media platform space—Facebook/Meta, TikTok, Snapchat, Twitter/X. Troy Shuman noted that each of these platforms has active litigation revolving around the way it operates. These companies make money from advertising by driving people to their platforms and then keeping users there for as long as possible. This business model can give rise to a variety of claims relating to the content being presented, the addictive nature of the platforms, and the impact these factors may have on individuals.
Social media has introduced a new frontier for insurance claims, particularly in the areas of product liability linked to social media addiction, phishing, social engineering attacks, and privacy violations like breaches of the Biometric Information Privacy Act (BIPA). The rise of AI-enhanced platforms has further widened the spectrum of potential liabilities.
Troy pointed out that a key question currently being litigated is whether these platforms are products or services. While the platforms are created and presented to users, the users are then in command of how they choose to interact with them. This raises the question of whether product liability law governs the claims that arise or whether standard negligence principles apply. Jonathan Henry detailed the serious nature of the claims that have recently arisen from users of TikTok, which include injuries and death. As these cases evolve, underlying issues regarding the intentional design of platforms to drive addictive behaviors become more prominent. Grace Seigel also added that “social media” does not have a single accepted definition. Not all internet-based platforms operate the same way: some rely on platform-driven content pushed out to users, while others are driven more by user interactions and posts.
Troy also discussed the pros and cons of litigating these cases in state courts versus federal courts. It is hard to pin down which is more ideal, as the answer depends on which courts are available within the jurisdiction and on the underlying case in question. Peter Calleo pointed out that some courts tend to favor defendants more than others.
Jonathan provided a summary of Section 230 of the Communications Decency Act, which, as it stands, essentially shields companies from liability for the content posted on their platforms by users. It is often referred to as the “26 words that created the internet” because it created a framework for user-generated content. With this in mind, what liability can exist? Courts have debated whether Section 230 protects platforms from these claims, given that the platforms make editorial decisions within their algorithms to drive maximum viewership and clicks.
Risk Expansion & AI
Artificial intelligence introduces a novel and urgent risk dynamic. Citing a high-profile lawsuit involving Character.AI and the tragic suicide of a teen, the presentation compared AI-driven platforms to historically harmful products like asbestos. The argument is that AI tools can become addictive and psychologically damaging, particularly to minors. Class action claims may become easier to substantiate as the commonality of harm among affected users increases.
The voidability of End User License Agreements (EULAs), especially where minors are concerned, opens further liability channels. AI-generated content and algorithmic engagement strategies are reframing traditional claims in a modern context.
The phenomenon of social inflation is accelerating. Social media companies are reducing their content moderation, creating fertile ground for claims such as:
- Defamation,
- Invasion of privacy, and
- Emotional distress.
These trends indicate a longer tail for liabilities. New forms of expression are giving rise to traditional legal complaints under new guises, particularly within personal and advertising injury claims.
Impacts on the Current Market
Insurers must evolve to address these challenges effectively. Key recommendations include reevaluating the due diligence process for insureds, asking targeted questions when reviewing books involved in litigation, and considering enhancements to policyholder defense strategies in anticipation of novel and potentially protracted litigation arising from social media risks. Grace Seigel pointed out that, due to the complicated and rapidly evolving nature of these claims, exclusions are being added to policies.
The presentation closed by encouraging further discussion and questions, emphasizing the urgency with which insurers must adapt to this rapidly shifting digital and legal landscape. The core message: social media and AI are redefining exposure, and the clock is ticking for insurers to stay ahead.