The Tort Storm Has Arrived
What the Meta and YouTube Verdicts Mean for AI Liability
This week, two juries, one in California and the other in New Mexico, handed tech companies their most consequential legal defeats to date. The verdicts confirm what we at Fathom have been arguing for months: tort law and technology are on a collision course. They also expose a fundamental problem with the current state of tech regulation: courts are now doing the work of assigning accountability where legislatures have been asleep at the wheel. The consequences, for the safety of users and for the state of American innovation alike, are profound.
Tech on Trial
On Wednesday, a Los Angeles jury found Meta and YouTube negligent in the design of their platforms. The court awarded $6 million in combined damages to a plaintiff who alleged that she became addicted to Instagram and YouTube as a child, leading to depression, anxiety, and body image issues. The jury determined that the companies were negligent in platform design, that negligence was a substantial factor in causing harm, that the companies failed to adequately warn users of the dangers, and that both companies acted with malice (meaning, under California law, highly egregious conduct).
This case was selected as a bellwether in California’s Judicial Council Coordination Proceedings (JCCP), with the understanding that the outcome would help guide the resolution of approximately 2,000 pending lawsuits brought by parents and school districts against social media companies across the country. As one plaintiffs’ attorney put it, this trial was “a vehicle, not an outcome.” Critically, the case was argued and decided on a negligence theory focused on platform design (i.e., on features like infinite scroll, autoplay, and notification systems), not on the content that users encountered, meaning that Section 230 of the Communications Decency Act was not implicated.
TikTok and Snap, originally co-defendants, settled before trial—an early sign that the industry is already beginning to price in liability exposure.
Echoes of Big Tobacco
One day earlier, a jury in Santa Fe ordered Meta to pay $375 million for violating New Mexico’s Unfair Practices Act. The jury found that Meta misled the public about the safety of its platforms, engaged in trade practices that exploited children’s vulnerabilities, and concealed what it knew about child sexual exploitation. The case was built on an undercover investigation in which state agents posed as children on social media and documented the sexual solicitations that followed. It relied heavily on Meta’s own internal communications and whistleblower testimony, a litigation strategy mirroring the one used in the tobacco trials of the 1990s: the company knew, and the company concealed.
A second phase will begin in May, when a judge will determine whether Meta created a public nuisance and whether the company should be ordered to make specific changes to its platforms, including implementing effective age verification and removing predators. This second phase could result in a court directly mandating how Meta designs its products—a decision more consequential than the damages themselves.
The Coming Tort Storm
What happened this week is the beginning of a wave. In our 2026 predictions, we wrote that “AI is a tort nightmare waiting to happen” and predicted that without a standard of care defined upfront by experts, it would increasingly fall to the courts to determine liability.
The California bellwether alone is tied to approximately 2,000 pending lawsuits. A separate federal trial is set to begin this summer in the Northern District of California, consolidating hundreds of claims by school districts and parents nationwide. More than 40 state attorneys general have filed lawsuits against Meta. And beyond social media, the same pattern of foreseeable harm, available but unimplemented safety measures, and failure to warn is emerging in AI chatbot cases (including the lawsuits against Character.AI and OpenAI over teen self-harm and suicide), in deepfake harms, and across other categories where AI-powered products create risks that developers know about but fail to adequately mitigate.
The key question is: what should we do about it? For many, the answer may simply be, “Let tech have its comeuppance.” That anger is understandable. But surely there are better outcomes for everyone than ever-growing liability exposure. Consumers and families deserve technology that strives to be as safe and responsible as possible before litigation is filed, so that far fewer people are harmed in the first place. Companies that deploy AI need enough clarity in the system to build the right kind of protections into their products from the outset. And if AI adoption is, indeed, a necessary part of the American economy and our global leadership, we need a way to establish safety without inviting millions of lawsuits.
Everyone harmed deserves their day in court. But some of the questions being answered in courtrooms may be better addressed earlier, and by independent experts. The negligence framework requires non-expert judges and juries to determine after the fact whether a defendant exercised reasonable care; that is, whether their conduct created or failed to avoid unreasonable risks of foreseeable harm. This is a judgment that depends on the likelihood and severity of the harm and the relative burden of taking additional precautions (as we’ve discussed here).
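This balancing test has a well-known shorthand in American tort law, the Learned Hand formula from United States v. Carroll Towing (1947), offered here purely as an illustration of the reasoning juries are asked to perform, not as a test either of this week’s courts formally applied:

B < P × L

A defendant has failed to exercise reasonable care when B, the burden of the untaken precaution, is less than P, the probability of the harm, multiplied by L, the magnitude of the loss if it occurs. In the platform context, P and L are contested expert questions about adolescent psychology at population scale, and B is a question of product engineering and business tradeoffs, which is precisely the calculus lay jurors are being asked to perform after the fact.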
In the California case, twelve ordinary citizens, none of them platform engineers, machine learning researchers, or child development experts, had to determine whether Meta’s and YouTube’s platform design was “negligent.” They deliberated for nearly 44 hours over nine days. They heard competing expert testimony, reviewed internal documents, and ultimately rendered a binary verdict: liable or not liable. And they made their best judgment.
Even assuming they made the correct call, this verdict tells other companies nothing actionable about what specific design practices constitute reasonable care going forward, or what standard of care AI companies should be meeting tomorrow. And by the time the appeals in this case are resolved, which could take years, the technology will have evolved dramatically. The design features at issue will have been updated, replaced, or supplemented by new ones. The tort system looks backward. The technology moves forward.
IVOs: Stepping into the Breach
Courts should hold companies accountable. But accountability after the fact is not the same thing as safety going forward.
We need a system that empowers experts to determine the standard of care before a harm is committed, so that AI companies actually know what is expected of them and can be held accountable if and when they fall short. Independent Verification Organizations are designed to do exactly that: they define and enforce a heightened standard of care, through proactive verification rather than retroactive litigation.
Under the IVO framework, the government sets outcome-based objectives: goals like ensuring AI systems do not provide material encouragement of self-harm, or that platform design features do not exploit the developmental vulnerabilities of minors. Independent, expert-led organizations then develop the technical criteria and testing protocols to verify whether companies are meeting those outcomes. Companies that achieve verification earn a trusted seal of approval signaling that a heightened standard of care has been met. Critically, these criteria are continuously updated to keep pace with technological change, something no legislature, regulator, or court can do.
Had an IVO framework been in place, Meta would have needed to demonstrate, through independent expert verification, that its recommendation algorithms and engagement features met specific, continuously updated criteria for minor safety before the harms at the center of this week’s cases occurred, and before any lawsuit was filed. The standard of care would not have been determined for the first time by twelve jurors in 2026, years after the fact. It would have been defined, applied, and enforced by experts in real time, with Meta’s verification status dependent on meeting it. And if the system worked as intended, Meta would have been required to make design choices that might well have prevented the harms from ever occurring.
A Better Approach for Better Outcomes
This week’s verdicts are both a watershed and a warning. Without a proactive, expert-led system to define what reasonable care looks like, and to update that definition as the technology evolves, we will be left with a patchwork of jury verdicts: emotional catharsis for some, a liability cost baked into the sector for everyone, and far too little of the practical guidance that actually prevents bad societal outcomes.
Legislation to establish IVO frameworks has been introduced in several states. We are working with partners internationally to scale the model, and are building up an ecosystem of assurance providers and deployers to operate in the IVO marketplace. The appetite for a better approach from policymakers, industry, and the public is real, and growing. The question is whether we act now, before the next verdict—or whether we wait until an army of Kevin Malones accidentally brings about the next tragedy.


