A Los Angeles jury just told Meta and Google something the industry has spent two decades avoiding: The way you build your products is itself a form of liability.
On March 25, a California jury found Meta and Google responsible for the depression and anxiety of a young woman who became addicted to Instagram and YouTube as a child. The $6 million damages award is financially irrelevant to two of the most valuable companies on earth. The precedent is not.
One day earlier, a separate New Mexico jury ordered Meta to pay $375 million for misleading consumers about child safety on its platforms. Together, these verdicts represent the first time juries have held technology companies liable — not for what users posted, but for how the platforms themselves were designed.
That distinction is the legal heart of the matter.
Are Social Media Companies Liable for Product Design?
A landmark 2026 legal shift has established that tech companies can be held liable for product design defects rather than just user content. Key factors include:
- Design as Liability: Features like infinite scroll, autoplay and notification systems are now viewed as engineered defects that can cause psychological harm.
- Section 230 Limits: While Section 230 protects platforms from liability for what users post, it does not shield them from harms caused by the platform’s internal architecture.
- The Big Tobacco Moment: Social media could follow the tobacco industry’s trajectory if companies keep optimizing engagement mechanics despite internal evidence of harm.
- Legal Precedent: Recent verdicts against Meta and Google mark the first time juries have awarded damages ($6M and $375M) based on addictive design rather than third-party speech.
The Legal Shift Is Underway
For decades, Section 230 of the Communications Decency Act shielded platforms from liability for user-generated content. The plaintiffs in the Los Angeles case did not try to pierce that shield directly. Instead, they reframed the claim and argued that the harm arose not from content, but from architecture. Infinite scroll, autoplay, beauty filters, notification systems, algorithmic feeds — these are product design choices, not speech. And product design choices can be defective.
The jury agreed. It found that the engineering of Instagram and YouTube constituted a design defect that was a “substantial factor” in causing the plaintiff’s mental health injuries. This is product liability theory applied to software, the same legal framework used against automakers, pharmaceutical companies and tobacco manufacturers.
Thousands of similar lawsuits are pending across federal and state courts, brought by individual families, school districts and state attorneys general. A federal multi-district litigation consolidated in the Northern District of California has bellwether trials scheduled to begin in June. More than 100,000 individual arbitration claims have been filed against Meta since late 2024. Snap and TikTok settled before the trial began.
Both Meta and Google have announced appeals, and no appellate court has yet ruled on whether the design-defect theory survives Section 230. This means no binding precedent has been set for other courts to follow. But that uncertainty cuts both ways. Trial judges have repeatedly rejected motions to dismiss these cases on Section 230 grounds, and juries are now hearing the evidence. The legal question is no longer whether platforms can be held accountable for how they are designed. It is how often, by whom and for how much.
A Harm the Industry Can No Longer Deny
The debate around social media has evolved beyond abstract concerns about misinformation or polarization. It is now about documented harm, particularly among teenagers.
Over the past several years, whistleblower disclosures and internal company documents have made the problem impossible to ignore. Meta’s own research suggested that Instagram could worsen body image issues among teenage girls and was associated with increased anxiety, depression and suicidal ideation. Those findings were generated internally, discussed within the company and, in some cases, not fully disclosed publicly.
Internal documents presented at trial went further. Meta's own communications compared the platform's effects to pushing drugs and gambling. A YouTube memo described “viewer addiction” as a goal. An Instagram employee wrote that the company was staffed by “basically pushers.”
What makes this moment different is not simply that harm exists. It is that the industry can no longer credibly claim ignorance of the damage its products can do.
In both law and ethics, knowledge changes the standard: Once a company knows its product is causing harm, continuing to sell it unmodified is no longer mere negligence. It is recklessness.
The Tech-Tobacco Parallel, Properly Understood
Comparisons to the cigarette industry are often dismissed as rhetorical. They shouldn’t be.
The analogy is not that social media is identical to smoking. The mechanisms differ, creating physical addiction in one case and psychological dependence in the other. The benefits differ too; social media creates real economic and social value that cigarettes never did. Regulation will reshape the industry, not eliminate it.
But the parallel becomes meaningful when framed around incentives and awareness.
Tobacco companies were not condemned simply because their products were harmful. They were condemned because they continued to promote and optimize those products while internal evidence of harm accumulated. Social media platforms now face the same question: What happens when a company keeps optimizing for engagement after its own research shows that the engagement mechanics are causing harm?
Engagement loops, like nicotine, are not accidental. They are engineered. Infinite scroll, variable rewards and social validation mechanics are deliberate design choices, not emergent properties of neutral tools.
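To see how deliberate these mechanics are, consider a minimal sketch of a variable-ratio reward loop, the same reinforcement schedule that makes slot machines compelling. This is illustrative Python; the probability and function names are assumptions about the pattern, not any platform’s actual code:

```python
import random

# Hypothetical sketch of a variable-ratio reward schedule. The payoff
# probability and names here are illustrative assumptions only.

REWARD_PROBABILITY = 0.3  # the user cannot predict which refresh will pay off

def refresh_feed() -> str:
    """Each pull-to-refresh is a lever pull: sometimes rewarding, sometimes not."""
    if random.random() < REWARD_PROBABILITY:
        return "novel, high-salience post"  # the intermittent reward
    return "filler content"                 # the near miss that sustains checking

if __name__ == "__main__":
    for attempt in range(1, 11):
        print(f"refresh {attempt}: {refresh_feed()}")
```

The pattern matters because unpredictable, intermittent rewards produce more compulsive checking than a predictable schedule would. Choosing that schedule over a predictable alternative is an engineering decision.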
The issue is no longer whether platforms create value. It is whether they are accountable for the harms embedded in how they produce that value.
When Profit Meets Responsibility
This raises a question that goes beyond regulation and into philosophy: What is a company? Is it simply a vehicle for shareholder returns? Or does scale impose a different standard, one where impact on society becomes part of its obligation?
For decades, the prevailing view in Silicon Valley leaned toward neutrality: Platforms are tools, and users decide how to use them. But that position becomes harder to defend when those tools are explicitly designed to influence behavior and when that influence can be measured, optimized and targeted with precision.
Money, in itself, has no morality. But companies are not money. They’re collections of decisions about what to build, what to prioritize and what to ignore. The question is whether profit can remain the sole organizing principle when the negative externalities are this significant.
At scale, influence is no longer a feature. It is a form of power. And power inevitably attracts accountability.
The End of Engagement at Any Cost
If courts accept that algorithmic amplification creates foreseeable harm, the core economics of social media must change. Engagement may no longer be maximized without constraint. Product design may require built-in safeguards and friction. Transparency and auditability may become regulatory expectations, not voluntary gestures. Liability risk may extend beyond content to the recommendation systems that decide what billions of people see and in what order.
In effect, engagement stops being a pure growth metric and becomes a regulated variable. Platforms will still measure time-on-app, session length and return rates, but those numbers will need to be defended in regulatory filings, in court and in risk committees against a competing standard. That new standard is whether the mechanics generating that engagement are causing foreseeable harm, particularly to minors. Engineers will still optimize. They will just optimize against a different objective function, one that includes legal exposure and duty-of-care obligations alongside growth. The metric does not disappear. It acquires a counterweight.
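As a rough sketch of what that reframed objective could look like, here is hypothetical Python in which engagement is still scored but carries an explicit harm penalty. The weights, field names and harm estimator are assumptions for illustration, not a description of any company’s actual ranking code:

```python
from dataclasses import dataclass

@dataclass
class CohortMetrics:
    """Hypothetical per-cohort numbers a ranking team might track."""
    time_on_app_minutes: float   # average daily session time
    return_rate: float           # fraction of users returning the next day
    estimated_harm_score: float  # model-predicted risk of harm (0 to 1)

# Illustrative weights; in practice these would be set by policy and
# defended in regulatory filings, not hard-coded by an engineer.
ENGAGEMENT_WEIGHT = 1.0
HARM_PENALTY = 5.0  # the duty-of-care counterweight

def objective(m: CohortMetrics) -> float:
    """Engagement minus a harm penalty: growth is no longer the sole term."""
    engagement = ENGAGEMENT_WEIGHT * (m.time_on_app_minutes / 60 + m.return_rate)
    return engagement - HARM_PENALTY * m.estimated_harm_score

if __name__ == "__main__":
    unconstrained = CohortMetrics(90.0, 0.8, 0.4)  # high engagement, high harm
    constrained = CohortMetrics(60.0, 0.7, 0.1)    # less engagement, far less harm
    print(f"unconstrained cohort scores {objective(unconstrained):.2f}, "
          f"constrained cohort scores {objective(constrained):.2f}")
```

Under an engagement-at-any-cost regime, HARM_PENALTY is effectively zero and the first cohort wins. Once the penalty takes a positive value, the ranking flips. That inversion is the economic change the verdicts set in motion.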
The US verdicts are not happening in isolation. The European Union’s Digital Services Act already requires large platforms to offer non-personalized feeds, bans targeted advertising to minors and mandates systemic risk assessments, including for harms related to addictive design. The EU fined X €120 million in late 2025 for transparency violations.
Age-gating requirements, algorithmic accountability standards and duty-of-care frameworks are advancing across multiple jurisdictions. Together, these measures aim to verify user ages before access, force platforms to explain and audit how their recommendation systems work and impose a legal obligation to prevent foreseeable harm, particularly to minors.
What is forming is not a single regulation but a new category of expectations that treat large technology platforms less like neutral utilities and more like influential systems with corresponding obligations. This does not dismantle the business model. But it does fundamentally reshape it.
AI and the Next Phase of Big Tech
The implications do not stop at social media.
If platforms are liable for the harms caused by algorithmic amplification of existing content, the extension to AI systems that generate content is not difficult to see. Recommendation engines curate. Large language models produce. Generative systems do not merely find harmful content; they can create it on demand, personalize it to the individual and deliver it without the intervention of another human being.
If a jury can find that an infinite scroll feature constitutes a design defect, the legal theory is available to evaluate whether an AI system that produces misinformation, manipulative persuasion or psychologically harmful outputs is itself defectively designed. The EU is already moving in this direction, requiring large platforms to assess and mitigate risks arising from generative AI specifically.
AI companies would be wise to study the social media litigation record not as a curiosity, but as a preview of the regulatory framework they will inherit.
What Will Come Next?
None of this will come easily. The companies at the center of this shift have built some of the most successful businesses in history. Their economic models, their valuations and their strategic positions all depend on maintaining high levels of engagement and growth. They will resist changes that threaten those foundations: in the courts, in Washington and Brussels and in how they design and roll out their products. That’s not surprising. It’s rational.
But history suggests that once a regulatory and societal shift reaches critical mass, resistance does not stop it; it only shapes how it unfolds.
We’re moving from an era where platforms could plausibly claim neutrality to one where the public expects accountability. The harms, especially to younger users, have been too visible, too persistent and too well-documented to ignore.
The tobacco industry did not disappear. It adapted. So will tech. The only question is whether the adaptation comes from its own leadership or from litigation.
