After the Meta Verdict, What Should Social Media Look Like?

Social media has traditionally treated attention as currency, but the recent Meta verdict shows that model won’t last forever. To have a sustainable future, these companies have to prioritize safety over engagement.

Written by Natalie Boll
Published on Apr. 09, 2026
Reviewed by Seth Wilson | Apr 08, 2026
Summary: Recent court rulings in New Mexico and Los Angeles mark a shift in tech liability. Juries found that Meta’s and YouTube’s addictive designs harmed users and that Meta deceived consumers about its platforms’ safety. Meta faces $375 million in penalties, which means social media platforms will have to rethink their engagement models going forward.

Two recent landmark legal decisions in New Mexico and Los Angeles against Meta and YouTube have sent a shockwave through the technology industry. For years, concerns around social media’s impact have been debated in classrooms, homes and policy circles. But historically, technology companies have largely been shielded from liability under Section 230 of the Communications Decency Act, a U.S. law that protects platforms from being treated as the publisher of user-generated content. Now, for the first time, platforms are being held accountable for the consequences of their design. 

The outcomes of these cases signal a shift. Courts are beginning to recognize that the harms associated with social media are not incidental. They are, at least in part, a product of how these platforms are built.

What Is the Future of Social Media?

After the recent judgments against Meta and YouTube, the future of social media may move away from the current attention-as-currency model toward systems designed for human well-being. 

  • Shifting Economic Models: Moving from behavioral advertising, which incentivizes addiction, toward membership or direct user-supported models that reward utility over clicks.

  • Safety-First Design: Creating dedicated digital spaces for younger users where discovery is intentional rather than algorithmically aggressive and content is moderated to the highest safety standards.

  • Accountability for Architecture: A transition from policing content to holding companies liable for design choices, such as People You May Know features or unverified accounts that can be exploited by bad actors.

  • Prioritizing Well-Being: Rethinking platform incentives to optimize for meaningful interaction rather than maximizing time on site and emotional intensity.

More on the Meta Verdict: Is Tech Facing Its Big Tobacco Moment?

 

Designing for Addiction

In the Los Angeles case against Meta and YouTube, the focus was addictive design. Evidence presented showed that platform features were intentionally optimized to maximize engagement. Internal documents revealed a deep understanding of how design choices influenced user behavior, particularly among younger users. The jury acknowledged that, while companies may not have set out with the explicit goal of causing harm, they created systems that made disengagement extremely difficult.

The case also examined how recommendation systems can direct users toward harmful content, including material that promotes self-harm. The court considered whether algorithmic amplification, driven by engagement metrics, contributed to real-world consequences.

In the case brought by the State of New Mexico, a jury found Meta Platforms liable under the state’s Unfair Practices Act, concluding that the company misled consumers about the safety of its platforms. At the heart of the case was a fundamental question: Did Meta’s public assurances about safety reflect the realities of how its platforms are designed and operate?

Evidence presented at trial included internal company documents and testimony indicating that executives and employees were aware of risks, including exposure to predators and harmful content. The jury ultimately found that Meta’s representations were deceptive and that elements of its platform design contributed to those risks.

The jury awarded $375 million in civil penalties. Meta has said it will appeal the ruling.

 

An End to Attention as Currency

These decisions are significant. But they are also complex.

It is tempting, in moments like this, to call immediately for more regulation. And regulation will undoubtedly play a role. But when regulation is created out of fear, it often produces unintended consequences. Overcorrection can lead to censorship, stifle innovation or fail to address the root cause of the problem.

To understand what comes next, we need to step back and look at the system as a whole.

Social media is not just a collection of apps. It is an ecosystem built on a specific economic model: attention as currency.

Every action you take on a platform is tracked. The post you linger on. The video you almost watch but scroll past. The message you type and delete. The system collects and analyzes these signals, using them to build an increasingly precise profile of your behavior.

The company then sells that profile to advertisers. This is the most powerful advertising engine in history. And it works because the system keeps you there.

If attention is the product, then time spent on site is the metric that matters most. Product teams are therefore incentivized to design systems that maximize engagement. Features are continuously refined to reduce friction, increase return visits and deepen immersion.
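To make that incentive concrete, here is a deliberately simplified sketch in Python. Everything in it, from the `BehaviorProfile` structure to the `affinity` formula, is hypothetical and for illustration only; real ranking systems weigh thousands of signals. But the logic it encodes is the same: rank whatever holds you longest.

```python
from dataclasses import dataclass, field

@dataclass
class BehaviorProfile:
    """Toy stand-in for the behavioral profile described above.

    Hypothetical structure for illustration; real platforms track
    thousands of signals, not two.
    """
    dwell_seconds: dict = field(default_factory=dict)  # topic -> total linger time
    skips: dict = field(default_factory=dict)          # topic -> scroll-past count

    def record_dwell(self, topic: str, seconds: float) -> None:
        self.dwell_seconds[topic] = self.dwell_seconds.get(topic, 0.0) + seconds

    def record_skip(self, topic: str) -> None:
        self.skips[topic] = self.skips.get(topic, 0) + 1

    def affinity(self, topic: str) -> float:
        # Crude affinity score: time spent, discounted by how often
        # the user scrolls past this topic.
        return self.dwell_seconds.get(topic, 0.0) / (1 + self.skips.get(topic, 0))

def rank_feed(posts: list[dict], profile: BehaviorProfile) -> list[dict]:
    """Order candidate posts by predicted engagement, not by quality."""
    return sorted(posts, key=lambda p: profile.affinity(p["topic"]), reverse=True)

# Usage: the more you linger on a topic, the more of it you are shown.
profile = BehaviorProfile()
profile.record_dwell("outrage", 45.0)  # lingered on an angry post
profile.record_skip("news")            # scrolled past a news item
feed = rank_feed([{"id": 1, "topic": "news"}, {"id": 2, "topic": "outrage"}], profile)
print([p["id"] for p in feed])         # -> [2, 1]
```

Note what the sketch never measures: whether the user felt better or worse afterward. Time spent is the only signal the ranking sees.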

The unintended consequence of structuring the system in this way is addiction.

This was a key finding in the Meta case. It’s not addiction in the clinical sense alone, but in a structural one: the company built a system designed to be difficult to leave.

But design alone does not explain everything. Content plays a critical role.

Algorithms have learned that content that evokes strong emotional reactions performs best. Anger, outrage, fear and desire drive more engagement. These signals are then fed back into the system, reinforcing the types of content that keep users scrolling.

Even the now ubiquitous “hook” is a direct response to this system. A hook is the brief, attention-grabbing opening of a video, like “This will change how you think about…” or “You’ve been doing this wrong your whole life,” built to stop the scroll. Creators have learned to capture a viewer’s attention immediately, often by triggering urgency or anxiety, because the algorithm rewards doing so. This creates a feedback loop in which the most extreme or emotionally charged content rises to the top of users’ feeds, because those are the signals that optimize for engagement. The result is a system that systematically amplifies emotional intensity over accuracy, nuance and well-being.
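The feedback loop itself fits in a few lines. The simulation below is a toy model with invented numbers (the `intensity` values and the update rule are assumptions, not drawn from any real system): two posts start with identical engagement, but the one that converts attention into reaction more efficiently compounds its advantage every round.

```python
def simulate_feedback(items: list[dict], rounds: int = 5, rate: float = 0.5) -> list[dict]:
    """Toy model of engagement-driven amplification.

    Each round, exposure is allocated in proportion to engagement,
    and emotionally intense items convert exposure into yet more
    engagement. Purely illustrative.
    """
    for _ in range(rounds):
        total = sum(item["engagement"] for item in items)
        for item in items:
            exposure = item["engagement"] / total  # share of the feed this round
            # Intense content turns attention into reaction more efficiently.
            item["engagement"] += rate * exposure * item["intensity"]
    return sorted(items, key=lambda i: i["engagement"], reverse=True)

items = [
    {"name": "nuanced explainer", "intensity": 0.2, "engagement": 1.0},
    {"name": "outrage clip",      "intensity": 0.9, "engagement": 1.0},
]
for item in simulate_feedback(items):
    print(f'{item["name"]}: {item["engagement"]:.2f}')
# The outrage clip pulls ahead every round, despite the equal start.
```

Nothing in the loop prefers outrage by design. It simply rewards whatever reacts fastest, and emotional intensity reacts fastest.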

Consider how drivers instinctively slow down to observe a car accident. This response is rooted in human attention to potential threats. When translated into algorithmic systems, that same signal becomes a data point, one that is learned from, optimized, and increasingly surfaced. The result is a feed that begins to resemble a constant stream of car crashes.

 

Engagement Entails Risk

Layered onto this is another issue: the presence of bots and fake accounts. These accounts can artificially inflate engagement, manipulate conversations and, in some cases, actively provoke conflict to generate more interaction. More engagement means more data. More data means more precise targeting. And more targeting means more revenue.

In the New Mexico case, the court also examined how platform features can be exploited for harm, particularly by bad actors. Evidence presented at trial showed that gaps in safeguards and oversight allowed harmful interactions to occur, including contact between minors and adults outside their trusted networks.

Features like “People You May Know” on Facebook can unintentionally connect vulnerable users like children with individuals outside their trusted networks. The ability to create accounts without robust verification allows for anonymity at scale. Private messaging systems and unmoderated groups can become environments where harmful interactions occur without oversight.

Individually, each of these features may seem benign. But together, they create conditions where exploitation becomes possible. The risk of exploitation on a massive scale is where the industry faces a fundamental dilemma.

Many of the features that drive engagement also increase risk. And because companies are accountable to shareholders, they’re incentivized to prioritize growth and revenue above all else. Without clear incentives to do otherwise, safety measures that reduce engagement can be difficult to justify internally.

More in Online Safety: Assume Everyone Is AI Until Proven Otherwise

 

Regulation Isn’t Enough; We Have to Design for Safety

So, where do we go from here?

The answer isn’t as simple as making platforms legally liable for every piece of content on their sites. That path risks overly restricting speech and constraining innovation. Nor is it sufficient to layer regulations onto the existing model, which was never designed with well-being in mind.

What these cases reveal is a deeper issue: The current model itself may be the problem.

If we continue to build platforms that rely on behavioral advertising, we will continue to optimize for attention. And if we optimize for attention, we will continue to encounter the same outcomes, regardless of how many safeguards we attempt to add later.

Instead, we should be asking a different question: What would social media look like if it were designed from the beginning to maximize human well-being rather than engagement?

One path forward is the creation of dedicated digital spaces designed specifically for younger users. Not as an afterthought or a modified version of adult platforms, but as environments built with entirely different principles.

In these spaces, content would be moderated to the standard of the youngest participant. Discovery would be intentional, not algorithmically aggressive. Features that enable anonymity without accountability would be limited. And monetization would not depend on capturing and reselling attention.

This would require moving away from ad-based models, where platforms are incentivized to increase engagement at all costs, toward alternatives such as membership or direct user-supported models. These models shift the underlying incentives: instead of optimizing for clicks, views and time spent, platforms are rewarded for creating environments people choose to return to because they are meaningful, useful or supportive.

It would also require a shift in mindset. For years, the dominant approach to building platforms has treated growth at all costs as the goal. These cases challenge that assumption. They suggest that design choices have consequences, and that those consequences can no longer be ignored.

The future of social media will not be determined by courts alone. It will be shaped by the decisions we make now, as builders, investors, regulators and users.

We’re at an inflection point. We can continue to refine a system that was never designed for our well-being. Or we can take this moment as an opportunity to rethink it entirely.

The question is not whether social media will evolve. It is whether we are willing to change what we are optimizing for.
