Social Media Needs Better Governance to Survive

As social media companies increasingly take on the role of free speech arbiters, they need to ensure their policies offer transparency and the types of checks and balances government relies on.

Written by David Ryan Polgar
Published on Jul. 13, 2022

Social media companies don’t want to moderate speech. They want to show you ads. Mark Zuckerberg famously told Senator Orrin Hatch as much in 2018 when asked how Facebook makes money. But as advertisers and the people populating these ad-based platforms began to demand an online experience more heavily moderated to curb abusive behavior, platforms like Facebook were thrust into a quasi-governmental role. That’s a huge expansion in responsibilities for a company that just wants to show users ads. Predictably, the transition hasn’t been smooth.

Mark Zuckerberg recently made his third U.S. Congressional visit since July. The hearing, which also included Twitter’s Jack Dorsey, was titled “Breaking the News: Censorship, Suppression and the 2020 Election.” Like previous hearings, it produced plenty of zingers but very little progress toward defining the desired role of social media platforms with regard to speech. Without the prospect of meaningful regulation that would shift power back to governmental bodies, social media companies are starting to replicate the very features of government: creating laws (terms of service), enforcing them (content moderation) and interpreting their meaning (Facebook’s Oversight Board). It’s no accident that both Congress and social media trust and safety teams are filled with lawyers and policy experts. And, like Congress, social media companies are now forced to make uncomfortable decisions around thorny social issues, decisions that involve significant trade-offs and competing values.

Today, social media companies are caught between a rock and a hard place when it comes to content moderation. Every decision to limit misinformation, hate speech and trollish behavior draws accusations of censorship. At the same time, every decision to allow offensive speech in the name of free expression draws accusations of negligence. Every decision is simultaneously wrong and right, depending on which corner of the web you consult. Some users tell social media companies to exercise more power, while others insist the companies lack the moral authority to wield that power at all. “We are facing something that feels impossible,” said Jack Dorsey at this week’s Senate hearing. “We are required to help increase the health of the public conversation while at the same time ensuring that as many people as possible can participate.” Rock, meet hard place.

This no-win situation emerged as the power to dictate what is and isn’t acceptable speech has largely shifted from governmental bodies to social media companies, companies with an eye toward scaling and a blind spot toward their impact on fundamental civil rights. The transition was more accidental than intentional; no platform seeking to maximize shareholder wealth would willingly assume the contentious and costly role of determining allowable speech. Worse, although social media companies have assumed the significant power of determining appropriate speech, they didn’t (and largely still don’t) have the structures in place to carry out this work responsibly. Trust and safety departments, which focus on reducing abusive behavior, misinformation, deepfakes, hate speech and the many other ways that platforms can be misused and weaponized, have become crucial. They have gone from being an afterthought to serving a central function at every major platform. “When we started the company, we weren’t thinking about this at all,” said Dorsey in 2018. “We were thinking about building something that we wanted to use.” But as Twitter has evolved from the self-described “free-speech wing of the free-speech party” into a platform that balances the trade-offs among safety, privacy and free speech, the company has had to massively adjust.

Although we might blithely use the phrase “public square” to describe platforms like Twitter, actual public squares, and the speech allowed in them, rest on hundreds of years of legal precedent that have shaped what “free expression” actually entails. There are also significant checks and balances among the legislative, executive and judicial branches of government that guide speech decisions. For example, laws regarding speech are subject to judicial review and are generally struck down if a court determines them to be arbitrary and capricious. This process makes sense because it aims to prevent unfair treatment of citizens by eliminating vague laws that could be selectively enforced to censor specific groups while going unenforced for others. Boil down most of the criticism directed at social media platforms regarding speech, and it typically revolves around the belief that their enforcement of rules exhibits exactly the arbitrary and capricious nature laws must avoid. “At what point do those rule violators get suspended? Regular, everyday people get booted off the platform and get their accounts suspended for much less,” said Erin Shields, the national field organizer at MediaJustice, speaking to Protocol about how platforms appear to treat people unevenly by giving more leeway to well-known individuals. Social media companies are torn between making business decisions and judicial decisions, and the general public is demanding more transparency and uniformity.

Though the issue of speech regulation on social media may feel new, American society has always fought over allowable speech. Consider George Carlin’s routine “Seven Words You Can Never Say on Television,” which led to the Supreme Court case FCC v. Pacifica Foundation. The case turned on the subtle distinctions among decency, indecency and obscenity, and on what role the government should play in regulating them. Determining the nuanced contours of allowable speech has always been, and will continue to be, a messy fight. “Freedom of speech” has never meant the total absence of speech regulation; it has always meant an ongoing argument over which types of speech should be protected and which prohibited. Now, though, that fight is happening in the court of public opinion over the role of social media in speech moderation rather than in our judicial courts over the role of government.


Where Do We Go From Here?

Social media companies are bound to mirror our democratic institutions. We are entering what I like to call a period of “Social Media Democracy.” Given the consequences these companies’ decisions have for our fundamental civil rights, they will continue to evolve in ways that mimic our governmental bodies. Democracies value transparency, even-handedness and adequate checks and balances to prevent the abuse of power, values that social media companies will have to take up so long as they retain significant power over determining appropriate speech. And while there has been a tremendous amount of discussion around modifying Section 230, which would alter how platforms moderate speech, that change is unlikely to happen in the short term.

The antithesis of democracy is a single person or group acting as judge, jury and executioner. Typically, businesses in the United States are given tremendous leeway in how they make decisions. But since social media companies have become intertwined with the debate around speech, it will become increasingly necessary for platforms to set up structures reminiscent of our legislative, executive and judicial branches, along with clear separations of power and checks and balances.

Right now, social media companies “make laws,” but what role should the public have in crafting them? Will the public be able to vote for individuals who act as social media lawmakers? Likewise, social media companies enforce their “laws” (terms of service, community guidelines). In a democracy, you would never want a situation in which someone could be punished without knowing exactly what law she violated. Social media platforms will need to ensure that their laws are enforced only when their “citizens” are clearly informed of what rule they violated and how they can seek recourse. Lastly, as we have seen with Facebook’s Oversight Board, every major social media company will have to offer people a process to appeal its speech decisions and make that process transparent enough for the public to gain insight into allowable versus unallowable speech.
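To make that requirement concrete, here is a minimal sketch, in Python, of what a “citation-required” enforcement record could look like. Every name in it (RULE_BOOK, EnforcementAction, the rule IDs) is hypothetical, not any platform’s real system; the point is purely structural: the record cannot exist without naming a published rule, and the notice to the user always includes a recourse path.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical catalog of a platform's published "laws" (terms of service).
RULE_BOOK = {
    "TOS-4.2": "No targeted harassment of another user.",
    "TOS-7.1": "No coordinated misinformation about election procedures.",
}

@dataclass
class EnforcementAction:
    """One moderation decision, modeled like a legal citation: no penalty
    can be recorded without naming the exact rule violated, and the notice
    to the user always includes an appeal path."""
    user_id: str
    rule_id: str        # which "law" was broken, e.g. "TOS-4.2"
    penalty: str        # e.g. "7-day suspension"
    evidence_url: str   # link to the offending content
    appeal_deadline_days: int = 14
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def __post_init__(self):
        # Enforcement fails loudly if the cited rule doesn't exist:
        # unwritten or vague rules can't be applied, by construction.
        if self.rule_id not in RULE_BOOK:
            raise ValueError(
                f"Unknown rule {self.rule_id}: every action must cite "
                "a published rule.")

    def notice(self) -> str:
        """The notice the 'citizen' receives: rule text, penalty,
        evidence and recourse, in plain language."""
        return (f"Rule violated ({self.rule_id}): {RULE_BOOK[self.rule_id]}\n"
                f"Penalty: {self.penalty}\n"
                f"Evidence: {self.evidence_url}\n"
                f"You may appeal within {self.appeal_deadline_days} days.")

action = EnforcementAction(
    user_id="u123",
    rule_id="TOS-4.2",
    penalty="7-day suspension",
    evidence_url="https://example.com/post/456")
print(action.notice())
```

Publishing such records in a queryable transparency log, with personal details redacted, is one way a platform could give the public the insight into allowable versus unallowable speech that an appeals body alone only begins to provide.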

We have learned in recent years that social media companies have a major impact on our democracy. Moving forward, their structures will come to reflect our democratic institutions more and more.
