Major social media companies provide their users with tools and an environment to connect with each other and spread messages. Given the problems social media has faced recently, including misinformation and hate speech, some have argued for a range of solutions: environmental changes, algorithmic tweaks, and better enforcement of terms of service and community guidelines. But what if the way we conceive of social media is wildly off base?
The language we use to describe major social media companies like Facebook, Twitter, YouTube and TikTok significantly impacts how we think about the responsibilities these companies have to the public. Do we think of social media as publishing the content they feature, which entails lots of responsibility? Or do we merely see them as conduits for user-generated content with very little responsibility for what appears on the sites?
Language matters. How we classify any issue dramatically alters the next steps we take. Take the phrase “death tax,” for example. In April of 2017, an Ipsos/NPR poll found that 66 percent of Americans opposed an estate tax. Meanwhile, a much larger percentage, 78 percent, opposed a death tax. The only problem? These two taxes are exactly the same. The only change is in language, but that subtle tweak massively alters how people think of the tax. Changing the language we use to describe a given issue changes our response to it. Social media needs a change in language, and, by extension, a change in approach.
Social Media Is a Springboard
What Makes a Platform?
Right now, we describe major social media companies as “platforms.” But are Facebook, Twitter, YouTube and TikTok really platforms? I don’t think so, and here’s why. According to the legal definition, a “social media platform means an application or website through which users are able to create and share content and find and connect with other users.” The term platform connotes neutrality.
Social media companies, by their own admission, are not neutral. In January of this year, Adam Mosseri, head of Instagram, tweeted: “We’re not neutral. No platform is neutral, we all have values and those values influence the decisions we make.”
He may argue that platforms aren’t neutral entities, but the word “platform” itself creates the perception of neutrality. The term suggests that these companies merely provide an environment in which speech can flow. In reality, however, social media companies set the current that dictates the flow.
This problem has roots dating back to the internet’s widespread adoption. We conceived of the early web as an intermediary for communication, a view that made its way into Section 230 of the Communications Decency Act of 1996. That provision states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” It has been interpreted to mean that social media companies have the right to moderate content but not the responsibility to do so, with certain exceptions.
Social media companies typically do not act in the manner that traditional publishers do. In fact, a publisher and a social media company tend to take the exact opposite approach to making content available. A publisher filters its content through an editorial process, which often involves vetting and fact-checking, before making it available.
A social media company, by contrast, makes content publicly available first, aside from some immediate automated filtering. Only then is the material vetted through moderation and user reporting. Classifying social media companies as publishers would have subjected them to a panoply of legal action, given the liability placed on publishers for defamatory content: “[A] person who publishes a defamatory statement by another bears the same liability for the statement as if he or she had initially created it. Thus, a book publisher or a newspaper publisher can be held liable for anything that appears within its pages. The theory behind this ‘publisher’ liability is that a publisher has the knowledge, opportunity, and ability to exercise editorial control over the content of its publications.”
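To make the difference in ordering concrete, here is a minimal sketch of the two flows in Python. The function names (vet, auto_filter, moderate_later) are hypothetical placeholders for illustration, not any real company’s pipeline.

```python
# Minimal sketch of the two orderings. Every function name here is a
# hypothetical placeholder, not any real company's actual pipeline.

def publisher_flow(manuscript, vet, publish):
    """A traditional publisher vets content before it reaches the public."""
    if vet(manuscript):        # editorial review, fact-checking
        publish(manuscript)    # only approved material is released
    # rejected material is never published at all


def social_media_flow(post, auto_filter, publish, moderate_later):
    """A social media company publishes first, then moderates."""
    if auto_filter(post):      # immediate automated screening only
        publish(post)          # the post is public right away
        moderate_later(post)   # human moderation and user reports come afterward
```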
But what if social media companies occupy an in-between status, with less responsibility than a publisher but more than a mere platform? For one thing, we shouldn’t call social media a platform when it doesn’t operate with the characteristics we generally associate with that term.
The Springboard Model
Social media isn’t a platform. Rather, it’s a springboard. The policies and algorithmic decisions of social media companies are the springs that determine what content is seen and what goes unseen, either elevating it to viral status or consigning it to the dustbin of history. A springboard falls between the limited responsibility assigned to technology that serves as a conduit or intermediary for communication, such as a phone company, and the high level of responsibility we apply to publishers.
As a springboard, social media companies actively influence the behavior of users and the content they inject into the information ecosystem. Although the average person may already think of a social media company’s algorithm as a springboard that elevates the popularity of content, debates on this topic tend to focus on the content itself and why it becomes popular. Left unexamined is why a company created the structure that allowed the content to go viral. That’s a major difference in orientation.
If users are cognizant of how important an algorithm is to the popularity of their posts, and they have a general understanding of what type of posts will be successful, that very awareness shapes how they act. According to researchers at the Leibniz Institute for the Social Sciences in Mannheim, Germany, “algorithms that pervade our societies influence individual and group behaviour in many ways — meaning that any observations [that researchers make from how people act on social media] describe not just human behavior, but also the effects of algorithms on how people behave.” In other words, how people communicate on social media and what content becomes popular is not the result of natural human behavior but is instead heavily influenced by algorithms. That’s a lot of power.
Social media companies make active value judgments when tweaking their algorithms. These judgments greatly influence which content users see and share. These companies aren’t hands-off platforms; they’re hands-on springboards.
The difficulty for users is that it’s hard to see how each message passes through a complex filtration process that determines who does and doesn’t see it. An algorithm can feel like an objective arbiter of popularity, which gives users a false sense of neutrality, but that algorithm was built from subjective judgments about what should be popular.
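As a rough illustration of that point, consider a toy feed-ranking sketch in Python. The signals and weights below are entirely hypothetical; real ranking systems are far more complex and are not public. The only point is that someone has to choose the weights.

```python
# Toy feed-ranking sketch. The weights are hypothetical value judgments;
# real platform ranking systems are far more complex and not publicly known.

from dataclasses import dataclass


@dataclass
class Post:
    likes: int
    shares: int
    comments: int
    watch_seconds: float


# Each weight is a subjective choice about what "should" be popular.
WEIGHTS = {
    "likes": 1.0,
    "shares": 5.0,          # privileging shares rewards content that spreads fast
    "comments": 3.0,        # privileging comments can reward contentious posts
    "watch_seconds": 0.1,   # privileging watch time rewards long, gripping videos
}


def score(post: Post) -> float:
    """Combine engagement signals into a single ranking score."""
    return (WEIGHTS["likes"] * post.likes
            + WEIGHTS["shares"] * post.shares
            + WEIGHTS["comments"] * post.comments
            + WEIGHTS["watch_seconds"] * post.watch_seconds)


def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by score; changing WEIGHTS changes what rises to the top."""
    return sorted(posts, key=score, reverse=True)
```

Change any one of those weights and a different set of posts rises to the top of the feed, which is exactly the sense in which the ranking reflects subjective judgments rather than a neutral measurement of popularity.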
Changing How We Think
Why does this matter? The problem is that our traditional understanding of social media as a platform separates the actions of a user from those of the company. A platform, by its very definition, doesn’t influence the type of content it features or its popularity. Instead, it merely gives users access to a much larger audience than they could otherwise attract. Although a platform provider bears some responsibility for deciding whether to let speakers broadcast their messages at all, we generally don’t hold the platform accountable for users’ actions.
Our conception of how social media companies operate has been modeled on the concept of a public square, which is part of the problem. Private companies shouldn’t run public squares; the public, through our governmental structure, controls what happens in them. When we think of social media companies this way, it is easy to see why people incorrectly believe that content moderation infringes on their First Amendment rights: we end up conflating social media companies with the government. On some level, the misconception makes sense, since the government does face First Amendment constraints on how it can restrict speech in a real public square.
Social media companies, by contrast, are private entities that are not violating the First Amendment through the decisions they make regarding content. As Lata Nott, the executive director of the First Amendment Center, succinctly puts it: “The First Amendment protects individuals from government censorship. Social media platforms are private companies and can censor what people post on their websites as they see fit.” But given that we regard social media as a digital public square, it is no wonder the general public often thinks social media companies are violating their First Amendment rights. Indeed, if social media really were a public square, its moderation would be constrained by the First Amendment.
If we visualize social media as a public square, we may picture every speaker standing on a proverbial soapbox to deliver their message. The soapbox is the platform, with a potential audience made up of the people in the public square. Under this model, the popularity of the message is determined by the persuasiveness of the speaker: the people in the square vote with their feet, drawn in or driven away by what they hear. The message arrives unfiltered, and its popularity (or lack thereof) emerges through a rather organic process.
That’s not how social media works.
Changing how we talk about social media affects how we think about these companies’ responsibility for the communication they facilitate. If we view social media as a platform, our primary focus is on deterring abusive or inappropriate behavior by users. By contrast, viewing social media’s impact as the result of the company’s actions instead of those of independent users would dramatically alter the level of responsibility we demand from the company.
Right now, the conversation about improving social media focuses on user behavior, which makes sense when we call these companies platforms. By thinking of them as springboards, however, we can aim the conversation about building a better information ecosystem where it should be: at fixing the underlying structure of how content spreads.