Updated by Rose Velazquez | Sep 08, 2022

A censorship narrative has been brewing around the concept of shadowbans, the term for when a user is blocked on a social platform without their knowing.

In the past few years, the word shadowban has taken on a life of its own, evolving from a signifier of a specific moderation technique to shorthand for anything from actual downranking to unfounded conspiracy theories about Silicon Valley types trying to suppress conservative voices.

“‘Shadowbanning’ sounds quite nefarious, and I think that is part of its success in the public discourse,” said Stephen Barnard, an associate professor at St. Lawrence University whose research focuses on the role of media and technology in fostering social change. “It has this sense of a faceless entity, and it’s conspiratorial. It contains a more or less explicit assertion that these liberal tech executives from California are censoring us and conspiring to force their progressive agenda throughout American politics.”

The ongoing controversy — and lack of transparency — surrounding shadowbans points to a tension inherent to any attempt at building a global community: Most of us want some kind of moderation, but opinions differ widely as to where lines should be drawn.


What Are Shadowbans?

A moderation technique first popularized in bulletin boards and early web forums, shadowbans block users or individual pieces of content without letting the offending user know they’ve been blocked. To a shadowbanned user, the site continues to function as normal — they can still make posts and engage with other people’s posts — but to others, the user appears to have gone silent.


While an explicitly banned user is likely to create a new account and keep posting, a shadowbanned user might conclude that other people just don’t care what they have to say. Over time, the thinking goes, they will lose interest and go away.

In that sense, shadowbans are just a technical implementation of a strategy long employed by forum users — “Don’t feed the troll!” — with the added benefit of not relying on users to exercise restraint.
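
For the technically inclined, the core mechanic fits in a few lines of code. The sketch below is purely illustrative — the Post structure, the moderation list and the function names are invented rather than drawn from any real forum's code — but it captures the essential trick: the visibility filter exempts the author, so a shadowbanned user still sees their own posts while everyone else sees silence.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

# Hypothetical moderation state: usernames that have been shadowbanned.
shadowbanned = {"troll42"}

def visible_posts(all_posts, viewer):
    """Return the posts a given viewer should see.

    A shadowbanned author's posts are hidden from everyone EXCEPT the
    author, so to them the forum appears to work normally.
    """
    return [
        p for p in all_posts
        if p.author not in shadowbanned or p.author == viewer
    ]

posts = [Post("alice", "Welcome!"), Post("troll42", "Flame bait")]

print([p.text for p in visible_posts(posts, "alice")])    # ['Welcome!']
print([p.text for p in visible_posts(posts, "troll42")])  # ['Welcome!', 'Flame bait']
```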

By and large, social media companies don’t recognize shadowbanning as an official practice. They typically point to ranking systems, changing algorithms or technical errors in response to claims that certain types of content or content from particular users is being secretly muted.

In an August 2022 podcast interview, Meta CEO and Facebook founder Mark Zuckerberg said Facebook doesn’t have a shadowbanning policy, but acknowledged the term likely refers to “demotions,” in which a post is shown to fewer users because it has been flagged for some reason, such as containing misinformation or being harmful.

This sort of downranking is essential to preventing the spread of misinformation, and offering too much transparency about automated moderation systems will make it easier for bad actors to circumvent them. At the same time, any secretive, large-scale moderation system is bound to cause frustrations.

On its myth-debunking FAQ page, Twitter says outright, “Simply put, we don’t shadow ban! Ever.” The page also points to a 2018 blog post explaining that the company doesn’t shadowban on the basis of “political viewpoints or ideology,” though it does rank tweets and search results “for timely relevance.”

That means users are more likely to see content from people they’re interested in or that’s gaining popularity and being widely shared. The practice is also intended to “address bad-faith actors who intend to manipulate or detract from healthy conversation” by ranking their content lower.

Direct responses from social media companies to shadowbanning allegations haven’t put a stop to the growing discourse. A search of TikTok, for example, reveals more than 22 billion views for videos with the hashtag “shadowbanned” and upwards of 3 billion views for the “shadowban” hashtag.


History of Shadowbanning

Duane Roelands moderated a message board called Quartz BBS, hosted on a Rutgers University server, in the late 1980s and early 1990s. Predating the modern web, Quartz BBS was essentially a collection of chat rooms dedicated to specific topics, ranging from jokes to television shows and political debates. The text-only rooms supported 10 concurrent users and displayed posts in pure chronological order, capping out at 200 posts; once a room hit that limit, the oldest messages were automatically deleted.
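
That post cap behaves like a fixed-size ring buffer. In modern Python terms — purely as an analogy, not a claim about how Quartz was actually implemented — it is a bounded queue:

```python
from collections import deque

# A bounded deque discards its oldest item once full, mirroring how a
# Quartz BBS room dropped old messages after the 200-post cap was hit.
room = deque(maxlen=200)
for i in range(250):
    room.append(f"post #{i}")

print(len(room), room[0])  # 200 post #50
```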

To keep debates from getting needlessly ugly, Roelands and his fellow moderators would shadowban users, either temporarily or permanently, depending on the offense.

“Offensive behavior could be anything from simply being constantly abrasive and obnoxious, to being disruptive in a room devoted to serious topics like sexual orientation, gender identity or politics,” he said. “Our behavioral guidelines basically boiled down to, ‘Don’t be a jerk.’” 

On traditional message boards, shadowbans are a clear-cut proposition: Either your posts are blocked or they aren’t. And that approach makes sense when posts are served up chronologically. But modern social networks — which usually serve up content through algorithmically curated feeds — can achieve similar results through subtler means that limit certain users’ reach without blocking their content entirely.
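
In code, the distinction is essentially a filter versus a score penalty. The following sketch rests on invented assumptions — a single relevance score per post and a flat multiplier for flagged accounts — while real ranking systems weigh hundreds of signals. Still, it shows why reach limiting is harder to detect than a classic ban: penalized posts can still surface, they just need a much higher baseline score to do so.

```python
def rank_feed(posts, penalized_accounts, penalty=0.1):
    """Order posts by score, quietly scaling down flagged authors.

    Unlike a binary shadowban, a penalized post is not removed; it is
    simply outcompeted by ordinary content most of the time.
    """
    def effective_score(post):
        score = post["score"]
        if post["author"] in penalized_accounts:
            score *= penalty  # downrank rather than remove
        return score

    return sorted(posts, key=effective_score, reverse=True)

feed = [
    {"author": "alice", "score": 5.0, "text": "cat pic"},
    {"author": "spammer", "score": 40.0, "text": "tag a friend!"},
]

# The spammer's post still exists, but 40.0 * 0.1 = 4.0 now ranks below 5.0.
print([p["author"] for p in rank_feed(feed, {"spammer"})])  # ['alice', 'spammer']
```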

One approach is to exclude a user’s posts from discoverability features. In 2017, a number of photographers, bloggers and influencers noticed substantial drops in engagement with their Instagram posts. At the time, “more than a dozen” Instagram users told tech reporter Taylor Lorenz that shadowbanned users’ posts weren’t showing up “in hashtag searches or on the Instagram Search & Explore tab.”

Instagram didn’t tell these users what they’d done wrong, but Lorenz pointed to spammy hashtag usage and unauthorized automation tools as behaviors that likely triggered the changes. 

From a technical perspective, the bans Lorenz reported look quite different from traditional shadowbans, but they share important similarities. The strategy targeted accounts according to undisclosed criteria, and aside from a sudden drop in reach, users had no way to find out why — or even whether — they’d been affected by the platform’s decisions.
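
Mechanically, the behavior those users described amounts to a surface-specific filter: posts remain in followers’ home feeds but are dropped from search and discovery surfaces. A minimal sketch, with an invented exclusion list standing in for whatever criteria Instagram actually used:

```python
posts = [
    {"id": 1, "author": "photographer", "hashtags": {"sunset"}},
    {"id": 2, "author": "flagged_acct", "hashtags": {"sunset"}},
]

# Hypothetical list of accounts excluded from discovery surfaces.
excluded_from_discovery = {"flagged_acct"}

def hashtag_search(tag):
    """Search results silently omit excluded accounts, even though their
    posts still appear normally in their own followers' feeds."""
    return [
        p["id"] for p in posts
        if tag in p["hashtags"] and p["author"] not in excluded_from_discovery
    ]

print(hashtag_search("sunset"))  # [1]: post 2 exists but can't be discovered
```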

“People would always find a way to follow the letter of the law while violating the spirit of the law.”

That secrecy seems to be the only real throughline among the techniques — real and imagined — that users refer to when they talk about shadowbans.

“We figured out early on that if you clearly defined what was acceptable behavior and what was not, people would always find a way to follow the letter of the law while violating the spirit of the law,” Roelands said. “This is a behavior that has persisted online to this day, and there’s never really been a good solution for it.”


Why Shadowbans Are Controversial

Since at least 2018, conservative politicians have been calling out shadowbanning as a targeted way to suppress political content that Silicon Valley types disagree with. Accusations surged again in 2021, when Facebook took the once unthinkable step of banning the world’s most powerful elected leader from its namesake platform and subsidiary Instagram after a violent mob of Trump supporters stormed the U.S. Capitol. Platforms like Twitter, YouTube, Reddit, Twitch, TikTok, Snapchat and Discord followed suit, banning a constellation of accounts and groups affiliated with the sitting president or involved in the effort to spread misinformation about the 2020 presidential election. A number of high-profile Republicans framed this as the latest ploy in an alleged conspiracy among tech companies to silence conservative voices.

To be clear, there’s no reason to believe this claim of political targeting. A 2021 report from the New York University Stern Center for Business and Human Rights calls the idea that social media companies unfairly target conservatives “a falsehood with no reliable evidence to support it.”

And in moderation systems where everyone has something to be unhappy about, secrecy around their animating policies provides fertile ground for conspiracy theories to spread.

Platforms like Instagram and TikTok have also faced backlash from social media influencers and content creators who say the companies unfairly and disproportionately use shadowbanning to silence marginalized groups.

The Intercept reported in March 2020 that it had obtained internal TikTok documents showing the company “instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform.” German site Netzpolitik reported on similar policies in 2019. In response, a TikTok spokesperson told The Intercept the policies were part of a bullying prevention effort and were no longer in use. Sources told the outlet, however, that the policies had been in use through at least 2019, and the documents themselves made no mention of bullying, indicating instead that the goal was “to retain new users and grow the app.”

Amid the growing momentum of the Black Lives Matter movement in 2020, particularly on social media, Black creators also called out platforms, saying they were limiting the reach of their messages in user feeds and searches.

One TikTok user told Time, “I was out protesting and sharing [videos] and when I went back to my normal content, I saw that my videos went from getting thousands if not hundreds of thousands of views to barely getting 1,000. With that being the direct next event after my Black Lives Matter posts, it was kind of hard to see it as anything but shadow banning.”


Shadowbans vs. Algorithm Changes

In cases where a platform publicly announces changes to its feed’s ranking algorithm, referring to the outcome as a “shadowban” feels like a bit of a stretch.

“Often, the term is being used to describe more subtle [strategies] described by social media companies as ‘downranking.’”

“I don’t think, when we hear the term [shadowban], it always means the same thing,” Barnard said. “Often, the term is being used to describe more subtle [strategies] described by social media companies as ‘downranking.’”

Facebook’s effort to limit the spread of clickbait is a typical example of downranking. In a Medium post published in March 2021, Meta President of Global Affairs Nick Clegg acknowledged that the News Feed downranks content with exaggerated headlines (clickbait) as well as content from pages run by websites that “generate an extremely disproportionate amount of their traffic from Facebook relative to the rest of the internet.”

Demoting pages for being disproportionately successful at drawing Facebook traffic seems more shadowy at first glance, but the internal logic makes sense when you consider how news sites compete for eyeballs on social networks. If a story is true and its headline is accurate, a number of credible news outlets will quickly corroborate it. As a result, users will share multiple versions of the same story from different, competing outlets. Conversely, if a story relies on sketchy sourcing, or if its headline makes claims unsupported by the reporting, the article can become a permanent exclusive — the only link available for spreading the word. 

In the aggregate, then, a disproportionate reliance on viral Facebook hits over other sources of traffic may be a pretty good indicator that a site is willing to stretch the truth — although it’s certainly possible that legitimate publications may get caught up in the dragnet.

And while the platform may not offer clear guidance on where the line is, exactly, most sites that veer into “extremely disproportionate” Facebook traffic territory probably know that they are, in fact, actively juicing the algorithm.
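
Reduced to its simplest form, the signal Clegg describes is a ratio check. The threshold below is a made-up placeholder, since Facebook has never disclosed where “extremely disproportionate” actually begins:

```python
def traffic_is_disproportionate(facebook_referrals, total_visits,
                                threshold=0.8):
    """Flag a site whose Facebook referrals dominate its overall traffic.

    The 0.8 cutoff is invented for illustration; the real threshold,
    whatever it is, has never been published.
    """
    if total_visits == 0:
        return False
    return facebook_referrals / total_visits >= threshold

print(traffic_is_disproportionate(90_000, 100_000))  # True: likely demoted
print(traffic_is_disproportionate(20_000, 100_000))  # False: diversified audience
```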

But social media companies also employ strategies that look even more like traditional shadowbans — with some important adjustments to account for how users engage with their platforms.


How to Tell if an Account Is Shadowbanned

One key difference between a traditional message board and social networks like Instagram, Facebook or LinkedIn is the overlap between the user’s digital and “real life” social circles. If a message board user simply stops posting one day, you might not think too much of it. But if a close friend stops showing up in your social feeds, you might ask them why they’ve disappeared the next time you see them.

In an effort to make feeds less spammy, Facebook deployed a machine learning model in 2017 to identify and reduce the reach of people and pages who rely on “engagement bait.” Common examples of the genre include calls to “share with a friend who’s addicted to coffee,” “like if you support local coffee shops” and “tag someone you want to hang out with on our patio.”

In addition to limiting the reach of individual posts that employ the strategy, Facebook announced that it would “demote” repeat offenders at the page level.

This machine learning model has since been expanded to include comments under posts. And in 2019, Facebook started flagging engagement bait in the audio content of videos as well.
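
Facebook’s actual detector is a trained machine learning model, so the toy keyword heuristic below is only a stand-in. What it does illustrate, under invented phrase lists and thresholds, is the two-tier response the company announced: demote the individual post, and demote the page itself once it becomes a repeat offender.

```python
# Toy stand-in for Facebook's engagement-bait classifier. The real system
# is a trained ML model; this heuristic only illustrates the two-tier
# response (per-post demotion, then page-level demotion).
BAIT_PHRASES = ("like if", "share with", "tag someone", "comment below")
PAGE_DEMOTION_THRESHOLD = 3  # invented cutoff for "repeat offender"

strikes = {}  # page name -> number of flagged posts

def moderate_post(page, text):
    lowered = text.lower()
    if not any(phrase in lowered for phrase in BAIT_PHRASES):
        return "distribute normally"
    strikes[page] = strikes.get(page, 0) + 1
    if strikes[page] >= PAGE_DEMOTION_THRESHOLD:
        return "demote post and demote page"  # repeat offender
    return "demote post"

print(moderate_post("CoffeeShop", "Tag someone you want on our patio!"))
# 'demote post'
```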

According to Facebook’s own documentation, the platform does not tell publishers if their pages have been demoted, or why, citing concerns that users could rely on specific details to find workarounds — a concern that hearkens back to the early days of shadowbans.

To help moderators slow the spread of offensive content and reduce the chance of backlash, Facebook’s moderation approach allows for blocked posts to remain visible to a user’s first-degree connections. So instead of tilting at windmills all by yourself, you can do so in an echo chamber of people just like you.
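
A minimal sketch of that visibility rule, using an invented friendship graph: the demoted post is filtered from everyone’s view except the author and the author’s direct connections, which keeps the author from noticing anything is wrong.

```python
# Hypothetical social graph: user -> set of first-degree connections.
friends = {
    "dana": {"erin", "frank"},
    "erin": {"dana"},
    "gabe": set(),
}

def can_see_demoted_post(viewer, author):
    """A demoted post stays visible to its author and the author's
    first-degree connections; everyone else never sees it."""
    return viewer == author or viewer in friends.get(author, set())

print(can_see_demoted_post("erin", "dana"))  # True: direct connection
print(can_see_demoted_post("gabe", "dana"))  # False: outside the circle
```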

“They don’t know what’s happened immediately. But they always find out.”

This might be a necessary concession to effectively keep the user from finding out about their status — the core idea of shadowbanning. And ultimately, it’s probably more effective, because, according to Roelands, users are smarter than shadowban proponents tend to give them credit for.

“When users are shadowbanned, they don’t know what’s happened immediately,” he said. “But they always find out.”

The Center for Democracy and Technology released a 2022 report called Shedding Light on Shadowbanning. It presented the results of a survey of 1,205 U.S. social media users — 274 of whom said they believed they had been shadowbanned. These were the most common signs users reported noticing when they identified their shadowban:

Signs an Account May Be Shadowbanned

  • Reduced engagement in the form of fewer likes, comments or shares.
  • A profile, posts or comments not showing up when attempting to view them from a different account.
  • Other users noting a profile’s content no longer appears to them.
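
The first of those signs lends itself to a simple self-check. The sketch below flags a sudden drop against a trailing average; both the window and the 50 percent threshold are arbitrary, and a flagged drop is circumstantial at best, since engagement fluctuates for plenty of reasons unrelated to moderation.

```python
import statistics

def engagement_dropped(history, window=7, drop_ratio=0.5):
    """Return True if the latest post's engagement fell below
    drop_ratio times the trailing-average baseline.

    Circumstantial evidence only: engagement swings for many reasons
    that have nothing to do with moderation.
    """
    if len(history) <= window:
        return False  # not enough history for a baseline
    baseline = statistics.mean(history[-window - 1:-1])
    return history[-1] < baseline * drop_ratio

likes_per_post = [900, 1100, 950, 1200, 1000, 980, 1050, 120]
print(engagement_dropped(likes_per_post))  # True: worth a look, not proof
```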

In surveying users who experienced content moderation in the run-up to the 2016 election, Sarah Myers West, now a postdoctoral researcher at New York University’s AI Now Institute, found that many users interpreted algorithmic interventions as deliberate online censorship.

“My definition of content moderation might be: ‘Was a post, photo or video you posted taken down, or was your account suspended,’” West said. “But a number of folks would interpret content moderation as something like: ‘I lost a lot of followers and I don’t really have a good explanation for that — but I think this is an overt effort by someone at this platform to shut me down.’ Or: ‘My posts normally get a certain amount of engagement, but I posted about this topic, and all of a sudden my number of likes is negligible.’”

Social engagement can fluctuate for reasons that have nothing to do with censorship, of course. But West, who is also one of the conveners of the Santa Clara Principles on Transparency and Accountability in Content Moderation, said the opacity surrounding moderation systems and algorithmic feeds leaves users speculating about how it all works.

One common theory among users was that social platforms deliberately aimed to suppress their points of view. Others believed their posts were actively sabotaged by other users making concerted efforts to flag posts for violating platform policies, in turn triggering mechanisms that limit a post’s reach.

“And then some people were just genuinely perplexed,” West said. “They just really did not know or understand what was going on, and they just wanted an explanation of how they could modify their behavior in the future so they wouldn’t encounter this kind of issue.”


What Will It Take for the Shadowban Narrative to Go Away?

Chris Stedman, author of IRL: Finding Realness, Meaning, and Belonging in Our Digital Lives, sees the debate over shadowbans as a symptom of a broader anxiety about the power social media platforms have over our means of self-expression.

“At one point, the internet was a discrete space that we could step into and out of,” Stedman said. “Now, a bigger and bigger part of what it means to be me — how I find a sense of connection and community and express myself — has moved into digital spaces.”

According to Google Trends, which measures the popularity of search terms over time, interest in shadowbans remained more or less flat from 2004 (the start of the data set) until April 2017 (when the Instagram shadowban controversy reported by Taylor Lorenz began picking up steam). But interest really started ramping up in 2018, following a Vice story that used the term to describe a bug in Twitter’s interface that prevented some conservative leaders from showing up as suggestions within its search feature. 

In Barnard’s view, the lack of insight into the inner workings of social networks plays a role — especially among conservatives, who tend to have a lower level of trust in media institutions. Together, these factors form a perfect storm in which each social post that fails to gain traction becomes another piece of evidence of a broad-based effort at suppression, as opposed to just a failed attempt at going viral.

“It becomes a seemingly plausible explanation, of course ignoring all the ways these platforms are helping them spread their messages — which of course is the deep irony of all of this,” Barnard said.

And underneath it all is a kernel of truth: Social networks aren’t censoring conservatives on ideological grounds, but they are trying to limit the spread of misinformation — most notably about elections and about COVID-19. And among those most vocal in accusing social platforms of liberal bias are noted purveyors of misinformation on exactly those topics: Senator Josh Hawley, who raised his fist in solidarity with rioters outside the Capitol in January 2021; Breitbart News, which was investigated by the FBI for its role as a vector for Russian propaganda in the 2016 election; and Ben Shapiro, whose “censored” site The Daily Wire saw its social media engagement skyrocket in 2020.

“If someone is not welcome in your community, you should escort them from the premises.”

Shadowbanning has become a nebulous term. And because most people first encountered shadowbans as the centerpiece of a bad-faith argument, it’s hard to see how the term could return to its original meaning.

And on some level, maybe shadowbans were never all that great to begin with. Adrian Speyer, head of community at forum software provider Vanilla Forums, urges users of his company’s platform to treat shadowbans as a last resort.

“If someone is not welcome in your community, you should escort them from the premises,” Speyer said. 

In his view, shadowbans provide an easy way out of having difficult, but important conversations about community standards. These conversations can help foster a greater sense of ownership, and empower users to help moderators as they seek to uphold those standards.

Looking back on his time as a bulletin board moderator, Roelands has also come to see shadowbans differently. For one, because there’s no feedback loop directly related to a specific action, users are never given an opportunity to learn where they went wrong. But perhaps more importantly, because people could usually tell when a trouble-making user suddenly disappeared, it created an environment where otherwise-upstanding community members sought to publicly humiliate users who had been shadowbanned.

“It made our community meaner,” Roelands said. “It’s like the difference between restorative and retributive justice. Shadowbans won’t turn people into better members of the community.”

At any rate, social platforms are starting to recognize that opacity is a real problem. In his March 2021 Medium post, Clegg announced a Facebook roadmap that includes “providing more transparency about how the distribution of problematic content is reduced.” Since then, Meta’s independent Oversight Board has been reviewing Facebook and Instagram content decisions, with its decisions and recommendations posted to the company’s Transparency Center.

Twitter has also been rethinking its moderation policies with an eye toward increasing transparency. In January 2021, the company rolled out a pilot program called Birdwatch that lets users annotate tweets they believe to be misleading. Users vote on each other’s submissions, in a system that will be familiar to users of web forums like Reddit or Hacker News. Twitter announced in September 2022 it would be expanding the program’s contributor base and increasing the visibility of notes on public tweets.
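
The published Birdwatch ranking approach is more sophisticated than a raw tally, favoring notes rated helpful by raters who usually disagree with each other. The naive majority-vote version below, with invented thresholds, shows the baseline idea that such refinements build on:

```python
def note_is_shown(helpful_votes, total_votes,
                  min_votes=5, min_ratio=0.7):
    """Naive majority-vote aggregation for a community note.

    Systems like Birdwatch weight raters by viewpoint diversity
    precisely because a raw ratio like this one is easy for a
    coordinated voting bloc to game.
    """
    if total_votes < min_votes:
        return False  # too few votes to surface the note
    return helpful_votes / total_votes >= min_ratio

print(note_is_shown(8, 10))  # True: broadly rated helpful
print(note_is_shown(3, 10))  # False: mostly rated unhelpful
```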

These features are unlikely to solve the companies’ moderation problems for good. Facebook’s user base includes more than a third of the world’s population, which makes creating any agreed-upon set of community standards impossible. And an upvote-driven moderation system like Birdwatch could devolve into something resembling mob rule. 

But both social media giants seem committed to giving their users more insight into why posts are downranked or flagged.

And that’s a start, at least.
