UPDATED BY
Brennan Whitfield | Jun 14, 2023

A censorship narrative has been brewing around the concept of shadowbans, the term for when a user is blocked on a social platform without their knowledge.

In the past few years, the word shadowban has taken on a life of its own, evolving from a signifier of a specific moderation technique to shorthand for anything from actual downranking to unfounded conspiracy theories about Silicon Valley types trying to suppress user voices.

“‘Shadowbanning’ sounds quite nefarious, and I think that is part of its success in the public discourse,” said Stephen Barnard, an associate professor at St. Lawrence University whose research focuses on the role of media and technology in fostering social change. “It has this sense of a faceless entity, and it’s conspiratorial. It contains a more or less explicit assertion that these liberal tech executives from California are censoring us and conspiring to force their progressive agenda throughout American politics.”

The ongoing controversy — and lack of transparency — surrounding shadowbans points to a tension inherent to any attempt at building a global community: Most of us want some kind of moderation, but opinions differ widely as to where lines should be drawn.

 

What Are Shadowbans?

Shadowbans block users or individual pieces of content without letting the offending user know they’ve been blocked, a moderation technique first popularized in bulletin boards and early web forums. To a shadowbanned user, the site continues to function as normal — they can still make posts and engage with other people’s posts — but to others, the user appears to have gone silent.

Shadowban Definition

Shadowbans refer to a web moderation technique in online forums and social platforms where a user (or a user’s content) is blocked without it being apparent to them that they’ve been blocked.

While an explicitly banned user is likely to create a new account and keep posting, a shadowbanned user might conclude that other people just don’t care what they have to say. Over time, the thinking goes, they will lose interest and go away.

In that sense, shadowbans are just a technical implementation of a strategy long employed by forum users — “Don’t feed the troll!” — with the added benefit of not relying on users to exercise restraint.
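The classic version of the technique is simple enough to express in code. Below is a minimal sketch in Python of how a forum might implement it; the `Post` class and the `shadowbanned_users` set are hypothetical stand-ins, not any real platform’s implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    author: str
    body: str

# Hypothetical set of accounts flagged by moderators.
shadowbanned_users = {"spammy_sam"}

def visible_posts(posts: list[Post], viewer: str) -> list[Post]:
    """Return the posts a given viewer should see in a thread.

    A shadowbanned author still sees their own posts, so the ban
    stays invisible to them; everyone else gets a filtered view.
    """
    return [
        p for p in posts
        if p.author not in shadowbanned_users or p.author == viewer
    ]

thread = [
    Post(1, "alice", "Has anyone tried the new editor?"),
    Post(2, "spammy_sam", "CLICK HERE for FREE stuff!!!"),
    Post(3, "bob", "Yes, it works well."),
]

# The banned user sees all three posts, so nothing seems wrong.
print([p.post_id for p in visible_posts(thread, "spammy_sam")])  # [1, 2, 3]

# Everyone else silently skips the banned user's post.
print([p.post_id for p in visible_posts(thread, "alice")])       # [1, 3]
```

The key design choice is that the filter runs on read, not on write: the offending posts are stored normally, which is exactly what keeps the ban undetectable from the banned user’s side.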

 

How to Tell if an Account Is Shadowbanned

Reduced post engagement, such as fewer likes, comments and views than an account previously garnered, could be a sign of a shadowban. Another sign: followers or subscribers reporting that they no longer see the account’s new posts or notifications in their feeds.

The Center for Democracy and Technology released a 2022 report called Shedding Light on Shadowbanning. It presented the results of a survey of 1,205 U.S. social media users, 274 of whom said they believed they had been shadowbanned. These were the most common signs users reported noticing when they identified their shadowban:

How to Know If Someone Is Shadowbanned

  • Reduced engagement in the form of fewer likes, comments or shares.
  • A profile, posts or comments not showing up when attempting to view them from a different account.
  • Other users noting a profile’s content no longer appears to them.
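None of these signals is conclusive on its own, but the first lends itself to a rough check. The Python sketch below, with invented numbers and an arbitrary threshold, flags an abrupt, sustained drop in average per-post likes. Engagement can fall for plenty of benign reasons, so at best this suggests a shadowban rather than proving one:

```python
# Rough heuristic: flag a sustained collapse in per-post engagement.
# Numbers and the 0.2 threshold are invented for illustration.
from statistics import mean

def engagement_drop(per_post_likes: list[int], recent: int = 5,
                    threshold: float = 0.2) -> bool:
    """True if the last `recent` posts average under `threshold`
    times the average of all earlier posts."""
    if len(per_post_likes) <= recent:
        return False  # not enough history to compare against
    baseline = mean(per_post_likes[:-recent])
    current = mean(per_post_likes[-recent:])
    return baseline > 0 and current < threshold * baseline

likes = [950, 1100, 1020, 980, 1200, 40, 35, 52, 28, 41]
print(engagement_drop(likes))  # True: ~39 average vs. ~1050 baseline
```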

In surveying users who experienced content moderation in the run-up to the 2016 election, Sarah Myers West, a postdoctoral researcher at New York University’s AI Now Institute, found that many users drew a connection between what they described as online censorship and algorithmic intervention.

“My definition of content moderation might be: ‘Was a post, photo or video you posted taken down, or was your account suspended,’” West said. “But a number of folks would interpret content moderation as something like: ‘I lost a lot of followers and I don’t really have a good explanation for that — but I think this is an overt effort by someone at this platform to shut me down.’ Or: ‘My posts normally get a certain amount of engagement, but I posted about this topic, and all of a sudden my number of likes is negligible.’”

Social engagement can fluctuate for reasons that have nothing to do with censorship, of course. But West, who is also one of the conveners of the Santa Clara Principles on Transparency and Accountability in Content Moderation, said the opacity surrounding moderation systems and algorithmic feeds leaves users speculating about how it all works.

One common theory among users was that social platforms deliberately aimed to suppress their points of view. Others believed their posts were actively sabotaged by other users making concerted efforts to flag posts for violating platform policies, in turn triggering mechanisms that limit a post’s reach.

“And then some people were just genuinely perplexed,” West said. “They just really did not know or understand what was going on, and they just wanted an explanation of how they could modify their behavior in the future so they wouldn’t encounter this kind of issue.”


 

What Social Media Sites Use Shadowbanning?

While shadowbanning has been speculated to occur on popular sites like Facebook, Instagram, Twitter and TikTok, social media companies generally don’t recognize shadowbanning as an official practice. They typically point to ranking systems, changing algorithms or technical errors in response to claims that certain types of content or content from particular users is being secretly muted.

 

Facebook on Shadowbanning

In an August 2022 podcast interview, Meta CEO and Facebook founder Mark Zuckerberg said Facebook doesn’t have a shadowbanning policy, but acknowledged the term likely refers to its “demotions.” “Demoting” in this sense means a post is less likely to be shown to other users because it has been flagged for some reason, such as containing misinformation or being harmful. The tactic isn’t new; Facebook has been building on it for several years.

In 2017, in an effort to make feeds less spammy, Facebook deployed a machine learning model to identify and reduce the reach of people and pages who rely on “engagement bait.” Common examples of the genre include calls to “share with a friend who’s addicted to coffee,” “like if you support local coffee shops” and “tag someone you want to hang out with on our patio.”

In addition to limiting the reach of individual posts that employ the strategy, Facebook announced that it would “demote” repeat offenders at the page level. 

This machine learning model has since been expanded to include comments under posts. And in 2019, Facebook started flagging engagement bait in the audio content of videos as well.
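Facebook hasn’t published how its classifier works, but the flavor of the problem can be shown with a much cruder stand-in: matching the formulaic phrasing that engagement bait relies on. The patterns below are invented for illustration and are far simpler than a trained machine learning model:

```python
import re

# Deliberately crude stand-in for engagement-bait detection.
# Facebook's real system is a trained ML model; these hand-written
# patterns only illustrate the kind of phrasing it learns to downrank.
BAIT_PATTERNS = [
    r"\btag (a|someone|a friend)\b",
    r"\bshare (this )?with\b",
    r"\blike if\b",
    r"\bcomment (below )?if\b",
]

def looks_like_engagement_bait(post_text: str) -> bool:
    text = post_text.lower()
    return any(re.search(p, text) for p in BAIT_PATTERNS)

print(looks_like_engagement_bait(
    "Tag someone you want to hang out with on our patio"))  # True
print(looks_like_engagement_bait(
    "Our patio opens at noon today."))                      # False
```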

According to Facebook’s own documentation, the platform does not tell publishers if their pages have been demoted, or why, citing concerns that users could rely on specific details to find workarounds — a concern that hearkens back to the early days of shadowbans.

To help moderators slow the spread of offensive content and reduce the chance of backlash, Facebook’s moderation approach allows for blocked posts to remain visible to a user’s first-degree connections. So instead of tilting at windmills all by yourself, you can do so in an echo chamber of people just like you.

This might be a necessary concession to keep the user from finding out about their status, which is the core idea of shadowbanning. And it’s probably more effective anyway: users are likely smarter than shadowban proponents tend to give them credit for.

 

Instagram on Shadowbanning

Instagram appears to apply a “demotion” system similar to Facebook’s. Users report lower engagement on their Instagram posts after violating the platform’s community guidelines, recommendations guidelines or terms of use. Accounts that post inappropriate content or misinformation, emulate bot behaviors like spamming hashtags or quickly gaining inorganic engagement, or draw repeated reports are thought to be the most likely to see a drop in engagement.

 

Twitter on Shadowbanning

On Twitter’s debunking-myths FAQ page, the company says outright, “Simply put, we don’t shadow ban! Ever.” It also points to a 2018 blog post explaining that shadowbanning doesn’t happen on the basis of “political viewpoints or ideology,” though there is a system that ranks tweets and search results “for timely relevance.”

That means users are more likely to see content from people they’re interested in or that’s gaining popularity and being widely shared. The practice is also intended to “address bad-faith actors who intend to manipulate or detract from healthy conversation” by ranking their content lower.

 

Shadowbans vs. Algorithm Changes

In cases where a platform publicly announces changes to its feed’s ranking algorithm, referring to the outcome as a “shadowban” feels like a bit of a stretch.

“I don’t think, when we hear the term [shadowban], it always means the same thing,” Barnard said. “Often, the term is being used to describe more subtle [strategies] described by social media companies as ‘downranking.’”

Facebook’s effort to limit the spread of engagement bait is a typical example of downranking. In a Medium post published in March 2021, Meta President of Global Affairs Nick Clegg acknowledged that the News Feed does downrank content with exaggerated headlines (clickbait) as well as content from pages run by websites that “generate an extremely disproportionate amount of their traffic from Facebook relative to the rest of the internet.”

Demoting pages for being disproportionately successful at drawing Facebook traffic seems more shadowy at first glance, but the internal logic makes sense when you consider how news sites compete for eyeballs on social networks. If a story is true and its headline is accurate, a number of credible news outlets will quickly corroborate it. As a result, users will share multiple versions of the same story from different, competing outlets. Conversely, if a story relies on sketchy sourcing, or if its headline makes claims unsupported by the reporting, the article can become a permanent exclusive — the only link available for spreading the word. 

In the aggregate, then, a disproportionate reliance on viral Facebook hits over other sources of traffic may be a pretty good indicator that a site is willing to stretch the truth — although it’s certainly possible that legitimate publications may get caught up in the dragnet.

And while the platform may not offer clear guidance on where the line is, exactly, most sites that veer into “extremely disproportionate” Facebook traffic territory probably know that they are, in fact, actively juicing the algorithm.
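The underlying signal is straightforward to sketch, even if the platform hasn’t said where the line sits. In the hypothetical Python below, the 0.8 cutoff and the traffic numbers are invented; only the ratio idea comes from Clegg’s description:

```python
# Illustrative version of the "disproportionate Facebook traffic" signal.
# The 0.8 cutoff is invented; Facebook has not published its threshold.
def facebook_traffic_ratio(visits_from_facebook: int, total_visits: int) -> float:
    return visits_from_facebook / total_visits if total_visits else 0.0

def flag_for_review(visits_from_facebook: int, total_visits: int,
                    cutoff: float = 0.8) -> bool:
    """Flag sites whose Facebook share of traffic exceeds the cutoff."""
    return facebook_traffic_ratio(visits_from_facebook, total_visits) > cutoff

# A site with broad readership vs. one living almost entirely off viral hits.
print(flag_for_review(30_000, 100_000))  # False: 30% of traffic from Facebook
print(flag_for_review(95_000, 100_000))  # True: 95% of traffic from Facebook
```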


 

Why Shadowbans Are Controversial

The sort of downranking associated with shadowbanning may be essential to preventing the spread of misinformation. At the same time, any secretive, large-scale moderation system is bound to cause frustration, and direct responses from social media companies to shadowbanning allegations haven’t put a stop to the growing discourse.

Conservative politicians have been calling out shadowbanning as a targeted way to suppress political content Silicon Valley types disagree with since at least 2018. Accusations surged again in 2021, when Facebook took the once unthinkable step of banning the world’s most powerful elected leader from its namesake platform and subsidiary Instagram after a violent mob of Donald Trump supporters stormed the U.S. Capitol.

Platforms like Twitter, YouTube, Reddit, Twitch, TikTok, Snapchat and Discord followed suit, banning a constellation of accounts and groups affiliated with the sitting president, or involved in the effort to spread misinformation about the 2020 presidential election. A number of high-profile Republicans framed this as the latest ploy in an alleged conspiracy among tech companies to silence conservative voices.

To be clear, there’s no reason to believe this claim of political targeting. A 2021 report from the New York University Stern Center for Business and Human Rights calls the idea that social media companies unfairly target conservatives “a falsehood with no reliable evidence to support it.”

And in moderation systems where everyone has something to be unhappy about, secrecy around their animating policies provides fertile ground for conspiracy theories to spread.

Platforms like Instagram and TikTok have also faced backlash from social media influencers and content creators who say the companies unfairly and disproportionately use shadowbanning to silence marginalized groups.

The Intercept reported in March 2020 that it had obtained internal TikTok documents showing the company “instructed moderators to suppress posts created by users deemed too ugly, poor, or disabled for the platform.” German site Netzpolitik reported on similar policies in 2019. In response to the reporting, a TikTok spokesperson told The Intercept the policies were part of a bullying prevention effort and were no longer used. Sources told the outlet, however, they had been in use through at least 2019, and the documents themselves made no mention of bullying, indicating the justification was “to retain new users and grow the app.”

Amid the growing momentum of the Black Lives Matter movement in 2020, particularly on social media, Black creators also called out platforms, saying they were limiting the reach of their messages in user feeds and searches.

One TikTok user told Time, “I was out protesting and sharing [videos] and when I went back to my normal content, I saw that my videos went from getting thousands if not hundreds of thousands of views to barely getting 1,000. With that being the direct next event after my Black Lives Matter posts, it was kind of hard to see it as anything but shadow banning.”

 

History of Shadowbanning

Shadowbanning began on traditional online message boards as a clear-cut proposition: Either your posts are blocked or they aren’t. 

Duane Roelands moderated a message board called Quartz BBS, a collection of chat rooms hosted on a Rutgers University server in the late 1980s and early 1990s. To keep debates from getting needlessly ugly, Roelands and his fellow moderators would shadowban users, either temporarily or permanently, depending on the offense.

“Offensive behavior could be anything from simply being constantly abrasive and obnoxious, to being disruptive in a room devoted to serious topics like sexual orientation, gender identity or politics,” he said. “Our behavioral guidelines basically boiled down to, ‘Don’t be a jerk.’” 

The traditional shadowbanning approach on message boards makes sense when posts are served up chronologically. But modern social networks — which usually serve up content through algorithmically curated feeds — can achieve similar results through subtler means that limit certain users’ reach without blocking their content entirely. 

One modern approach is to exclude a user’s posts from discoverability features, a tactic that made its first major waves in the 2010s on platforms like Twitter, Reddit and, notably, Instagram.
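In code terms, this style of ban removes an account from the candidate pool that discovery surfaces draw from while leaving its profile untouched. A minimal sketch, with all names and data hypothetical:

```python
# Discoverability-style shadowbanning: posts stay visible on the author's
# profile and to followers, but are filtered out of hashtag search and
# explore-style recommendation surfaces.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    caption: str
    hashtags: set[str] = field(default_factory=set)

restricted_accounts = {"hashtag_spammer"}  # hypothetical flagged account

def hashtag_search(posts: list[Post], tag: str) -> list[Post]:
    """Discovery surface: silently excludes restricted accounts."""
    return [
        p for p in posts
        if tag in p.hashtags and p.author not in restricted_accounts
    ]

def profile_page(posts: list[Post], account: str) -> list[Post]:
    """Profile view: unaffected, so the account looks normal to itself."""
    return [p for p in posts if p.author == account]

posts = [
    Post("photo_fan", "Golden hour", {"sunset"}),
    Post("hashtag_spammer", "Buy my preset pack", {"sunset", "photo"}),
]

print(len(hashtag_search(posts, "sunset")))         # 1: spammer excluded
print(len(profile_page(posts, "hashtag_spammer")))  # 1: still on own profile
```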

In 2017, a number of photographers, bloggers and influencers noticed substantial drops in engagement with their Instagram posts. At the time, “more than a dozen” Instagram users told tech reporter Taylor Lorenz that shadowbanned users’ posts weren’t showing up “in hashtag searches or on the Instagram Search & Explore tab.”

Instagram didn’t tell these users what they’d done wrong, but Lorenz pointed to spammy hashtag usage and unauthorized automation tools as behaviors that likely triggered the changes. 

From a technical perspective, the bans Lorenz reported look quite different from traditional shadowbans, but they share important similarities. The strategy targeted accounts according to undisclosed criteria, and aside from a rapid drop in reach, users had no way to find out why, or even whether, they’d been affected by the platform’s decisions.

“People would always find a way to follow the letter of the law while violating the spirit of the law.”

That secrecy seems to be the only real throughline among the techniques — real and imagined — that users refer to when they talk about shadowbans.

“We figured out early on that if you clearly defined what was acceptable behavior and what was not, people would always find a way to follow the letter of the law while violating the spirit of the law,” Roelands said. “This is a behavior that has persisted online to this day, and there’s never really been a good solution for it.”


 

What Will It Take for the Shadowban Narrative to Go Away?

Chris Stedman, author of IRL: Finding Realness, Meaning, and Belonging in Our Digital Lives, sees the debate over shadowbans as a symptom of a broader anxiety about the power social media platforms have over our means of self-expression.

“At one point, the internet was a discrete space that we could step into and out of,” Stedman said. “Now, a bigger and bigger part of what it means to be me — how I find a sense of connection and community and express myself — has moved into digital spaces.”

In Barnard’s view, the lack of insight into the inner workings of social networks plays a role, especially among conservatives, who tend to have a lower level of trust in media institutions. Together, these factors form a perfect storm in which each social post that fails to gain traction becomes another piece of evidence of a broad-based suppression effort, as opposed to just a failed attempt at going viral.

“It becomes a seemingly plausible explanation, of course ignoring all the ways these platforms are helping them spread their messages — which of course is the deep irony of all of this,” Barnard said.

And underneath it all is a kernel of truth: Social networks aren’t censoring conservatives on ideological grounds, but they are trying to limit the spread of misinformation — particularly about elections and about health subjects like Covid-19.

“If someone is not welcome in your community, you should escort them from the premises.”

The term shadowbanning has become a nebulous one. And because most people learned about shadowbans as the centerpiece of a bad-faith argument, it’s hard to see how the term could return to its original meaning.

And on some level, maybe shadowbans were never all that great to begin with. Adrian Speyer, head of community at the forum software provider Vanilla Forums, urges users of his company’s platform to treat shadowbans as a last resort.

“If someone is not welcome in your community, you should escort them from the premises,” Speyer said. 

In his view, shadowbans provide an easy way out of having difficult, but important conversations about community standards. These conversations can help foster a greater sense of ownership, and empower users to help moderators as they seek to uphold those standards.

Looking back on his time as a bulletin board moderator, Roelands has also come to see shadowbans differently. For one, because there’s no feedback loop directly related to a specific action, users are never given an opportunity to learn where they went wrong. But perhaps more importantly, because people could usually tell when a trouble-making user suddenly disappeared, it created an environment where otherwise-upstanding community members sought to publicly humiliate users who had been shadowbanned.

“It made our community meaner,” Roelands said. “It’s like the difference between restorative and retributive justice. Shadowbans won’t turn people into better members of the community.”

At any rate, social platforms are starting to recognize that opacity is a real problem.

Meta’s independent Oversight Board has been reviewing Facebook and Instagram content decisions, with its rulings and recommendations posted to the company’s Transparency Center.

Instagram, too, is attempting to keep users informed about how their posts are recommended or moderated through Account Status, introduced in October 2021. The feature lets professional accounts on Instagram see if they’ve posted content that violates the platform’s community guidelines, and whether their account is “eligible to be recommended to non-followers in places like Explore, Reels and Feed Recommendations.”

Twitter has also been rethinking its moderation policies with an eye toward increasing transparency. In September 2022, the company announced it would expand the contributor base of its Birdwatch program, since renamed Community Notes, and increase the visibility of its notes on public tweets. In December 2022, Community Notes rolled out globally to all Twitter users.

While these features are unlikely to solve the companies’ moderation problems for good, the social media giants behind them seem committed to giving users more insight into why posts are downranked or flagged.

And that’s a start, at least.
