What Are All These Algorithms Doing to Us?
People post 6,000 tweets every second of every day. So, if you want to follow the pulse of the internet by reading tweets, you’re trying to swallow a firehose’s worth of content when you can only handle drinking from a water fountain. Gulp.
The web generally, and social media in particular, is a double-edged sword. Tapping into the web’s hivemind and having instant access to nearly all the world’s information is both its beauty and its curse. On social media platforms, the sheer volume of communication, even just among the accounts we choose to follow or befriend, is unmanageable. Something needs to give.
Enter algorithmic curation, which serves as a helpful way to reduce the firehose of information into a usable water fountain. Unfortunately, it also transfers decision-making power from the individual to the platform. Worse, algorithmic curation means that individual users typically don’t understand how those platforms make decisions for them.
What Is Algorithmic Curation?
These curation decisions aren’t trivial. Altering a person’s newsfeed affects their mood and their overall worldview. In January 2012, data scientists at Facebook showed how curation decisions for its News Feed could alter the happiness level of users. More recently, the January 6th attack on the US Capitol brought into sharp relief the potential for someone’s social media diet to contribute to radicalization.
The ubiquity of algorithmic curation and recommendations means that this radicalizing content is not just picked by users; it’s actually pushed out to them. Furthermore, since salacious material keeps users engaged longer, gearing an algorithm toward provocative and dangerous content to promote engagement can have the unintended consequence of leading users down dangerous rabbit holes. Even showing a user content that lines up with their interests can cause problems. Doing so tends to reinforce someone’s beliefs by putting them in a “filter bubble” rather than popping that bubble with content that might challenge those beliefs. This dynamic can lead to greater polarization.
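The feedback loop behind a filter bubble can be made concrete with a tiny simulation: the platform mostly shows the user’s historically most-clicked topic, the user engages with whatever is shown, and exposure to other topics collapses. The topics, probabilities, and weights here are invented purely for illustration; no real platform works this simply.

```python
import random

def filter_bubble_sim(steps=50, seed=0):
    """Minimal sketch of the filter-bubble feedback loop.
    Each step, the platform recommends the historically most-clicked
    topic 90% of the time; the user clicks whatever is shown, which
    feeds the next recommendation. All numbers are illustrative."""
    random.seed(seed)
    topics = ["politics", "sports", "science"]
    clicks = {t: 1 for t in topics}  # start with uniform interest
    for _ in range(steps):
        if random.random() < 0.9:
            shown = max(clicks, key=clicks.get)  # exploit past clicks
        else:
            shown = random.choice(topics)        # occasional exploration
        clicks[shown] += 1                       # user engages with what is shown
    return clicks
```

Running this, whichever topic gets an early lead absorbs nearly all subsequent recommendations, which is the self-reinforcing dynamic the paragraph above describes.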
Therefore, the rise of algorithmic curation has spurred a countermovement that aims to allow users to determine what content they see. As a testament to the popularity and value of user-curated content, we can look at newsletter platforms like Substack and the live chat app Clubhouse, to name just two.
When we delegate our decision-making authority to platforms, we give them the power to decide what we see. I think of this process as akin to the power of attorney, a legal mechanism by which a person gives their attorney the right to make important decisions on their behalf. This dynamic involves an incredible amount of trust since you have to believe that your attorney is making decisions that are in your interest. Likewise, algorithmic curation requires individuals to trust that a platform is acting in their interest when making curatorial decisions.
The Problem That Platforms Want To Solve
If a platform were to show you all of the content that’s related to your interests as well as the material from your friends and those you follow, it would be unusable. You would see far, far too much content to engage with anything meaningfully. So, algorithmic curation empowers the platform to sort content for you. These processes dictate what you see and what the platform recommends to you.
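The sorting step can be sketched as a toy ranking function. Everything here (the fields, the weights, the boost for followed accounts, the recency decay) is an illustrative assumption, not any platform’s actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    comments: int
    age_hours: float

def rank_feed(posts, followed):
    """Order posts by a toy engagement score, boosting followed accounts
    and decaying older items. All weights are illustrative guesses."""
    def score(p):
        engagement = p.likes + 2 * p.comments   # comments weighted higher
        follow_boost = 1.5 if p.author in followed else 1.0
        recency = 1.0 / (1.0 + p.age_hours)     # newer posts rank higher
        return engagement * follow_boost * recency
    return sorted(posts, key=score, reverse=True)
```

Even this crude sketch shows where the power sits: whoever picks the weights decides whether a fresh post from a friend outranks a viral post from a stranger.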
Another problem that algorithmic curation attempts to solve is the “paradox of choice,” which holds that too many potential choices degrade our ability to make a selection. If you’ve ever felt like there’s nothing to watch on TV when you actually have thousands of options, then you’ve experienced this phenomenon firsthand.
Platforms are well aware of this dilemma and are actively working on ways to make selections for individuals, easing the paradox of choice and making the flood of available content seem more manageable. Last month, for example, Netflix announced a new feature called Play Something that will automatically play a movie determined by Netflix’s algorithm. Even though Netflix users already make their own curated list of desired shows and movies, selecting from a vast list of these options can still be too much to handle, necessitating this new feature.
Although algorithmic curation serves a necessary purpose, it comes at the cost of individual choice. It also raises some thorny questions: If you accept what a platform is choosing for you, what are you missing? Recommendations are typically based on predicting your interests based on your previous behavior, but what if the platform is wrong? How do a platform’s business interests, which are often tied to ad-based models that reward engagement metrics, affect the algorithmic decisions being made for you?
The Power of Algorithmic Curation
More than 70 percent of the videos that people watch on YouTube are surfaced by its recommendation algorithm. So, even though “You” is literally in its name, you are often not the one making viewing choices on the platform. The algorithm decides your information diet, presumably based on your previous watching habits.
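As a rough illustration of recommending from “previous watching habits,” here is a minimal content-based sketch that scores unwatched videos by how well their tags overlap with a user’s history. The catalog, tags, and scoring are hypothetical; production recommenders are vastly more sophisticated.

```python
from collections import Counter

def recommend(watch_history, catalog, k=3):
    """Toy content-based recommender: count which tags the user has
    watched, then score unwatched videos by how many of their tags
    match. `catalog` maps video title -> set of tags (illustrative)."""
    tag_counts = Counter(tag for v in watch_history for tag in catalog[v])
    candidates = [v for v in catalog if v not in watch_history]
    def score(v):
        return sum(tag_counts[t] for t in catalog[v])
    return sorted(candidates, key=score, reverse=True)[:k]
```

Note the built-in circularity: the system can only recommend more of what you have already watched, which is exactly why a wrong or stale model of your interests is hard for it to correct.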
If you trust the platform, delegating choices to it is an efficient way to determine which videos you watch. If you don’t trust it, however, you may be concerned that YouTube’s choices aren’t the ones that you would have made, given the opportunity. Given the outsized role that media plays in shaping our values and attitudes, platforms like YouTube play a huge role in forming our reality by making these recommendations to us.
To put this in stark terms, algorithms hold a significant amount of power in determining our future, but we often have little control over them and a limited understanding of how they make decisions. To push back, we can either demand far greater transparency from platforms or find ways to opt out of the system.
Personally, I have begun to relish finding moments in my life that are driven by my own volition, when I actively determine which content I consume. I have paid subscriptions on Substack, even though I could likely find most of that content for free. The premium that I pay guarantees that I receive content I have chosen rather than content an algorithm has chosen for me.
Where Do We Go From Here?
Given the increased awareness of the impact that algorithmic curation has on individual behavior and society at large, what needs to change? I see four ways in which we can manage this problem:
We’ll start treating algorithms the way we treat pharmaceutical drugs. Their side effects need to be clearly studied and mitigated before they’re released to the general population.
Users will continue demanding greater transparency about why certain content appears in their feeds, along with more control to adjust the assumptions and recommendations a platform makes about them.
Platforms will face greater scrutiny of their algorithmic curation practices as we come to view them as editorial decisions rather than neutral, objective processes.
Startups will realize and market the value of “human curation” and stop providing recommendations that influence user behavior. A demand for anti-algorithm platforms may emerge.
Given the sheer volume of communication and content on the web, algorithmic curation and recommendation systems are likely here to stay. But turning down the firehose of content for users should never mean that we let democracy go up in flames. Something’s gotta give, and that change needs to be greater transparency around algorithmic curation to ensure that platforms are aligned with the public interest.