In 2008, Nebraska had a problem that it wanted to solve. 

Like many other states before it, Nebraska wanted to reduce the tragic situation in which a distressed parent abandons a newborn, leaving the child dangerously alone, often on the steps of a building. So the state passed a law establishing firehouses as a safe harbor where individuals could drop off their children without risk of penalty. To keep babies from being harmed, the law promised that no questions would be asked about why a child was being left at a firehouse. 

Then, people started abandoning their teenagers. Oops. The law, created with good intentions, led to unintended consequences. But were the consequences unforeseeable? 

As the founder and director of All Tech Is Human, a non-profit focused on diversifying the tech pipeline to include those individuals best capable of foreseeing problems, I think about this incident often. And after spending a lot of my time considering ways to improve social media platforms, I am often struck by the myth that the consequences of social media (misinformation, hate speech, polarization, censorship by authoritarian regimes) were unforeseeable. These consequences may be unintended, but they were certainly not unforeseeable. 

In order to reduce technology’s collateral damage, we need to change the types of individuals we recruit for the field. In my opinion, the tech industry has focused too much on hiring problem-solvers at the expense of problem-finders. 

These are often two wildly different types of people, with different academic backgrounds and personality types. The industry's shortage of problem-finders has left us in a reactive, whack-a-mole state, tackling technology's major social impacts after they happen rather than proactively anticipating and preparing for the potential trajectories of tech, both good and bad. 

Solving this issue is not easy, of course. Recent years have seen a major uptick in courses, think pieces, certificates, initiatives and discussion around responsible AI and tech ethics. These activities generally focus on getting traditional tech workers to consider the impact of technology more deeply. But I would argue that the tech industry doesn't just need to think differently; it needs to hire differently. In other words, instead of trying to change a horse into a zebra (the paint will eventually fade), why don't we just bring in more zebras? Problem-finders are the zebras that the tech industry needs right now.


Unintended Consequences Versus Unforeseeable Consequences

Although we’re currently worried about social media causing the downfall of democracy, a decade ago the prevailing wisdom was that it would lead to the spread of democracy. No moment captures this cultural whiplash better than Twitter’s role in the Egyptian revolution of 2011, which led to the overthrow of Hosni Mubarak’s 30-year reign. Located in downtown Cairo, Tahrir Square is a large public square that has been associated with many public demonstrations and political movements over the years. Twitter, which had publicly embraced its role as a digital public square, was seen as instrumental in providing activists with a megaphone to organize, activate and ultimately overthrow an authoritarian regime. 

This revolution marked the first time that the physical public square became intertwined with its digital counterpart. As a result, the Arab Spring was often referred to as the Twitter Uprising or the Facebook Revolution. The tale we told ourselves in its wake was that social media would facilitate the rise of democracy across the globe. “If you want to liberate a country, give them the internet,” said Wael Ghonim, a young Google executive who became a symbol of Egypt’s pro-democracy uprising after launching a Facebook page credited with sparking the protests. “This revolution started on Facebook,” Ghonim told CNN. “I want to meet Mark Zuckerberg some day and thank him personally.” The power of social media, which provided a voice to the previously voiceless, would lead to the toppling of dictators worldwide. 

Fast forward to 2021, and authoritarianism is on the rise. Today, 35.6 percent of countries fall under “authoritarian regimes,” according to the Democracy Index. Egypt’s shimmering hope for greater freedom of expression has given way to the stifling of dissent. The negative trend began in 2006, with 2021 marking the fifteenth straight year of declining freedom globally. The growth of social media has coincided with the growth of repressive regimes. That’s not just an unintended outcome. It’s the exact opposite of what we were promised. 

What happened? Social media companies, made up overwhelmingly of problem-solvers, didn’t adequately foresee how easily a tool intended for greater expression could be perverted into a surveillance network that quashed freedoms and a compromised information ecosystem filled with misinformation and disinformation. Although the tech industry frames these effects as unintended consequences, a range of sociologists, historians and other thinkers who work to understand interrelationships and human behavior foresaw this exact outcome. Prominent sociologist Zeynep Tufekci, author of Twitter and Tear Gas: The Power and Fragility of Networked Protest, has been pointing out for years how social media’s vulnerabilities can be exploited.

An industry flooded with problem-solvers, however, only saw the problem of a voiceless public stifled by authoritarian excess. Social media was the solution to that problem, providing a necessary platform for resisting oppression. A problem-finder, on the other hand, would consider the complexities of societal behavior and how those would interact with a platform’s capabilities before unleashing a solution on the public. 

 

The Tech Industry Is Overly Optimistic. That Might Be Its Achilles Heel.

My own background is as an attorney and an educator. Yet my work life is 100 percent focused on technology and its impact on individuals and society at large. One of the major shifts we are going through now is the realization that the tech industry is so powerful, and its impact so ubiquitous, that it behooves us to involve a wide range of backgrounds in technology’s development and deployment. 

People of different backgrounds bring different worldviews to their work. For example, have a conversation with a tech founder and then one with a lawyer. It becomes glaringly apparent that each of them will approach issues with a completely different evaluative framework. 

Generally speaking, we might classify the tech founder as optimistic (“We’re going to change the world!”) and the attorney as pessimistic (“We’re going to get sued!”). Really, though, these outlooks just reflect different underlying frameworks at play: The tech founder is trying to solve a problem, and the attorney is trying to find a problem. Without the attorney, though, the founder wouldn’t know they have a problem to solve. In other words, founders’ inability to address the societal harms of technology has less to do with willful negligence and more to do with their glaring blind spots. This creates a paradox: founders can’t solve a problem they can’t see, so before the most important problems facing tech can be solved, someone needs to surface them in the first place.

That requires bringing in more problem-finders. 


Problem-Finders Are Not Unicorns. They Are Zebras.

Finding problem-finders is not merely a matter of finding someone “outside of tech” who is better equipped to offer a critical eye to the multitude of ways that technology impacts us on an individual and societal level. It’s about finding people who have the technical competencies needed to understand and work with a product team, while also having a wide-lens appreciation for technology as filtered through culture, communities, history, law, policy and human behavior. It’s not about being a fuzzy or a techie, but about having an almost antidisciplinary approach that makes a problem-finder a zebra, holding the technical and social stripes together in equal measure.  

Tech companies are looking for these zebras. On the Responsible Tech Job Board we recently started compiling for All Tech Is Human, I have noticed certain commonalities among postings from companies seeking people capable of not only mitigating current harms but also anticipating future ones. Here are four example job roles:

While most of these roles emphasize that a successful candidate will have an advanced degree, the job postings tend to be relatively agnostic about the discipline of that degree. A collaborative spirit, a mix of technical depth and intellectual curiosity, and experience working with diverse, multidisciplinary teams are common threads across the positions. The Salesforce position, for example, is looking for a “[d]eep familiarity with software engineering, product management, and product design best practices” and a “[d]eep familiarity with emerging technologies and trends,” as well as an “[u]nderstanding of best practice for how to integrate considerations around ethics and unintended consequences into the design, development, and delivery of technological products. Experience working across teams of engineers, designers, user researchers, and product managers.”

For Facebook’s Responsible Innovation Strategist position, the individual “will work with product leaders and teams across the company to help anticipate the potential risks of new products and features, as well as support the creation, execution, and tracking of effective mitigation strategies. You’ll also help product teams adopt tools and practices to help them innovate responsibly.... This candidate should have a deep understanding of how technology products are developed and experience anticipating and mitigating potential harms to individuals, communities, and society, across a wide range of vectors.”

In other words, the deep social impacts of technology require people who can understand and appreciate both the technical and social sides. These are the zebras, the problem-finders.

We can’t change the disposition of optimistic tech leaders who will always want to solve problems and are therefore unaware of their own limitations. But we can change the makeup of the tech industry to include more people who are capable of finding problems to begin with.
