AI Has an Ethics Problem. Gen Z Is Poised to Help Fix It.

Gen Z data scientists are well suited to tackle bias in AI. Today’s leaders and practitioners need to lay the groundwork to enable them to do so.
Peter Wang
Expert Contributor
September 15, 2020
Updated: September 18, 2020

Artificial intelligence (AI) today has an ethics problem. Headline after headline has shown the ways in which machine learning models often mirror and even magnify systemic biases.

According to our 2020 State of Data Science report, of 1,592 people surveyed globally, 27 percent identified social impacts from bias in data and models as the biggest problem to tackle in AI and machine learning (ML) today. Yet, for all the attention that this issue has received in recent years, institutional progress to address AI and ML bias remains slow. For instance, 39 percent of respondents said that their organization has no plans to implement a solution for fairness and bias mitigation in data and ML models.

Amid renewed calls for corporate inclusion and equity, such numbers are troubling — suggesting that organizations haven’t fully embraced the challenging work of enacting meaningful, institutional change.

The good news is that, to help with this undertaking, Generation Z is graduating into the workforce. On track to be the most diverse and best-educated generational cohort in United States history, its members are uniquely positioned to act as change agents in AI. They crave opportunities to help companies build ethical products and services: In a joint Deloitte and Network of Executive Women survey, 77 percent of Gen Z respondents said it’s important to them to work at an organization that shares their values, and in a Monster poll, 74 percent said that work should provide a purpose, not just a paycheck.

Ignoring Gen Z is not an option; by 2030, approximately one in every five workers will be a member of this generational cohort. In the data science profession, leaders and executives must establish roles and build initiatives that enable Gen Z to wed its growing interest in AI with its hunger for ethical work.

Companies that lack a robust AI ethics strategy — including a framework to guide what is and is not considered ethical behavior, as well as a mechanism to review questionable cases — are not only putting themselves at risk of developing biased models, but also missing out on a key recruitment strategy.

To attract and retain top talent, businesses will need to not only offer data science opportunities that combine competitive pay with purpose, but also begin to plant the seeds for an ethically minded culture. Otherwise, they may lose out on a growing talent pool of data scientists who will opt for more fulfilling projects or socially aligned companies.

Executives must begin setting the organizational tone for ethics as an open and interdisciplinary conversation that extends beyond hierarchical barriers. Stakeholders from a variety of teams, including the C-suite, legal, and data science, should jointly establish internal ethics guidelines to frame data science activities, and these guidelines should be reviewed periodically to ensure they follow the latest best practices.

Data scientists — especially those early in their careers — must feel empowered in their ability to drive ethical practices in their work and be able to freely express concerns as they arise, knowing that they’ll have the support of both their departmental and overall company leadership to examine issues further. This can start with support from senior leadership for internal conversations that ask questions like “Should we?” rather than “Can we?” and evaluate proposed projects through the lens of the established ethics guidelines. Executives should also support and encourage their data science teams to present the human angle of any data-driven analysis, reminding decision-makers that there are real people behind each plot point on a graph.

Underneath every recommendation from an AI or ML model is a story of how data was gathered, what data was included, and what tradeoffs were made in tuning the parameters — data scientists must be able to communicate these issues throughout the organization. Understanding this context helps decision-makers appreciate potential points of bias, uncertainty, and variability in data science results, which in turn allows them to use the results in a more ethical manner rather than as an absolute source of untainted truth.
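One concrete way to surface this context is to report a simple fairness check alongside any headline metric. The sketch below is illustrative only: the group labels and loan-approval predictions are hypothetical, and a demographic-parity gap is just one of many possible fairness measures. A large gap is a prompt for further review, not proof of bias on its own.

```python
# A minimal sketch of a demographic-parity audit on model predictions.
# All data here is hypothetical; in practice, groups and predictions
# would come from the model and dataset under review.
from collections import defaultdict

def selection_rates(groups, predictions):
    """Return the fraction of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(groups, predictions):
    """Largest difference in selection rate between any two groups.

    A gap near 0 means the model selects all groups at similar rates;
    a large gap flags the result for human review.
    """
    rates = selection_rates(groups, predictions)
    return max(rates.values()) - min(rates.values())

# Hypothetical loan-approval predictions for two applicant groups.
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
preds = [1, 1, 1, 0, 1, 0, 0, 0]
print(demographic_parity_gap(groups, preds))  # 0.75 - 0.25 = 0.5
```

Reporting a number like this next to an accuracy score gives decision-makers a visible reminder that the model treats subpopulations differently, and invites the "should we?" conversation before deployment.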

As data science matures as a discipline, today’s practitioners should normalize the inclusion of these areas of nuance when reporting their findings, which will help pave the way for a healthier relationship between business decision-makers and data science results. By creating this type of environment for Gen Z data scientists to enter into, today’s practitioners can help position them for success, especially in the quest to help root out unethical uses of AI.

Experienced business leaders sometimes display skepticism toward Gen Z’s demands for work that is both meaningful and lucrative, and for products that are both personalized and equitable. Yet it is precisely this refusal to compromise that makes Gen Z particularly suited to tackling the challenges of AI. Rather than lecturing them on the limits of the possible, we should provide the tools and opportunities they need to redefine those limits. The models they build and the future they create just might surprise us.


Expert Contributors

Built In’s expert contributor network publishes thoughtful, solutions-oriented stories written by innovative tech professionals. It is the tech industry’s definitive destination for sharing compelling, first-person accounts of problem-solving on the road to innovation.
