Imagine a world without environmental regulations or traffic laws, where unlicensed motorists drive as they please and factories pollute with impunity.

Those were the facts of life in cities around the world as the industrial revolution took hold. And a few decades from now, we may look back on the emergence of AI as a similarly lawless era.

With that in mind, governments in Canada and the European Union, among others, have been active in proposing regulations to protect consumers while the U.S. has largely remained silent — until now.

This spring, Democratic senators Cory Booker and Ron Wyden proposed the first national AI ethics bill in the form of the Algorithmic Accountability Act. The bill aims to give regulators, and the public, greater insights into how AI systems make the decisions they do — and what data is used to train them.

Here’s what the bill would entail:

  • Require large companies — those that hold data on at least 1 million consumers or devices, or make more than $50 million per year; so, Amazon, Facebook, Google and the like — to conduct automated decision system impact assessments and data protection impact assessments.

  • Provide a detailed description of the decision-making system and the data used to train it.

  • Assess the risks the system and its training data pose to accuracy, fairness, bias, discrimination, privacy and security. (A rough sketch of what such an assessment might record follows this list.)
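
The bill’s text doesn’t prescribe a format for these assessments. Purely as an illustration, here’s a minimal sketch of the kind of record such an assessment might produce; the field names are assumptions, not language from the bill.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical schema: the bill does not define one. Fields loosely
# mirror the areas the bill asks companies to assess.
@dataclass
class ImpactAssessment:
    system_name: str                  # the automated decision system under review
    decision_description: str         # what it decides, and about whom
    training_data_sources: List[str]  # where the training data comes from
    accuracy_risks: List[str] = field(default_factory=list)
    bias_and_discrimination_risks: List[str] = field(default_factory=list)
    privacy_and_security_risks: List[str] = field(default_factory=list)
    mitigations: List[str] = field(default_factory=list)

# Example entry for a hypothetical hiring model.
assessment = ImpactAssessment(
    system_name="resume-screener-v2",
    decision_description="Ranks job applicants for recruiter review",
    training_data_sources=["Ten years of internal hiring records"],
    bias_and_discrimination_risks=[
        "Historical hires skew male; the model may inherit that skew"
    ],
    mitigations=["Audit selection rates by gender before deployment"],
)
print(assessment.system_name, "risks:", assessment.bias_and_discrimination_risks)
```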

AI is still in its early stages, but development is progressing rapidly. There are tools that can track faces, drive cars and automate office tasks. Artificial intelligence is also increasingly relied upon for tasks like financial underwriting, as well as to make marketing decisions for housing and education, raising new questions about how biased algorithms can shape people’s opportunities.

In other words: The conversation about AI ethics can’t be put off for too much longer.


Artificial intelligence and the Fair Housing Act

When Senators Wyden and Booker announced the Algorithmic Accountability Act, they evoked stories of discrimination during the Civil Rights Era.

Booker reflected on a time 50 years ago when his parents experienced a practice called “real estate steering,” in which realtors nudged them away from certain neighborhoods. It’s not hard to see how AI-directed targeting of ads for real estate listings could have a similar effect, whether intentional or not. After all, these algorithms rely on shared characteristics to determine whether a listing or neighborhood is a “fit” for a homebuyer.

And as more companies rush to integrate AI, the blind spots in our data and society are becoming increasingly apparent.

Amazon reportedly scrapped an AI recruiting tool under development after discovering that it preferred male candidates over female ones. The U.S. Department of Housing and Urban Development charged Facebook with violating the Fair Housing Act, alleging that its ad-targeting software could be used to control who sees housing ads based on sex, race, religion and disability status. And facial recognition systems from IBM, Microsoft and the Chinese company Megvii all struggled to correctly identify the gender of anyone other than white men.


But where does that bias come from?

Garrett Smith, who founded Guild AI in Chicago, said the problem isn’t evil algorithms or AI systems. In many cases, algorithms are simply spotting discriminatory patterns in the data they are fed and modeling their own decision-making strategies after them. In Amazon’s case, for example, the company had hired more men in the past, so the program favored candidates who had a lot in common with those hires — i.e., other men.
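
To make that mechanism concrete, here’s a toy sketch in Python, using synthetic data rather than anything resembling Amazon’s actual system. The historical hiring labels are generated with a built-in preference for men, and the trained classifier dutifully learns it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
gender = rng.integers(0, 2, n)          # 1 = male, 0 = female (synthetic)
skill = rng.normal(0, 1, n)             # the feature that should matter
# Biased historical outcomes: at equal skill, men were hired more often.
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)
print("learned weights [skill, gender]:", model.coef_[0])
# The gender weight comes out strongly positive: the model has simply
# reproduced the discriminatory pattern present in its training data.
```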

Left unchecked, these algorithms can reinforce prejudices in our society, as one Princeton study found. The Princeton team let an open-source AI algorithm absorb more than 850 billion words of online content to see which associations it derived between words and concepts. The algorithm ended up matching male names to words like “profession” and “salary,” and female names to “parents” and “wedding.”
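
The measurement behind that finding boils down to similarity scores between word vectors. Here’s a toy sketch of the idea, using tiny hand-made vectors as stand-ins for embeddings actually trained on a large corpus.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    # Standard cosine similarity between two word vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tiny hand-made "embeddings" in which a male name has drifted toward
# career words and a female name toward family words.
vectors = {
    "john":    np.array([0.9, 0.1, 0.2]),
    "amy":     np.array([0.1, 0.9, 0.2]),
    "salary":  np.array([0.8, 0.2, 0.1]),
    "wedding": np.array([0.2, 0.8, 0.1]),
}

for name in ("john", "amy"):
    for attr in ("salary", "wedding"):
        print(f"{name} ~ {attr}: {cosine(vectors[name], vectors[attr]):.2f}")
# The gap between these similarity scores is the kind of association
# the study measured across millions of real word pairs.
```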

In announcing the AAA, Senator Wyden expressed concern about precisely these kinds of issues:

“Computers are increasingly involved in the most important decisions affecting Americans’ lives – whether or not someone can buy a home, get a job or even go to jail. But instead of eliminating bias, too often these algorithms depend on biased assumptions or data that can actually reinforce discrimination against women and people of color,” Wyden said in a statement. “Our bill requires companies to study the algorithms they use, identify bias in these systems and fix any discrimination or bias they find.”

Considering the ways in which bias shapes human interactions, it’s not hard to see how these problems crop up.

“When you train a model with data, your model is going to learn what’s in the data,” Smith said. “There’s no doubt we have bias in our society, and that is reflected in our data. This is all a reflection of the way things are already operating.”

Removing bias must be an ongoing process of measuring and detecting it, both manually and with software, Smith said. Data scientists should always be questioning the number of examples they are using for each class of data, where the data is coming from and what bias might exist in the systems that generated that data.
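
Here’s a rough sketch of what one such routine check might look like. The records, group names and 80 percent threshold are illustrative assumptions, not a legal or industry standard.

```python
from collections import Counter

# Hypothetical audit records; in practice these would be a model's
# real inputs and outputs, logged per decision.
records = [
    {"group": "men",   "selected": True},
    {"group": "men",   "selected": True},
    {"group": "men",   "selected": False},
    {"group": "women", "selected": True},
    {"group": "women", "selected": False},
    {"group": "women", "selected": False},
]

# How many examples does each class of data contribute?
counts = Counter(r["group"] for r in records)
print("examples per group:", dict(counts))

# How often does each group receive the favorable outcome?
rates = {
    g: sum(r["selected"] for r in records if r["group"] == g) / counts[g]
    for g in counts
}
print("selection rate per group:", rates)

# "Four-fifths"-style check (an illustrative threshold): flag the model
# if any group's rate falls below 80 percent of the best group's rate.
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact: investigate before deploying.")
```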

Google has already released an open-source tool called the What-If Tool to help AI developers analyze algorithms and visualize bias.

Meanwhile, AI can and should be used to check AI, Smith added. An algorithm that draws on a diversity of opinions can produce a more informed final decision.
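
One way to read that suggestion is as an ensemble: several models trained on different slices of the data, voting on the final answer so that no single model’s blind spot dictates the outcome. A minimal sketch, assuming a simple majority vote:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def majority_vote(models, x: np.ndarray) -> int:
    # Each model gets one vote; the most common answer wins.
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in models]
    return max(set(votes), key=votes.count)

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # synthetic ground truth

# Train five models, each on a different random slice of the data,
# so their "opinions" are not all shaped by the same examples.
models = []
for _ in range(5):
    idx = rng.choice(len(X), size=200, replace=False)
    models.append(LogisticRegression().fit(X[idx], y[idx]))

print("ensemble says:", majority_vote(models, X[0]), "| true label:", y[0])
```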

But those hoping for a fully automated future shouldn’t hold their breath. Humans will always need to be there to keep AI in check, Smith said.

“It’s very tempting to think that we can replace humans en masse with new technology, but this is misguided thinking,” Smith said. “In nearly all cases, it’s possible to introduce AI gradually while laying down checks and balances — the presence of bias being one such check.”


AI regulation is underway elsewhere

Around the world, governments are wrestling with the same challenges as the U.S.

Proposing a bill like the Algorithmic Accountability Act is a huge step forward for the U.S., said Manoj Saxena, executive chairman of the Austin-based AI company Cognitive Scale, a board member of the ethical AI nonprofit AI Global and a lecturer in AI ethics at the University of Texas at Austin. The U.S., which often takes a more hands-off, free-market approach to business regulation, has lagged behind the EU and Canada, whose governments play a more active role in protecting consumers.

The EU has taken large steps to protect consumer data specifically with the General Data Protection Regulation (GDPR), which requires companies to ensure they have an airtight data consent management process — and fines them if they don’t.
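
The GDPR doesn’t mandate a particular data model, but an airtight consent process implies recording who consented to what, when, and whether that consent was later withdrawn. A minimal, hypothetical sketch:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str                      # e.g. "marketing emails"
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_active(self) -> bool:
        # Consent counts only until the user withdraws it.
        return self.withdrawn_at is None

# A tiny in-memory ledger; a real system would persist and audit this.
ledger = [
    ConsentRecord("u-123", "marketing emails",
                  granted_at=datetime(2019, 5, 1, tzinfo=timezone.utc)),
]

# Before any processing, check for live consent for that exact purpose.
can_email = any(
    c.is_active() and c.user_id == "u-123" and c.purpose == "marketing emails"
    for c in ledger
)
print("OK to send marketing email:", can_email)
```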

Meanwhile, Canada is the farthest along in AI regulation, Saxena said. Regulators there have created a responsible AI tool kit that operates like a Jiffy Lube checkup for algorithms, assessing an algorithm’s data, the quality of its explainable and interpretable results, its robustness, its bias and fairness, and its compliance. The Canadian government also requires companies to take a responsible AI oath before contracting with them.

The Algorithmic Accountability Act goes beyond the EU’s GDPR, Saxena said, because it directly addresses algorithms and the role they can play in bias.

“What we have done is expanded the scope of an algorithm to include areas like data, accuracy, bias, discrimination and security,” said Saxena.

And it appears that each country is building on the regulations of the ones that came before it, Saxena said. Not long after the U.S. introduced the AAA, the EU released ethics guidelines for trustworthy AI to lay the groundwork for new laws.

“It’s almost like a hop and skip. They started with data, we looked at algorithms and now they’re going to AI,” Saxena said. “I believe that, very soon, the U.S. will skip over those and we’ll get into industry-specific versions of trustworthy AI.”

Still, the AAA addresses only bias, which is one of the five areas Saxena believes are necessary for developing responsible AI. The others are transparency around the data that is used, explainable and interpretable results, robust data, and compliance with industry standards. Each of those will need to be defined and regulated to protect consumers.

In Saxena’s view, the long-term vision should be a bill of digital rights.

“At the end of the day, we’ll have two forms of us — a physical me and a digital me,” Saxena said. “Just like the physical me has rights, the digital me needs to have human rights, too.”


What’s next?

Artificial intelligence technology is already out there and there’s no going back, Saxena said, so it’s now up to companies, schools and the government to tame it.

Right now, advancements in AI are primarily made and understood by data scientists and engineers. But beyond being a form of technology, Saxena said, AI needs to be understood on a human level.

In his view, it’s important for business leaders to work with engineers to establish principles and guidelines on what they want the technology to do, so that it reflects their core values. Schools need to create machine learning courses that draw in students from across disciplines, to help everyone see the technology’s impact. And the government must develop ethical procurement standards for AI technology providers, as it does for other businesses.

This bill doesn’t solve all of AI’s problems, Saxena said, but it is the beginning of that conversation. And it’s a resource Smith already plans to incorporate in his presentations to AI developers.

“It’s a point of reference,” Smith said. “I’d like to participate in community development through workshops and community events. It encourages that discussion in a very concrete way.”

The industrial revolution transformed the world we live in today, and AI is poised to do the same. Laws helped keep the industrial revolution’s worst outcomes in check; time will tell whether governments can do the same for AI.
