Should Your Company Put a Bounty on Biased Algorithms?

Twitter offered the first bias bounty in 2021. Now, some smaller companies are following its lead.

Written by Kerry Halladay
Published on Jan. 26, 2022

Last July, Twitter announced the first bias bounty challenge. The one-off challenge was posted to HackerOne, a popular bug bounty board, and followed the same general pattern as a public bug bounty program, in which companies offer monetary rewards to those who identify and report security flaws in their sites and systems. The goal of Twitter’s bias bounty: identify potential harm caused by its image-cropping algorithm, which uses a saliency model.
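A saliency model predicts which parts of an image a viewer is most likely to look at, and the crop is then centered on the highest-scoring region. The snippet below is a minimal sketch of that idea, not Twitter’s actual cropping code; the `predict_saliency` function is a stand-in for whatever learned model produces the saliency map.

```python
import numpy as np

def predict_saliency(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned saliency model: returns a per-pixel
    'interestingness' score with the same height and width as the image."""
    return np.random.rand(image.shape[0], image.shape[1])  # placeholder only

def saliency_crop(image: np.ndarray, crop_h: int, crop_w: int) -> np.ndarray:
    """Center a fixed-size crop on the most salient pixel, clamped to the image bounds."""
    saliency = predict_saliency(image)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    top = int(np.clip(y - crop_h // 2, 0, image.shape[0] - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, image.shape[1] - crop_w))
    return image[top:top + crop_h, left:left + crop_w]
```

Whatever biases the saliency model has learned therefore decide who stays in the frame and who gets cropped out.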

Eight days later, Bogdan Kulynych, a Ph.D. student at the Swiss Federal Institute of Technology in Lausanne’s Security and Privacy Engineering Laboratory, was awarded first place in the competition and paid the bounty of $3,500. His submission involved adjusting the appearances of 16 photorealistic faces he generated with StyleGAN2-ADA to make them more salient according to Twitter’s cropping algorithm. Kulynych demonstrated that the algorithm preferred faces with lighter or warmer skin tones and smoother skin, and faces that appear more feminine, younger and slimmer.
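In broad strokes, a submission like that can be built by scoring pairs of images, an original face and an edited variant, with the cropping model and checking which edits the model consistently rewards with a higher saliency score. The sketch below is a hypothetical illustration of that comparison, not Kulynych’s actual code; the `max_saliency` callable stands in for whatever function returns the model’s top saliency score for an image.

```python
from typing import Callable, Sequence, Tuple
import numpy as np

def attribute_preference(
    image_pairs: Sequence[Tuple[np.ndarray, np.ndarray]],
    max_saliency: Callable[[np.ndarray], float],
) -> float:
    """Given (original, edited) pairs where each edit applies one attribute change
    (e.g. a lighter skin tone), return the fraction of pairs in which the model
    scores the edited face as more salient than the original."""
    wins = sum(
        max_saliency(edited) > max_saliency(original)
        for original, edited in image_pairs
    )
    return wins / len(image_pairs)
```

A fraction well above 0.5 across many pairs suggests the model systematically favors that attribute, which is the kind of evidence a bias bounty submission would document.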

While Twitter was the first to offer a public bias bounty, it will not be the last. A HackerOne spokesperson confirmed to Built In that “a handful” of technology companies already hosting bug bounties on the platform are exploring the potential of offering “bounties that seek to find flaws in AI or ML systems.”

Bias Bounty Considerations for Small Companies

After Twitter offered the first bias bounty in 2021, smaller companies are now considering following suit. Doing so would allow these companies to rapidly identify algorithmic bias through crowdsourcing and improve services for customers. Key concerns remain, though, around unintentionally exposing safety flaws, or exposing customer data or intellectual property that could be stolen.

Additionally, in its 2022 Predictions Report, Forrester predicted that “at least five large companies will introduce bias bounties in 2022.” It specifically called out Google and Microsoft as likely candidates.

It’s not just big companies that are — or should be — considering offering bias bounties to improve their AI-backed products and services. Smaller and newer companies are also thinking about offering bias bounties in 2022. One draw is the potential to rapidly identify algorithmic issues and improve their products or services. The ability to improve customer trust and do the right thing is another. But small companies thinking about bias bounties aren’t without their reservations.


Crowdsourcing Leads to Quick Improvement

For smaller, newer companies offering technology-driven services, establishing a bias bounty is a way to rapidly improve services for their users.

Bias bounties modeled on bug bounties, like Twitter’s was, have the benefit of identifying problems fast through the power of crowdsourcing. Only eight days passed from when Twitter announced its bounty to when Kulynych’s submission was named the winner. Though he did express some concerns about what such a timeline might mean for the rigor of an investigation, the speed enabled by bias bounties, according to Kulynych, could be a good thing.

“If this evolves in the same way as security bug bounties, this would be a much better situation for everyone,” he tweeted, following Twitter’s announcement of his winning submission. “The harmful software would not sit there for years until the rigorous proofs of harm are collected.”

For a lot of companies, offering bias bounties in an effort to reduce algorithmic bias in their systems just makes business sense. Legal sense, too.


Peter Cassat, partner at Culhane Meadows PLLC, a national full-service law firm, pointed out that as a company moves previously traditional processes and systems to automated, AI-powered processes and systems, it is still the company ultimately making decisions — and it can be held accountable.

“There are risks associated with AI bias that can amount to legal risks,” he said, speaking from experience working with clients. “Any form of systemic discrimination or disparate impact on a protected class can result in a discrimination claim. So there’s value in any methodologies that will help to reduce that possible exposure.”

For Ruben Gamez, CEO and founder of SignWell, an e-document signing company, offering bias bounties is more about the relationship between the company and its clients than about risk reduction. He described it as a sort of trust-building exercise with his customers and user base, particularly as the company grows.

“We are primarily dependent on our AI systems for simple automation to complex data-driven decisions,” he said. “It’ll be exciting to see users identify any possible flaws. This will give us a chance to touch base with our input data sets and redefine them to be more diverse.”


Bias Bounty Concerns Exist for Small Companies

But not everyone is entirely on board with bias bounties. Jared Stern, founder and CEO of Uplift Legal Funding, a lawsuit loan company, said his company is not considering offering bias bounties in the near future, though it might consider them further down the road. While his company does employ AI in its operations, it is still in the process of optimizing its data sets, making bias bounties premature.

“I don’t think [bias bounties are] a productive move, especially for companies who are still getting a hold of their operations with AI,” he said.

In addition to the possibility that bias bounties are poorly suited to a particular company’s situation, small companies have concerns about opening their inner workings up to the public the way Twitter did. When it posted its bounty, Twitter gave potential bias hunters access to the code it had used in its own research into image-cropping fairness, which found that its algorithm tended to crop out Black people’s faces.


Cassat pointed out two main areas of risk for companies running bias bounties similar to Twitter’s: exposing customers to potential threats by inviting people to discover flaws they might not otherwise find, and exposing intellectual property belonging to either the company or its partners.

“For example, if we use SAP in the HR space and people are coming in to see how we’re doing recruiting and hiring and promoting within our HR systems using automated algorithms, are we exposing any of SAP’s confidential information that we don’t have permission to expose?” he said. 

Customer confidentiality in a situation where a bias bounty might need the use of live data is another concern. “It’s going to be tricky how you make sure that you’re protecting the confidentiality of the data that, in a sense, you’re wanting to cleanse, but at the same time, you need to make sure that you’re not violating any privacy rights of any of your employees or customers,” Cassat said.


Striking the Balance Between Safety and Tackling Bias

Small companies do have some potential strategies they can consider to sidestep the risks of opening themselves up to the public. One is to hire third-party coders to act as bias editors who could review a company’s algorithms. This would have the benefit of external perspectives without many of the risks of going to the public the way Twitter did.

Internal efforts at identifying algorithmic bias are another way companies could get around concerns related to public bias bounties, according to Cassat. These could range from something as simple as a system for reporting bias, or awards for employees who identify and report it, to modeling bias hunting on security hackathons instead of bug bounties.

This “red-teaming” approach to tackling bias in AI was addressed in a 2020 report from AI researchers and practitioners. While it did recommend that organizations developing AI “should run red-teaming exercises to explore risks associated with systems they develop,” the report also pointed out that existing red-teaming approaches are insufficient. Instead, it recommended the development of a “community of AI red-teaming professionals,” though it acknowledged that such a situation could inspire many of the same concerns voiced about public bias bounties.

From a legal perspective, Cassat called it a balancing act between the potential risks associated with identifying bias and the benefits of doing so. But he also noted that growing consumer pressure for more equitable technologies has increased the focus on social responsibility at the board level, which he expects will drive activity around reducing algorithmic bias.

“I don’t know the extent to which companies will want to continue to kind of crowdsource these solutions,” he said, adding that promoting diversity is good business. “And it’s the right thing to do.”
