Led by moderator Megan Rose Dickey, a recent panel at TechCrunch Disrupt SF focused on dismantling algorithmic bias. In short, doing so is within the realm of mathematical possibility, but it is a time-intensive pursuit, according to panelist Brian Brackeen, CEO of the facial recognition startup Kairos.

The simplest definition of an algorithm, courtesy of Slate, is a set of guidelines that describe how to perform a task. Algorithms have infiltrated nearly every aspect of our lives, particularly the information that is made available to and about us. That reach makes it all the more imperative to remove the bias that inevitably informs how these algorithms are built.
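
To make that definition concrete, here is a deliberately trivial, hypothetical Python example (not something discussed on the panel): a fixed set of steps for ranking applicants. Even a rule this simple encodes choices about what counts.

```python
# A toy illustration of the definition above: an algorithm is an explicit,
# repeatable set of steps for performing a task. The task, names and rule
# here are purely hypothetical.
def rank_applicants(applicants):
    """Rank applicants by a single fixed criterion: credit score."""
    # Step 1: look up each applicant's score.
    # Step 2: sort applicants from highest to lowest score.
    # Step 3: return that ordering.
    return sorted(applicants, key=lambda a: a["credit_score"], reverse=True)

applicants = [
    {"name": "A", "credit_score": 640},
    {"name": "B", "credit_score": 710},
]
print(rank_applicants(applicants))  # B is ranked ahead of A
```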

Unbiased algorithms require accurate data on people of all demographics, across virtually every grouping imaginable. For now, algorithms are mostly skilled at identifying “pale males”; for other groups, such as women and people of other ethnicities, they falter, as Brackeen explained.
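
One rough sketch of what that implies in practice (this is illustrative, not Kairos's code) is a disaggregated evaluation: measuring a face-matching model's accuracy separately for each demographic group. The group labels and records below are hypothetical placeholders for real labeled test data.

```python
# Hedged sketch: compute a model's identification accuracy per demographic
# group. Large gaps between groups are the failure mode Brackeen describes.
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of dicts with 'group', 'predicted_id', 'true_id'."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["predicted_id"] == r["true_id"])
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation records, stand-ins for real labeled test data.
records = [
    {"group": "lighter-skinned men",  "predicted_id": 1, "true_id": 1},
    {"group": "darker-skinned women", "predicted_id": 7, "true_id": 3},
    {"group": "darker-skinned women", "predicted_id": 4, "true_id": 4},
]
print(accuracy_by_group(records))
# {'lighter-skinned men': 1.0, 'darker-skinned women': 0.5}
```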

Yet even a completely accurate model is not without pitfalls. The panel discussed the increased stakes in matters of liberty and freedom. An algorithmic error in a business case (e.g., a misidentified speaker at an event) is one thing, but the same mistake made by a government program could have grave consequences.

“They can be putting you in a lineup that you shouldn’t be in,” Brackeen told the panel. “They could be saying that this person is a criminal when they’re not.”

For these reasons, he asserted that law enforcement should never employ facial recognition.

Kristian Lum, the lead statistician for the Human Rights Data Analysis Group (HRDAG), pointed out that even a model generating 100 percent perfect mathematical predictions wouldn't be exempt from institutional bias.

“Usually, the thing you’re trying to predict in a lot of these cases is something like rearrest,” Lum said. “So even if we are perfectly able to predict that, we’re still left with the problem that the human or systemic or institutional biases are generating biased arrests. And so, you still have to contextualize even your 100 percent accuracy with, 'Is the data really measuring what you think it’s measuring? Is the data itself generated by a fair process?'”
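
Lum's distinction between predicting rearrest and predicting actual offending can be illustrated with a toy simulation (the rates below are invented, not from HRDAG): two neighborhoods offend at the same rate, but one is policed more heavily, so a model that perfectly predicts who ends up in arrest records still flags its residents far more often.

```python
# Hedged toy simulation of Lum's point: a predictor that is perfectly
# accurate on *arrest* data inherits whatever bias produced the arrests.
# All rates here are made up purely for illustration.
import random

def simulate(n=100_000, offense_rate=0.10):
    random.seed(0)
    # Same underlying offense rate in both neighborhoods, but neighborhood A
    # is policed more heavily, so its offenses are more likely to be recorded.
    arrest_prob = {"A": 0.5, "B": 0.2}
    flagged = {"A": 0, "B": 0}
    for _ in range(n):
        hood = random.choice(["A", "B"])
        offended = random.random() < offense_rate
        arrested = offended and random.random() < arrest_prob[hood]
        # A "100 percent accurate" model trained on arrest records flags
        # exactly the people who appear in that (biased) data.
        if arrested:
            flagged[hood] += 1
    return flagged

print(simulate())  # Neighborhood A is flagged far more, despite equal offending
```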

Patrick Ball, director of research at HRDAG, concurred with Lum, adding that the only way a fair machine learning system can be built is “in a society of perfect surveillance so that there is absolute police knowledge about every single crime so that nothing is excluded.”

Ruling that out as a chilling alternative, Ball suggested the answer may be less in machine learning and more in police reform.

“For fair predictions, you first need a fair criminal justice system,” he said. “And we have a ways to go.”
