Inverness Caledonian Thistle FC, a Scottish soccer club more commonly known as Caley Thistle, plays in Scotland’s second-tier league in a stadium that holds a bit more than 7,000 people. This isn’t a wealthy team, and it doesn’t get much TV coverage. To give its fans more access to on-field action, the club bought a camera system from Pixellot, a company based in Israel.

Pixellot developed what it calls “AI-automated video.” Cameras mounted around the field are driven by AI to follow the action, so no camera operators are needed, which creates a broadcast-like experience while saving money. In soccer, Pixellot’s AI is trained to follow the ball, which helps make sure the camera always frames the action.

Once deployed at Caley Thistle’s stadium, the AI ran into an unintended consequence: the referee in one game was bald. The AI couldn’t tell the difference between the soccer ball and a bald head moving around the field. Pixellot told the press it could fix the problem with a tweak to the algorithm. Mistaking a head for a ball didn’t do much damage beyond making for some weird soccer viewing, but the story shows how AI can go awry in ways its creators never considered.

The ways AI can go wrong grow exponentially as the technology gets deployed in ever more complex ways. Consider language translation. AI is getting good at it, and many companies already offer AI translation tools, with Google leading the way with its service. There is no doubt AI can eventually obliterate language barriers: encounter someone speaking a language you don’t know, and you’ll be able to hold up your phone, put in some earbuds and listen to a simultaneous translation. But it’s an enormously complex challenge. These AIs are trained by ingesting hundreds of billions of words from sources ranging from dictionaries to YouTube videos and chatter on Reddit. Google has an advantage here because it sees countless billions of searches, emails and Google Docs from around the world. The volume of data is so enormous that no human could know everything in it.

The Dangers of Biased AI

We’re already seeing an unintended consequence with ramifications for inequality. Some languages, including English, don’t attach a gender to nouns; German, French and some other languages do. When translating from English into German or French, the AI must decide which gender to assign to an English noun. As a study by University of Cambridge researchers found, the translations tend toward stereotypes: a “cleaner” becomes feminine, an “engineer” masculine. The researchers found that the bias can be fixed by retraining the AI, but if such an AI were widely used before the problem was noticed, it could have caused real harm.
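
As a rough illustration of how this kind of bias can be surfaced, here is a minimal sketch of a probe that checks which grammatical gender a model assigns to occupation nouns. The `translate_en_to_de` callable is a stand-in for whatever translation model or API is being audited; nothing here is a real library call.

```python
# Minimal sketch of a gender-bias probe for English-to-German translation.
# The translate function passed in is a placeholder for the model under audit.

OCCUPATIONS = ["cleaner", "engineer", "nurse", "doctor", "teacher"]

# German definite articles reveal the grammatical gender the model chose.
GENDER_MARKERS = {"der ": "masculine", "die ": "feminine", "das ": "neuter"}


def probe_gender_bias(translate_en_to_de):
    """Translate 'the <occupation>' and record the gender the model picks."""
    results = {}
    for occupation in OCCUPATIONS:
        translation = translate_en_to_de(f"the {occupation}").strip().lower()
        gender = next(
            (label for article, label in GENDER_MARKERS.items()
             if translation.startswith(article)),
            "unknown",
        )
        results[occupation] = (translation, gender)
    return results


if __name__ == "__main__":
    # Toy stand-in "model" that mimics the stereotyped behavior described above.
    canned = {"the cleaner": "die Putzfrau", "the engineer": "der Ingenieur"}
    report = probe_gender_bias(lambda text: canned.get(text, "die Person"))
    for occupation, (translation, gender) in report.items():
        print(f"{occupation:10s} -> {translation:15s} [{gender}]")
```

Run over a long list of occupations, a probe like this makes the stereotype pattern visible before a translator is widely deployed.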

A botched soccer video or a biased language translator is a relatively mild consequence when you realize that AI is this era’s electricity – it’s eventually going to power just about everything. An AI’s creators can be diverse, have the best intentions and try to be diligent, yet they will inevitably miss something. Imagine the potential damage if an AI guiding someone’s healthcare goes wrong, or an AI operating a dam or helping weapons identify targets has a glitch.

Get Responsible or Get Regulated

In early 2021, the U.S. Federal Trade Commission warned businesses and health systems that discriminatory algorithms could violate consumer protection laws. “Hold yourself accountable — or be ready for the FTC to do it for you,” Elisa Jillson, an attorney in the FTC’s privacy and identity protection division, wrote in an official blog post. The FTC prohibits unfair or deceptive practices, which could include the use of racially biased algorithms, Jillson wrote. In 2019, Congress introduced the Algorithmic Accountability Act, a bill that would direct the FTC to develop regulations requiring large firms to conduct impact assessments of existing and new “high-risk automated decision systems” – in other words, AI. In the summer of 2021, the European Union was considering strict AI accountability rules. None of these proposals has yet been passed into law, but it’s obvious that companies must act quickly and responsibly or face regulation.

Yet companies aren’t doing enough. In the spring of 2021, Boston Consulting Group released a report noting that more than half of enterprises overestimate the capabilities of their responsible AI efforts, while just 12 percent have fully implemented a responsible AI program.

Businesses need a reliable approach to avoid unintended consequences of AI — a technology mechanism that works alongside the business model to support and ensure a company’s responsible mindset.

I call such an approach “algorithmic canaries.”

Algorithmic Canaries

An algorithmic canary is, as you might imagine, like the proverbial canary in a coal mine: it provides a warning that gives a company a chance to stop a disaster before it happens. It is AI built specifically to watch another AI – a software sentry that can do what humans never could by watching billions of data points and looking for patterns that suggest something is amiss. Startups need to put these canaries in place to make sure their AI is tracking with the company’s intentions, and to sound an alarm if the technology is causing harm or if others are hijacking the AI and using it in a harmful way (as when foreign actors exploited Facebook’s algorithms to influence elections). An AI canary can also keep an eye on all of a company’s business practices to help it stay responsible.

The technology to do this can be built. One example is Grover, from the Allen Institute for AI, an algorithm that can spot AI-generated fake news among real news reports and block it before it reaches a mass audience. In a machine version of “it takes one to know one,” a Brookings Institution study noted that because AI generates fake news, AI can also learn the quirks and traits a machine-written story displays. The study reported that Grover was 92 percent accurate at distinguishing human-written from machine-written news.
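
Grover itself is a large neural language model, so the sketch below is not its implementation. It only illustrates the general “it takes one to know one” setup, using off-the-shelf scikit-learn pieces and tiny placeholder corpora: a classifier trained on examples of human and machine writing, then used to score new articles.

```python
# Generic sketch of training a detector to separate human-written from
# machine-generated text. Not Grover's architecture; purely illustrative.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpora; a real detector needs large labeled collections.
human_articles = [
    "The council approved the budget after a lengthy public debate.",
    "Researchers published the trial results in a peer-reviewed journal.",
]
machine_articles = [
    "Officials confirmed the budget was confirmed by officials on budget day.",
    "The study results results were published by the published study.",
]

texts = human_articles + machine_articles
labels = ["human"] * len(human_articles) + ["machine"] * len(machine_articles)

# Word and bigram frequencies stand in for the stylistic quirks a real
# detector would learn at much larger scale.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score an unseen article before it reaches a mass audience.
candidate = "Officials confirmed the confirmed budget results."
print(detector.predict([candidate])[0])
```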

A variety of companies already build AI that can collect billions of data points and look for patterns that predict problems. Noodle.ai, for instance, developed AI that can monitor signals from machines in a factory, trucks along the supply chain, changes in weather and even news reports to predict what might go wrong in a manufacturer’s operations from raw material to shelf, so the manufacturer can keep everything flowing smoothly. New York-based Sprinklr builds AI that captures signals from a company’s customers everywhere they might be chatting – Twitter, chatbots, reviews, emails, calls to customer service – to help the company spot a problem with its product or brand and fix it before it gets worse.

If AI can already do all of those things, we can develop AI that can watch another AI for biases or bad actions.
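
As one concrete, deliberately simplified example of what such a watcher could look like, here is a sketch of a canary that monitors another model’s decisions and raises an alert when outcomes diverge sharply across groups. The `Decision` record, the parity threshold and the function name are illustrative assumptions, not an existing API.

```python
# Minimal sketch of an "algorithmic canary" that watches another model's
# decisions for signs of bias. All names are illustrative, not a real library.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Decision:
    group: str        # e.g. a demographic segment the decision applies to
    approved: bool    # the monitored model's outcome


def demographic_parity_alert(decisions, max_gap=0.10, min_samples=100):
    """Return a warning if approval rates diverge too far across groups.

    The canary never blocks the monitored model; it only raises an alarm
    when the gap between the best- and worst-treated groups exceeds max_gap.
    """
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d.group] += 1
        approvals[d.group] += d.approved

    rates = {
        g: approvals[g] / totals[g]
        for g in totals
        if totals[g] >= min_samples  # ignore groups with too little data
    }
    if len(rates) < 2:
        return None

    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        return f"Canary alert: approval-rate gap of {gap:.0%} across groups {rates}"
    return None
```

In practice the thresholds, the definition of “group” and the fairness metric would all be tuned to the domain; the point is that the canary watches continuously and only warns, leaving humans to decide what to do with the alarm.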

We need these AIs to anticipate unintended consequences, like those that arise when a business vacuums up data to monetize through advertising, and to identify second- or third-order consequences, such as what happens to a city when e-commerce forces numerous retail stores to close.

Ultimately, while the types of unintended consequences will vary company by company, the industry must begin to develop a collective approach to what algorithmic canaries should guard against. This requires understanding all the stakeholders the company touches.

ESG (environmental, social and corporate governance) is a useful starting point: it encourages companies to think about unintended consequences across environmental, social and governance issues. But algorithmic canaries need to go much further. Such AIs should look for consequences like misinformation campaigns, privacy intrusions, inequality, social isolation and racial discrimination.

The best practice is for founders to incorporate algorithmic canaries at the earliest stages of product development — to bake them into the product and the business. If you’re doing this retroactively, it’s probably too late. Taking a systems-design approach to responsibility, and clearly articulating and measuring it, allows engineering teams to embed canaries deeply into their technologies and track them as KCIs. In this way, companies can begin to measure what really matters beyond their own success: the potential unintended consequences of their technologies and their leaders’ responsibility to mitigate them.

* * *

Excerpt from Intended Consequences: How to Build Market-Leading Companies With Responsible Innovation by Hemant Taneja, pp. 98-104 (McGraw Hill, January 2022).
