When It Comes to AI, Don’t Get Trapped in the Magic Box

AI can be a powerful tool, but, ultimately, it’s just that: a tool. Make sure you understand its processes so that you’re the one in control of making product decisions.
Adam Thomas
Expert Columnist
August 27, 2020
Updated: September 2, 2020

Computers are not smart.

For me, the war against the myth of computer intelligence began in ninth grade, when a classmate started talking about how smart their computer was. The comment broke my mind. I immediately thought: How does someone look at a bunch of circuits and see intelligence?

Ascribing intelligence to a bucket of bolts is a cognitive error in itself. The computer isn’t thinking; it’s processing instructions. Let’s be clear: human intelligence structures those instructions and operates outside of them; computers can’t. Computers can, however, accelerate.

If we aren’t careful, technology accelerates damaging practices. As product people, our job is to be stewards of technology. We can’t afford to let that harmful acceleration happen. As Aimé Césaire insists in his Discourse on Colonialism, “no one colonizes innocently,” and “no one colonizes with impunity either.”

Tech accelerates. Either you accelerate harm reduction or you are, as Césaire noted, an active participant in accelerating harm to fellow human beings. As tech professionals, we have to be mindful of what we are doing. The world at large still hasn’t moved on from the ongoing colonial project of exploiting and extracting labor from colonized groups. If tech is going to live up to its ideal of democratization, it can’t just be in slogans.

And that’s what I’m doing here: helping you get better outcomes by teaching you to operate from better principles. We will avoid sloganeering in favor of actionable solutions that separate intelligence from process. Once a month, I will identify some technology from a product management perspective and then give you some markers to move away from bias to make informed decisions. The ultimate goal is to help you, as a PdM, increase your odds of success at product development.

Speaking of “intelligent computers,” now is a great time to talk about AI. With GPT-3 all the rage, fascination with AI continues to grow. That fascination has brought with it a bunch of ideas about how and why you should use AI to make product decisions in the name of efficiency.

The team behind the model trained 175 billion parameters on a huge corpus of text, much of it drawn from Wikipedia and pages shared on Reddit, and built a model that can predict text with stunning results. Granted, even though the team’s white paper does a good job of explaining the pros and cons of using the model, people haven’t stopped falling into the same trap as my classmate: treating the predictions as a form of intelligence instead of what they truly are, a well-trained guess.

With so many people currently interested in the model and AI more generally, I want to warn against the perils of what I call the “magic box.” If you don’t deal with this problem now, you’ll find yourself in the middle of cognitive hell inside of your product development.


 

The Magic Box

If you’ve ever looked at some AI process and wondered, “What’s happening in there?” and no one can answer your question, you might find yourself in the middle of what I call the “magic box” fallacy. This happens when someone feeds input into a tool/object/person and simply trusts the output without understanding the process.

A report from Oxford University, “AI @ Work,” highlights this fallacy in action. The authors identify three issues that have emerged from AI work:

1) Integration challenges happen when settings are not yet primed for AI use, or when these technologies operate at a disjoint between workers and their employers.

2) Reliance challenges stem from over- and under-reliance on AI in workplace systems.

3) Transparency challenges ... arise when the work required by these systems — and where that work is done — is not transparent to users. 

The report does a great job of exposing the “magic box” that AI can be, especially when it comes to AI and worker agency (i.e., you) with respect to decision making. The case studies in the report show the damage that happens when companies either waste their resources on wild goose chases or build unscalable systems.

Remember, tech accelerates. 

I want to focus on the report’s second conclusion, since it is a pretty good definition of how the “magic box” operates in the world of AI. Specifically, I want to look at how to identify these magic boxes in your company, how they affect your product development, and how you can avoid falling into the trap.

 

Identifying Magic Boxes

The easiest way to see if your company has “magic box” thinking is to look at its decision fitness. How often do people check whether their decisions are aligned with their goals? If asking the question, “How do we know this AI process is working to serve our customers?” brings nothing but strange looks, I can assure you that the AI is a magic box.

If people aren’t looking confused, ask further whether there is some sort of process for checking that the major decisions the AI is making track with the expected behavior. Does that sound like a lot of work? Well, here is the alternative.
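Such a process can start small. Here is a minimal, hypothetical sketch of a decision-fitness check: sample some of the AI’s logged decisions, have a human reviewer label what the call should have been, and flag the system when agreement drops below a threshold. The `Decision` fields, labels, and threshold here are all illustrative assumptions, not anything prescribed by the Oxford report.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    ai_choice: str        # what the AI system decided
    reviewer_choice: str  # what a human reviewer says it should have been

def decision_fitness(decisions, threshold=0.9):
    """Return (agreement rate, whether the AI needs attention)."""
    if not decisions:
        return 0.0, True  # no evidence yet: treat the AI as unverified
    agree = sum(d.ai_choice == d.reviewer_choice for d in decisions)
    rate = agree / len(decisions)
    return rate, rate < threshold

# Spot-check a small sample of logged decisions.
sample = [
    Decision("approve", "approve"),
    Decision("approve", "reject"),
    Decision("reject", "reject"),
    Decision("approve", "approve"),
]
rate, flagged = decision_fitness(sample)  # 0.75 agreement, below threshold
```

The exact threshold matters less than the habit: if nobody can produce even a rough number like `rate`, the AI is a magic box.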

 

How Magic Boxes Ruin Your Product Development

One thing I appreciate about “AI @ Work” is how direct the report is when it comes to the harm AI systems can create when they run unchecked. Here are a few patterns you may recognize.

Optimization Bias: People, especially laypeople, believe the AI is always right without investigating it. Users blindly trust the AI, including the people inside the company who are affected by it. Once the AI is set on a path, it becomes a “dumb missile” and will optimize without context. This structure is dangerous because, if the numbers go up and to the right, so to speak, we have a tendency to mistake motion for progress. People will base their decisions on what the AI does. We exhibit bias toward the machine because it doesn’t have pesky feelings, even though it may be making bad decisions.

Simplicity Bias: AI flattens decisions, reducing them to a choice between narrow paths. The more “open” the decision, the worse the AI gets. The GPT-3 paper does a great job of demonstrating this in its section on arithmetic: the model is fine with adding two numbers; add a third, and it goes haywire. If you are using AI to make decisions, it’s critical that you know what it’s deciding and whether it’s oversimplifying.

Customer Disconnect: Your customers interface with the AI and stop talking with you completely. Combine this with the first two biases, and you’ll realize how big of a deal this is. As product people, our job is to be proxies for the customer so we can ensure alignment in product development. “Magic box” AI takes us away from that.

 

Avoiding the Magic Box

Let’s circle back to who is responsible for outcomes.

If no one is raising their hand, well, product manager, this one is on you. As the person responsible for alignment in an organization, aligning the use of a tool (AI) with an output (business outcomes) pretty much falls to you.

So what can you do?

  • Assume AI is the dumb missile that it is. This is where the concept of human in the loop is so important. A human being must be somewhere in any AI process. In fact, the human-plus-AI combination, colloquially called an AI “centaur,” has been shown to outperform either an AI or a human alone. Act as that human. Help steer your AI systems so you can have higher confidence they’ll hit their target. If you don’t, you’ll more than likely be optimizing for the wrong thing.
  • Lay out the decisions the AI is making using a mapping process, such as journey mapping or service blueprinting. Both work by “mapping,” or laying out, the processes that feed important decisions, shining a light on which decisions your customers (journey mapping) or your business (service blueprinting) are making.
  • Talk to your customers on a regular basis. Continually keep an eye on customers using your AI service, and spot-check with user interviews of customers who have completed the process as well as those who have failed it (hello, Facebook and other tools). This will keep you honest, since AI is a tool for solving customer problems and nothing more.
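The first step above can be sketched in a few lines. This is a hedged illustration of the human-in-the-loop idea, assuming your AI exposes a confidence score alongside each decision; the function name, labels, and 0.8 threshold are assumptions for the sake of example, not a prescribed design.

```python
def route_decision(label, confidence, threshold=0.8):
    """Guardrail for the 'dumb missile': let high-confidence AI calls
    through automatically, and queue everything else for a human."""
    if confidence >= threshold:
        return ("auto", label)
    return ("human_review", label)

# The AI handles the confident calls; the human half of the centaur
# steers the ambiguous ones.
print(route_decision("likely_churn", 0.95))  # -> ('auto', 'likely_churn')
print(route_decision("likely_churn", 0.55))  # -> ('human_review', 'likely_churn')
```

Even a crude router like this keeps a person in the loop where the model is least sure, which is exactly where a dumb missile does the most damage.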

 

AI Is Good, but Don’t Let the Magic Box Win

As has often been said, there’s no such thing as a free lunch, and AI is no different. It’s not a perfect, drawback-free solution for your product development.

If you aren’t trying to understand how your tools serve the customer, and building operations to understand, improve, and align those needs with the business, you run a high risk of wasteful optimization and customer disconnect.

These things are all tools.

AI, when used properly, is a fantastic way to make your operations more efficient. When you don’t understand what you are using, you’ll fall into traps that can sink your business, including the “magic box” that will kill your operations.
