What Is Black Box AI?

Artificial intelligence often makes decisions we don’t understand.

Written by Ellen Glover
Updated by Matthew Urwin | Aug 06, 2024

Black box AI refers to the issue of an artificial intelligence system’s internal mechanics being unclear or hidden from users and even developers. You may be able to view how an AI model takes in inputs and produces outputs, but the logic and data used to reach those results are not accessible, making it difficult — or even impossible — to fully see how they operate.

In other words, “you can see what goes into it, you can see what comes out, but you can’t pop it open to see the inner workings,” explained Stephanie Pulford, head of data at AI company ON.

The opposite of black box AI is explainable AI, which is a set of principles and processes that seek to help users understand how AI models arrive at their outputs.

What Is Black Box AI?

Black box AI is a term used to describe artificial intelligence systems whose internal workings and decision-making processes are not transparent, making it unclear how they arrived at their conclusions.

AI models that can be characterized as “black box AI” are embedded in our everyday lives. They power the facial recognition software used to unlock our phones, AI voice assistants like Alexa and Google Assistant, chatbots like ChatGPT and Gemini, and hiring algorithms used to screen job candidates.

 

How Do Black Box AI Models Work?

Many of today’s powerful AI models rely on deep learning, a type of artificial intelligence that enables models to independently learn and continuously improve without any explicit programming. 

Deep learning systems use artificial neural networks, which mimic the structure and function of the human brain. When data is fed into the model, it is processed through layers of interconnected nodes (called neurons) within the network that perform increasingly complex calculations, identifying patterns in the data. The models then use the identified patterns to make predictions and decisions. It’s difficult to decipher what specific steps are taken to arrive at those outputs, though, which is why we call these systems black box AI.
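To make that concrete, here is a minimal sketch of a forward pass through a tiny, randomly initialized network (an illustrative toy, not any specific product’s model): the input and output are visible, but the “reasoning” in between is just arrays of numbers produced by layered matrix math.

```python
# Toy feedforward network: data flows through layers of "neurons."
# The intermediate activations are plain arrays of numbers, which is
# why the logic behind the final output is hard to read off directly.
import numpy as np

rng = np.random.default_rng(0)

# Randomly initialized weights for a 4-input network with two hidden layers.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    h1 = np.maximum(0, x @ W1 + b1)            # layer 1: linear transform + ReLU
    h2 = np.maximum(0, h1 @ W2 + b2)           # layer 2: more of the same
    return 1 / (1 + np.exp(-(h2 @ W3 + b3)))   # output: a probability-like score

x = rng.normal(size=(1, 4))   # one input example with 4 features
print(forward(x))             # the output is visible; the "why" is spread across the weights
```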

  • Example: If a deep learning model is trained to identify images of panda bears, it’s unclear which characteristics the model uses to make its decision and how much weight it assigns to each of them. The model may have learned to focus on certain physical features of the panda — like its distinctive black-and-white fur patterns — or it may prioritize other characteristics that indicate a panda is there, such as the presence of bamboo. If the model strictly associates bamboo with pandas, it may incorrectly classify images with bamboo as containing pandas, or fail to identify pandas in images without bamboo.
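Here is a toy illustration of that bamboo shortcut, using made-up tabular features (a “fur pattern” score and a “bamboo present” flag) instead of real images, with scikit-learn’s MLPClassifier standing in for a black box classifier. The names and data are invented purely for illustration.

```python
# In this made-up training set, bamboo always co-occurs with pandas,
# so the model is free to lean on that shortcut. At test time, a panda
# photographed without bamboo can trip it up.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 200

is_panda = rng.integers(0, 2, n)
bw_fur = is_panda * 1.0 + rng.normal(0, 0.3, n)  # genuine (but noisy) panda cue
bamboo = is_panda.astype(float)                  # spurious cue: perfect in training
X_train = np.column_stack([bw_fur, bamboo])

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, is_panda)

# Test case: a panda with no bamboo in frame.
panda_no_bamboo = np.array([[1.0, 0.0]])
print(model.predict_proba(panda_no_bamboo))  # confidence may drop sharply
```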

“We can’t always trace why a particular answer was derived,” said Duncan Curtis, senior vice president of AI, product and technology at AI company Sama. “It could come from bias in the training dataset, it could come from things that aren’t necessarily obvious to a human.”

In contrast, white box AI models (also known as glass box AI) are transparent about how they arrive at their conclusions. These kinds of models typically rely on interpretable algorithms such as decision trees, which use a branching method to map out every possible outcome of a decision.

Models that use decision trees or similar algorithms allow users to track every step of the decision-making process, making it easier to understand how they arrived at their conclusions, said Gaurav Rao, CEO of explainable AI company Howso. But for models that use more complex deep learning architectures like transformers or recommendation systems, “the decision-making process is much more challenging to follow,” he said.
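For contrast, here is a minimal sketch of that traceability, using scikit-learn’s decision tree on the bundled iris dataset (chosen purely for illustration): every rule the model learned can be printed and read as plain if/else logic.

```python
# A small decision tree whose every decision step can be inspected,
# unlike the weight matrices of a deep neural network.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Every branch the model can take, as human-readable rules.
print(export_text(tree, feature_names=["sepal_len", "sepal_wid", "petal_len", "petal_wid"]))
```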

Related Reading: Are You Sure You Can Trust That AI?

 

Why Are Black Box AI Models Used? 

While black box AI models pose several challenges, they also offer some unique advantages:

Higher Accuracy

Thanks to deep learning, black box AI models are typically able to process and analyze large volumes of complex data, and make predictions or decisions based on that data with a high degree of accuracy. Because of this, black box models often work better than more interpretable ones, according to Pulford.

“Some of our best algorithms require the computer to devise logic that’s not understandable by humans, but is still extremely effective — often more effective than we could get with a model we fully understood,” Pulford said. “The main incentive for using a black box model is that you often get good results fast.”

Good at Problem Solving 

Because black box AI models don’t process information the same way humans do, they can be effective at solving problems in new and innovative ways. They use complex algorithms to identify connections and relationships between large sets of data, allowing them to come up with novel solutions to problems that may not have ever been considered before.

“It’s really a complement to human thought, it’s not replicating human thought,” Pulford said. “It’s using a different kind of logic that we don’t have access to, and that often involves features that we can’t understand.”

Protects Intellectual Property

Using black box AI models can help developers protect their intellectual property by obscuring the details of their algorithms and training data, making it harder for competitors to replicate or reverse engineer their products.

“If I’m a provider that’s sharing my models more ubiquitously, I want to be able to protect my secret sauce,” Rao said. “From that perspective, there may be a commercial advantage for me to have a more black box model.”

Most of the models coming out of AI giants like OpenAI and Google are shrouded in secrecy, all the way down to their training data sources. At the same time, other companies like Meta, Mistral AI and xAI have also put out open source models, making their algorithms, training data and decision-making processes more transparent than most other commercial models. While open source AI isn’t entirely explainable, it does “shed some light” on what’s happening inside a black box model, Curtis said.

Related Reading: How to Build Trust and Transparency in Your Generative AI Implementation

 

Issues With Black Box AI

Black box AI models inherently lack transparency and interpretability, which can raise concerns about their reliability, fairness and accountability.

Lack of Transparency 

Black box AI models are notoriously opaque, so it’s impossible to know how and why a model produces a certain output. This has become a growing issue as more people demand to understand how black box AI works. After all, if AI can be used to deny someone a loan or screen them out of a job, they ought to know why the decision was made.

This increased scrutiny of AI models has compelled the United States, the European Union and other jurisdictions to create regulatory frameworks that call for AI to be more understandable and interpretable, particularly when it’s used in high-stakes sectors like healthcare, finance and criminal justice. This will likely spur the development of more explainable AI, said Uri Yerushalmi, chief AI officer at AI company Fetcherr.

Companies have also joined the effort to develop more explainable AI. Most prominently, a team at Anthropic reverse-engineered large language models and mapped out their neural networks to better understand why they come up with specific outputs, claiming significant progress.

Difficult to Trust

Because their decision-making processes are challenging to follow and understand, it can be hard to trust that a black box model will consistently provide accurate answers. 

This makes it difficult for users to rely on a model and make decisions based on its predictions or recommendations, said Matteo Bordin, VP of product at AI HR company Oyster. “If you have an AI agent that gives you a suggestion, and there is no way to understand why that AI agent is giving you that answer, it’s difficult for you to be sure that you are following the right kind of advice.”

Hard to Fix Errors

Although black box AI models are known for their accuracy, they are still capable of getting things wrong. When they do, it can be challenging for developers to fix the mistakes, because the models’ inner workings are inaccessible, making it hard to pinpoint and correct errors or biases in their algorithms and training data.

These models can also be difficult to validate and test, as it can be hard to determine how exactly they will behave in different situations or under different conditions.

Lack of Accountability

The inherent opaqueness of black box AI models makes it hard to hold them accountable for the mistakes they make or the harms they cause.

This can be an especially big issue in high-stakes fields like healthcare, banking and criminal justice, where the choices these models make can significantly impact people’s lives. For instance, people of color seeking loans to purchase homes or refinance have been overcharged by millions of dollars by AI tools used by lenders. Several hiring algorithms used to screen job applicants have proven to be biased against people with disabilities and other protected groups. And facial recognition software used by some police departments has even been known to lead to false arrests of innocent people, particularly people of color.

Any implicit biases or errors created by black box models often go unchecked, as it can be difficult to hold individuals or organizations responsible for the judgments made by AI systems that cannot be fully explained.

More on Artificial Intelligence: Explore Built In’s AI Coverage

Frequently Asked Questions

What is black box AI?

Black box AI describes artificial intelligence systems whose inner workings are not transparent to humans. The logic and data used to reach their outputs are not accessible, making how they operate unclear.

Is generative AI black box AI?

Generative AI is a type of artificial intelligence that is used to create new content (images, text, music, etc.) based on patterns learned from training data. These models are typically considered to be black boxes, because it can be difficult to understand how they arrive at their outputs.

Is ChatGPT a black box?

Yes, ChatGPT can be considered a black box because its internal decision-making processes are not fully transparent or explainable, making it difficult to understand exactly how or why it generates specific outputs.
