What Is Black Box AI?

Artificial intelligence often makes decisions we don’t understand.

Written by Ellen Glover
Updated May 22, 2024

Black box AI refers to artificial intelligence systems whose internal mechanics are unclear or hidden from users and even developers. These AI models take in inputs and produce outputs, but the logic and data used to reach those results are not accessible, making it difficult — or even impossible — to fully see how they operate.

“You can see what goes into it, you can see what comes out, but you can’t pop it open to see the inner workings,” said Stephanie Pulford, head of data at AI company ON.

What Is Black Box AI?

Black box AI is a term used to describe artificial intelligence systems whose internal workings and decision-making processes are not transparent. They provide no explanation as to how they arrive at their conclusions.

The opposite of black box AI is explainable AI, which is a set of principles and processes that seek to help users understand how AI models arrive at their outputs. AI researchers and government officials around the world are taking steps to make AI systems more explainable and transparent, but there is still a lot of work to be done.


The Significance of Black Box AI

Black box models play an important role in the AI industry and society, offering remarkable predictive and decision-making capabilities. And yet, their lack of transparency and interpretability has raised concerns about their reliability and fairness.

They Are Commonplace and Highly Capable

Black box AI models power everything from the facial recognition software used to unlock our phones to the hiring algorithms used to screen job candidates. Because black box AI models can process and analyze large amounts of unstructured data, they’re especially adept at complex tasks like speech recognition, predictive analytics and natural language generation.

They’re the reason why AI voice assistants like Alexa and Google Assistant can understand what users are saying to them, and why chatbots like ChatGPT and Gemini are so good at carrying on fluent conversations.

Every day, these systems are used to detect anomalies in medical treatments, flag suspicious charges in a person’s bank account and power self-driving cars.

… But They Come With Transparency Concerns

Herein lies one of the AI industry’s biggest conundrums: The more these notoriously opaque models are used in daily life, the more people demand to understand how they work. After all, if AI can be used to deny someone a loan or screen someone out of a job, it stands to reason they’d want to know why the decision was made.

This increased scrutiny of black box AI has led to a push in the research and development of explainable AI — a field that seeks to understand how AI models arrive at their outputs, with the goal of making AI more trustworthy, accurate and fair. A robust solution is still years away, but several promising techniques have emerged. Some researchers tinker with inputs to see how they affect a model’s output in order to further illuminate its decision-making process. Others are using specialized algorithms to probe a model’s behavior. A team at Anthropic, the creator of AI chatbot Claude, is reverse engineering large language models and mapping out their neural networks to better understand why they come up with specific outputs — claiming significant progress. 
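Input-tinkering of this kind can be sketched in a few lines of code. The example below is a rough illustration rather than any specific research method: it repeatedly shuffles one input feature at a time of a generic black-box `predict_fn` and measures how far the predictions move, on the assumption that features whose shuffling moves the output most are the ones the model leans on. The function and data names are hypothetical.

```python
import numpy as np

def perturbation_importance(predict_fn, X, n_repeats=10, seed=0):
    """Probe a black-box model by shuffling one input feature at a time
    and measuring how far its predictions move. The model is treated as
    an opaque callable: we only ever see inputs and outputs."""
    rng = np.random.default_rng(seed)
    baseline = predict_fn(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        shifts = []
        for _ in range(n_repeats):
            X_perturbed = X.copy()
            rng.shuffle(X_perturbed[:, j])  # break the link between feature j and the output
            shifts.append(np.mean(np.abs(predict_fn(X_perturbed) - baseline)))
        importances[j] = np.mean(shifts)  # larger shift = the model leans harder on this feature
    return importances

# Hypothetical usage: a loan-scoring model we can only query, never open up.
# importances = perturbation_importance(loan_model.predict, applicant_features)
```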

Meanwhile, the United States, European Union and other jurisdictions are creating regulatory frameworks that call for AI to be more understandable and interpretable, particularly when it’s used in high-stakes sectors like healthcare, finance and criminal justice. This will likely spur the development of more explainable AI, said Uri Yerushalmi, chief AI officer at AI company Fetcherr.

“Companies will need to comply with new standards focusing on ethical and clear AI practices,” Yerushalmi told Built In.

Related Reading: What Is Responsible AI?

How Do Black Box AI Models Work?

The enigmatic nature of black box AI models can be attributed to their use of deep learning, a type of artificial intelligence that enables models to independently learn and continuously improve without any explicit programming. 

To achieve this, deep learning systems rely on artificial neural networks, which mimic the structure and function of the human brain through a series of interconnected nodes, called neurons. When data is fed into the model, it is processed through layers of these neurons, which perform increasingly complex calculations to identify patterns in the data. We know that the patterns identified by the models help them make predictions or decisions, but the specific steps used to arrive at those outputs can be hard to decipher.
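To make the "layers of neurons" idea concrete, here is a minimal NumPy sketch of a single forward pass through a tiny network. The layer sizes and random weights are illustrative assumptions; the point is that the learned parameters are just arrays of numbers, which is why reading the model's logic back out of them is so hard.

```python
import numpy as np

rng = np.random.default_rng(42)

# A tiny fully connected network: 4 inputs -> 8 hidden neurons -> 1 output.
# In a real deep learning system these weights are learned from data rather
# than written by a programmer, which is why the logic is hard to read back out.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def forward(x):
    hidden = np.maximum(0, x @ W1 + b1)            # each hidden neuron mixes every input
    return 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # squash to a score between 0 and 1

print(forward(np.array([0.2, -1.3, 0.7, 0.05])))
# The output is just a number; nothing in W1 or W2 explains *why* the
# network produced it. Those weight matrices are the "black box."
```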

For example, if a deep learning model is trained to identify images of panda bears, it’s unclear exactly which characteristics the model is using to make its decision, and how much weight it assigns to each of them. The model may have learned to focus on certain physical features of the panda — like its distinctive black and white fur patterns — or it may prioritize other, less-obvious characteristics that indicate a panda is there, such as the presence of bamboo. If the model strictly associates bamboo with pandas, it may incorrectly classify images with bamboo as containing pandas, or fail to identify pandas in images without bamboo.
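One illustrative way to probe for a shortcut like the bamboo association is occlusion: cover up one region of the image at a time and watch how the model's "panda" score changes. The sketch below assumes a generic classifier exposed only through a `predict_fn` that returns class probabilities; the function name and interface are assumptions, not a real API.

```python
import numpy as np

def occlusion_sensitivity(predict_fn, image, patch=32, fill=0.0):
    """Slide a blank patch over the image and record how much the model's
    top-class score drops. Big drops mark the regions the model relies on,
    which helps spot shortcuts like 'bamboo means panda'."""
    h, w = image.shape[:2]
    base_score = predict_fn(image[None])[0].max()
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # hide this region
            heatmap[i // patch, j // patch] = base_score - predict_fn(occluded[None])[0].max()
    return heatmap  # high values = regions the prediction depends on
```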

“We can’t always trace why a particular answer was derived.”

“We can’t always trace why a particular answer was derived,” said Duncan Curtis, senior vice president of AI, product and technology at AI company Sama. “It could come from bias in the training dataset, it could come from things that aren’t necessarily obvious to a human.”

In contrast, white box AI models (also known as glass box AI) are transparent about how they arrive at their conclusions. These kinds of models often rely on decision trees, a type of algorithm that uses a branching method to map out every possible outcome of a decision.

Models that use decision trees or similar algorithms allow users to track every step of the decision-making process, making it easier to understand how they arrived at their conclusions, said Gaurav Rao, CEO of explainable AI company Howso. But, for models that use more complex deep learning algorithms like transformers or recommendation systems, “the decision-making process is much more challenging to follow.”
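To illustrate the contrast, the sketch below trains a shallow decision tree with scikit-learn and prints the exact rules it learned, the kind of step-by-step trace Rao describes. The built-in iris dataset and the depth limit are arbitrary choices for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a shallow tree: every split is a human-readable threshold on one feature.
data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The full decision logic can be printed and audited line by line,
# the kind of transparency deep black-box models lack.
print(export_text(tree, feature_names=list(data.feature_names)))
```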

Related Reading: Are You Sure You Can Trust That AI?


Black Box AI Advantages

While black box AI models pose several challenges, they also offer some unique advantages:

Higher Accuracy

Thanks to deep learning, black box AI models are typically able to process and analyze large volumes of complex data, and make predictions or decisions based on that data with a high degree of accuracy. Because of this, black box models often work better than more interpretable ones, according to Pulford.

“Some of our best algorithms require the computer to devise logic that’s not understandable by humans, but is still extremely effective — often more effective than we could get with a model we fully understood,” Pulford said. “The main incentive for using a black box model is that you often get good results fast.”

Good at Problem Solving 

Because black box AI models don’t process information the same way humans do, they can be effective at solving problems in new and innovative ways. They use complex algorithms to identify connections and relationships between large sets of data, allowing them to come up with novel solutions to problems that may not have ever been considered before.

“It’s really a complement to human thought, it’s not replicating human thought,” Pulford said. “It’s using a different kind of logic that we don’t have access to, and that often involves features that we can’t understand.”

Protects Intellectual Property

Using black box AI models can help developers protect their intellectual property by obscuring the details of their algorithms and training data, making it harder for competitors to replicate or reverse engineer their products.

“If I’m a provider that’s sharing my models more ubiquitously, I want to be able to protect my secret sauce,” Rao said. “From that perspective, there may be a commercial advantage for me to have a more black box model.”

Indeed, most of the models coming out of AI giants like OpenAI and Google are shrouded in secrecy — all the way down to their training data sources. At the same time, though, other companies like Meta, Mistral AI and xAI have released open source models, making their architectures, weights and, in some cases, training details more transparent than most other commercial models. While open source AI isn’t entirely explainable, it does “shed some light” on what’s happening inside a black box model, Curtis said.

Related Reading: How To Build Trust and Transparency in Your Generative AI Implementation


Black Box AI Challenges

Black box AI models inherently lack transparency and interpretability, which can raise concerns about their reliability, fairness and accountability.

Difficult to Trust

Because their decision-making processes are challenging to follow and understand, it can be hard to trust that a black box model will consistently provide accurate answers. 

This makes it difficult for users to rely on a model and make decisions based on its predictions or recommendations, said Matteo Bordin, VP of product at AI HR company Oyster. “If you have an AI agent that gives you a suggestion, and there is no way to understand why that AI agent is giving you that answer, it’s difficult for you to be sure that you are following the right kind of advice.”

Hard to Fix Errors

Although black box AI models are known for their accuracy, they are still capable of getting things wrong. When they do, it can be challenging for developers to fix those mistakes: because the models’ inner workings are inaccessible, it is difficult to trace errors or biases back to specific parts of their algorithms or training data.

These models can also be difficult to validate and test, as it can be hard to determine how exactly they will behave in different situations or under different conditions.

Lack of Accountability

The inherent opacity of black box AI models makes it hard to hold them accountable for the mistakes they make or the harms they cause.

This can be an especially big issue in high-stakes fields like healthcare, banking and criminal justice, where the choices these models make can significantly impact people’s lives. For instance, people of color seeking loans to purchase homes or refinance have been overcharged by millions of dollars by AI tools used by lenders. And several hiring algorithms used to screen job applicants have proven to be biased against people with disabilities and other protected groups. Facial recognition software used by some police departments has even been known to lead to false arrests of innocent people, particularly people of color.

Any implicit biases or errors created by black box models often go unchecked, as it can be difficult to hold individual people or organizations responsible for the judgments made by AI systems that cannot be fully explained.

Frequently Asked Questions

What is black box AI?

Black box AI describes artificial intelligence systems whose inner workings are not transparent to humans. The logic and data used to reach their outputs are not accessible, making how they operate unclear.

Is generative AI a black box?

Generative AI is a type of artificial intelligence used to create new content (images, text, music, etc.) based on patterns learned from training data. These models are typically considered black boxes, because it can be difficult to understand how they arrive at their outputs.

Is ChatGPT a black box?

Yes, ChatGPT can be considered a black box because its internal decision-making processes are not fully transparent or explainable, making it difficult to understand exactly how or why it generates specific outputs.
