Strong AI vs. Weak AI: What’s the Difference?

Artificial intelligence has three widely accepted classifications — only one of them is actually possible right now.

Written by Ellen Glover
Updated by Brennan Whitfield | Apr 02, 2024

As artificial intelligence grows more advanced, the threshold for what can be considered true intelligence becomes harder and harder to define.

Experts insist that these machines aren’t as intelligent as humans — at least not yet. The existence of strong AI, or artificial intelligence that is capable of learning and thinking like humans do, remains theoretical for now.

Until then, we will have to settle for the type of artificial intelligence that exists today: weak AI, which focuses on and excels at completing very specific tasks.

Strong AI vs. Weak AI

Weak AI, also called narrow AI, is capable of performing a specific task that it’s designed to do. Strong AI, on the other hand, is capable of learning, thinking and adapting like humans do. That said, strong AI systems don’t actually exist yet.

Below we dive deep into how strong AI and weak AI compare — and what we can expect from these technologies going forward.

 

What Is Strong AI?

Strong AI, also known as artificial general intelligence or AGI, is capable of behaving and performing actions in the same ways human beings can. AGI mimics human intelligence, and is able to solve problems and learn new skills in ways similar to our own.

“The more an AI system approaches the abilities of a human being, with all the intelligence, emotion, and broad applicability of knowledge, the more ‘strong’ the AI system is considered,” Kathleen Walch, a managing partner at Cognilytica, which runs the Cognitive Project Management for AI certification, and co-host of the AI Today podcast, told Built In.

Strong AI can generalize knowledge and apply that knowledge from one task to another, plan ahead according to current knowledge and adapt to an environment as changes occur, she added. “Once a system is able to do all of this, it would be considered AGI.”


 

Examples of Strong AI

Because it doesn’t actually exist yet, the only true examples of AGI are found in works of science fiction like Star Trek: The Next Generation, Wall-E and Her.

In practice, strong AI would be a game-changer for humanity. A computer with artificial general intelligence could scan all the world’s knowledge housed on the internet to solve some of the world’s most pressing problems, or even predict and address them before they come into existence in the first place.

“The dream of strong AI,” said James Rolfsen, the global head of data analytics at Rappi, “is this idea of building an intelligent system which is capable of mastering not just a prespecified skill or prespecified objective, but of actually engineering a system that can dynamically adapt to any decision-making environment.”

As for a timeline, Rolfsen added, “I think we’re a lot closer than everybody knows.”

The Promise and Perils of Strong AI

The prospect of strong AI becoming so advanced that it surpasses human intelligence and capabilities breeds quite a bit of fear, whether of AGI taking over the world or of all our data and privacy being lost to AI.

Walch insists that many of the fears people have regarding AI are “emotional more than [they are] rational.” However, she added, there are some valid concerns about AI that need to be dealt with, including lack of transparency (especially regarding deep learning), people misusing AI for nefarious purposes, bias, lack of security, and governments’ general ignorance when it comes to dealing with this technology legislatively.

Going forward, if people continue to feel threatened by increasingly sophisticated AI systems, there will be “major resistance,” according to Walch.

Are We Close to Having Strong AI?

While AI that is fully capable of learning, thinking and behaving like a real person doesn’t exist, there are some remarkably sophisticated systems out there approaching the AGI benchmark.

One of them is GPT-4, a multimodal large language model designed by OpenAI that uses deep learning to produce human-like text. GPT-4 is not intelligent, per se, but it has been used to perform some extraordinary feats, including passing the bar exam, understanding image-based jokes and coding playable video games in seconds.

Another is MuZero, a computer program created by Google DeepMind that has managed to master games it was never taught how to play, including chess and an entire suite of Atari games, by playing them against itself millions of times.


 

What Is Weak AI?

Weak AI is also known as specialized AI or narrow AI. It can perform very specialized tasks, often even more successfully than humans.

Weak AI is easily the most successful realization of AI to date. Two of the four functionality-based types of artificial intelligence fall under its umbrella: reactive machines and limited memory machines. Reactive machines are the most fundamental kind of AI in that they can respond to immediate requests and tasks, but aren’t capable of storing memory or learning from past experiences. Limited memory is the next step in AI’s evolution, which allows machines to store knowledge and use it to learn and train for future tasks.
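The contrast between the two functionality-based types can be sketched in a few lines of Python. This is a hypothetical toy, with made-up rules rather than anything from a real product: a reactive machine answers only the current request, while a limited memory machine also stores past interactions and lets them shape future responses.

```python
# Toy sketch of the two functionality-based types of weak AI.
# All rules and replies here are invented for illustration.

class ReactiveMachine:
    """Responds to the current request only; stores no memory of the past."""
    def respond(self, request: str) -> str:
        if "weather" in request:
            return "Here is today's forecast."
        return "Sorry, I can't help with that."

class LimitedMemoryMachine(ReactiveMachine):
    """Keeps a history of past requests and uses it to adjust future replies."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def respond(self, request: str) -> str:
        reply = super().respond(request)
        # "Learning" here is just counting repeats -- enough to show the idea.
        if self.history.count(request) >= 2:
            reply += " (You ask this often; want a daily reminder?)"
        self.history.append(request)
        return reply
```

Ask the limited memory version the same question a third time and the answer changes, because the stored history now informs the response; the purely reactive version replies identically every time.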

Weak AI’s limited functionality allows it to automate that specific task with ease, and its narrow focus has allowed it to power many technological breakthroughs in just the last few years.


 

Examples of Weak AI

One of the first and most famous examples of weak AI is Deep Blue, a computer created by IBM that beat world chess champion Garry Kasparov in a six-game match in 1997. Deep Blue could evaluate roughly 200 million chess positions per second and managed to "see" 20 moves ahead of its opponent, a depth of calculation no human to date has been able to achieve.
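The "looking ahead" that chess computers do rests on minimax search: score positions several moves deep while assuming the opponent always picks the reply that is worst for you. Deep Blue's real engine was vastly more sophisticated, but a toy sketch on an invented number game (every rule below is made up for illustration) captures the core idea:

```python
# Toy minimax search: the core "see N moves ahead" idea behind chess engines.
# The game here is invented: a position is just a number, a move either
# doubles it or adds one, and the score of a position is the number itself.

def minimax(state, depth, maximizing, moves, evaluate):
    """Best achievable score looking `depth` moves ahead, assuming the
    opponent always replies with the move that minimizes our score."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)  # leaf: just score the position
    scores = [minimax(s, depth - 1, not maximizing, moves, evaluate)
              for s in options]
    return max(scores) if maximizing else min(scores)

moves = lambda n: [n * 2, n + 1] if n < 20 else []  # legal moves from n
evaluate = lambda n: n                              # higher is better for us

best = minimax(1, 4, True, moves, evaluate)  # look 4 moves ahead from 1
```

Searching one move deeper multiplies the number of positions to examine, which is why "seeing" 20 moves ahead took Deep Blue's specialized hardware rather than clever software alone.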

  • Chatbots
  • Email spam filters
  • Smart assistants like Siri and Alexa
  • Self-driving cars
  • GPS and navigation apps like Google Maps
  • Autocorrect features in Apple or Samsung products
  • Spotify shuffle

We see examples of weak AI all around us. Smart assistants like Siri and Alexa can set reminders, search for information online and control the lights in people's homes thanks to their ability to collect users' information, learn their preferences and improve the experience based on prior interactions. Self-driving cars use weak AI's deep neural networks to detect objects around them, judge their distance from other cars, identify traffic signals and much more. And every time you shop on Amazon or scroll through your Facebook feed, everything you see is personalized, with the help of data and AI, to nudge you toward the actions the company wants you to take.

“People have no idea how much of their lives is actually governed by weak AI,” Rolfsen said.


 

Strong AI vs. Weak AI vs. Superintelligence

If weak AI automates specific tasks better than humans, and strong AI thinks and behaves with the same agility as humans, you may be wondering where artificial intelligence can go from there. The answer: superintelligence.

A superintelligent machine would be completely self-aware and would surpass human intelligence in practically every way. And, like artificial general intelligence, it is complete science fiction (for now).

Nick Bostrom, founding director of Oxford's Future of Humanity Institute, appears to have coined the term back in 1998, and predicted that we will achieve superhuman artificial intelligence within the first third of the 21st century. In some ways, Rolfsen said, it is already among us in the form of weak AI.

Deep Blue, for example, can play chess better than any human, making it a form of very specific superintelligence. But it’s still not generalized intelligence. Therefore, “we know that Deep Blue is not going to take over the world and enslave humans. We know that’s not going to happen because all Deep Blue knows how to do is play chess,” Rolfsen said.

But once we accomplish generalized superintelligence, then we’re getting into the possibility of computers potentially outwitting humans in every way — something truly nightmarish or incredibly exciting, depending on how you look at it. Either way, “it’s definitely going to happen,” according to Rolfsen.


 

How Will We Know When We’ve Achieved Strong AI and Superintelligence?

If and when superintelligence — or even true artificial general intelligence — is achieved, how will we know when we’ve encountered an example of it? The answer is more complicated than you might think.

For a long time, a machine's intelligence was evaluated with the Turing Test. Developed by Alan Turing in 1950, the test puts a human, a computer and an interrogator in a conversational setting. If the interrogator can't distinguish between the human and the computer, then the computer has managed to convince the interrogator of its "humanness," Rolfsen explained, rendering its intelligence indistinguishable from a human's.

Nowadays, some very advanced chatbots can seemingly pass the Turing Test, despite being pieces of weak AI and, therefore, not as intelligent as humans. Walch said some experts believe that “once an AI system is able to really tell a joke, then you’ll know that it’s crossed over to AGI territory.”

For Rolfsen, the standards are a bit higher. Generalized intelligence — our ability to adapt in unfamiliar environments using context and previous knowledge — is “one of the things that makes humans magical beings,” he said. We’re not going to just hand that out to any old machine.

“The immense flexibility of the human mind is what captivates us and makes us believe that humans are different from machines,” Rolfsen said. “We’re holding all of these AI systems to a much higher standard than we even realize because we’re holding them to our highest standards [of ourselves].”

For instance, before Deep Blue, many other AI systems had managed to beat most humans at chess. But the general public wasn't ready to call those other computers intelligent because, "most humans kind of suck at chess," as Rolfsen put it. It took Deep Blue beating Garry Kasparov in that 1997 chess match for people to acknowledge that these systems could be considered intelligent in any sense.

Now, as we continue to attempt to capture and digitize all the complexities of human intelligence, we’re not only pushing the limits of what it means to be artificially intelligent, but also what it means to truly be human.

“I’ve actually learned more, a lot more, about humanity and what it means to be a human being by going through the process and attempting to replicate human decision making and human interaction,” Rolfsen said. “By working so intensely and advancing so quickly in our technology around AI, we’re actually learning so much more about what makes us human.” 
