What does it mean to be intelligent? Does it mean winning a game of chess? Translating languages? How about recognizing the emotions on a person’s face? Or writing a piece of fiction? As artificial intelligence grows more advanced, the threshold for what counts as true intelligence becomes ever harder to define.
Experts insist that these machines aren’t as intelligent as humans — at least not yet. Strong AI, or artificial intelligence capable of learning and thinking the way humans do, hasn’t arrived yet. But it certainly seems to be on the horizon.
Strong AI vs. Weak AI
“I think we’re a lot closer than everybody knows,” James Rolfsen, the global head of data analytics at last-mile delivery startup Rappi, told Built In. “Some people say we’re 20 years out. Some people say we’re 10 years out. I think we’re within 10 years.”
Until then, we will have to settle for the type of artificial intelligence that exists today: weak AI, which operates under far more constraints and limitations than even the most basic human intelligence in order to perfect very specific tasks.
Make no mistake though, weak AI is anything but weak. It has transformed virtually every industry, from healthcare to education to sports. And it is setting a foundation for AI’s potential to not only mimic and match human intelligence, but far exceed it.
What Is Weak AI?
Weak AI has many names. Rolfsen prefers the term “specialized AI” due to its ability to perform very specialized tasks — much of the time even more successfully than humans.
Kathleen Walch, a managing partner at Cognilytica’s Cognitive Project Management for AI certification and co-host of the popular podcast AI Today, prefers the term “narrow AI.” The word “weak,” she told Built In, “implies that these AI systems aren’t powerful and are not able to perform useful tasks, which is not the case. In fact, all of the current applications of AI we currently have fall into the category of narrow AI.”
Weak AI focuses on a specific task, operating under far more constraints than even the most basic human intelligence in order to perfect that task and perform it even better than humans. Its limited functionality allows it to automate that specific task with ease, and its narrow focus has allowed it to power many technological breakthroughs in just the last few years.
Indeed, weak AI is easily the most successful realization of AI to date. Two of the four types of artificial intelligence fall under its umbrella: reactive machines and limited memory machines. Reactive machines are the most fundamental kind of AI in that they can respond to immediate requests and tasks, but aren’t capable of storing memory or learning from past experiences. Limited memory is the next step in AI’s evolution, which allows machines to store knowledge and use it to learn and train for future tasks.
Examples of Weak AI
One of the first, most famous examples of weak AI is Deep Blue, a computer created by IBM that beat world chess champion Garry Kasparov in a six-game match in 1997 (Kasparov won their first match a year earlier). Deep Blue was able to choose from hundreds of millions of moves, and managed to “see” 20 moves ahead of its opponent — a feat no human to date has been able to achieve.
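The lookahead that made Deep Blue famous rests on minimax search: assume your opponent always answers with their best move, and pick the branch that guarantees you the best worst-case outcome. Here is a minimal sketch over a hypothetical two-ply game tree (the tree and its scores are invented for illustration; Deep Blue’s real search added alpha-beta pruning, handcrafted evaluation functions and custom hardware):

```python
# Illustrative minimax search over a toy game tree: each internal node is a
# list of children, each leaf an integer score from the maximizer's viewpoint.
def minimax(node, maximizing=True):
    if isinstance(node, int):        # leaf: return its static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Hypothetical two-ply tree: the maximizer picks a branch, then the
# minimizer picks the worst (for the maximizer) leaf inside that branch.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree))  # 3: the branch [3, 5] guarantees a score of at least 3
```

Searching 20 moves ahead is the same recursion carried 20 plies deep, which is why the number of positions examined explodes into the hundreds of millions.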
“Just the idea of being able to get a machine to actually do any task, even if it was a specialized task, in a way which was superior to human intelligence was mind blowing to people at the time,” Rolfsen said. Now, 25 years later, narrow AI has pervaded virtually every part of daily life.
- Spotify shuffle
- Email spam filters
- Smart assistants like Siri, Alexa and Cortana
- Self-driving cars
- Google Maps
- Apple autocorrect
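The spam filter in that list is one of the oldest narrow-AI workhorses, and many classic filters boil down to a naive-Bayes-style word score. A minimal sketch (the word frequencies here are invented; a real filter learns them from millions of labeled messages):

```python
import math

# P(word | spam) and P(word | ham), with a small floor for unseen words.
# These numbers are made up purely for illustration.
spam_freq = {"free": 0.30, "winner": 0.20, "meeting": 0.01}
ham_freq  = {"free": 0.02, "winner": 0.01, "meeting": 0.25}
FLOOR = 0.001

def spam_log_odds(message):
    """Sum of per-word log-likelihood ratios; positive means 'looks like spam'."""
    score = 0.0
    for word in message.lower().split():
        p_spam = spam_freq.get(word, FLOOR)
        p_ham = ham_freq.get(word, FLOOR)
        score += math.log(p_spam / p_ham)
    return score

print(spam_log_odds("free winner") > 0)    # True: flagged as likely spam
print(spam_log_odds("meeting today") > 0)  # False: looks like normal mail
```

The point is the narrowness: the filter is superhuman at sorting one mailbox and utterly incapable of anything else.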
Smart assistants like Siri, Alexa and Cortana are able to set reminders, search for online information and control the lights in people’s homes thanks to weak AI and its ability to collect users’ information, learn their preferences and improve their experiences based on prior interactions. Self-driving cars use weak AI’s deep neural networks to detect objects around them, determine their distance from other cars, identify traffic signals and much more. And every time you shop on Amazon, or doomscroll through your Facebook newsfeed, everything you see on the page is personalized, with the help of data and AI, to steer you toward the actions the company wants you to take.
“People have no idea how much of their lives is actually governed by weak AI, or specialized AI. They just have no idea,” Rolfsen said. And there’s a reason it’s so widespread. “Weak AI is profitable AI,” he continued. “This is how companies make money, this is how companies have an edge over other companies nowadays.”
What Is Strong AI?
Like weak AI, strong AI has another name: artificial general intelligence, or AGI. This is artificial intelligence that is capable of behaving and performing actions in the same ways human beings can. AGI mimics human general intelligence, and is able to solve problems and learn new skills in ways similar to our own.
“The more an AI system approaches the abilities of a human being, with all the intelligence, emotion, and broad applicability of knowledge, the more ‘strong’ the AI system is considered,” Walch said.
Strong, or general, artificial intelligence can generalize knowledge and apply that knowledge from one task to another, plan ahead according to current knowledge and adapt to an environment as changes occur, she added. “Once a system is able to do all of this, it would be considered AGI.”
Examples of Strong AI
Because it doesn’t actually exist yet, the only true examples of AGI are found in works of science fiction like Star Trek: The Next Generation, WALL-E and Her — and most of the time they either depict a utopian version of this technology, or a dystopian one.
“The dream of strong AI,” Rolfsen said, “is this idea of building an intelligent system which is capable of mastering not just a prespecified skill or prespecified objective, but of actually engineering a system that can dynamically adapt to any decision-making environment.”
In practice, this would, of course, be an absolute game-changer for humanity. A computer with artificial general intelligence could scan all the world’s knowledge housed on the internet to solve some of the world’s most pressing problems, or even predict and address them before they come into existence in the first place.
The Promise and Perils of Strong AI
But the prospect of strong AI becoming so advanced that it surpasses human intelligence and capabilities also breeds quite a bit of fear. Some of these fears include AGI taking over the world, or that all our data and privacy will be lost to AI. And then there’s the ongoing debate over whether all of our jobs will be replaced by robots.
Walch insists that many of the fears people have regarding AI are “emotional more than [they are] rational.” However, she added, there are some valid concerns that need to be dealt with, including lack of transparency (especially regarding deep learning), people misusing AI for nefarious purposes, bias, lack of security, and governments’ general ignorance when it comes to dealing with this technology legislatively.
“For data-driven projects, especially AI projects, organizations need to keep humans in the loop.”
Going forward, if people continue to feel threatened by increasingly sophisticated AI systems, there will be “major resistance,” according to Walch. So it’s important to prevent these worst fears from being fulfilled. To do this, Walch said, it’s important that every advancement in this space has a purpose, and to always keep the human impact in perspective. This has been outlined in the Cognitive Project Management for AI methodology, which is widely considered to be a best practice in AI and ML projects.
“If you’re not solving a real problem for your organization then you probably shouldn’t be doing the project in the first place,” Walch said. “For data-driven projects, especially AI projects, organizations need to keep humans in the loop. By doing this, they will ensure that AI projects stay on scope.”
Tech That’s Close to Having Strong AI
Again, AI that is fully capable of learning, thinking and behaving like a real person doesn’t exist. Still, today’s AI is capable of incredible things, some of which even experts don’t fully understand. And there are some remarkably sophisticated systems out there now that are approaching the AGI benchmark.
One of them is GPT-3, an autoregressive language model designed by OpenAI that uses deep learning to produce human-like text. In other words: it’s an “autocomplete algorithm on steroids” as Rolfsen puts it. GPT-3 is not intelligent, per se, but it has been used to create some extraordinary things, including a chatbot that lets you talk to historical figures and a question-based search engine. “These systems are capable of incredibly, incredibly complex logic that was absolutely impossible two years ago,” he said.
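“Autoregressive” simply means the model generates text one token at a time, each choice conditioned on everything generated so far. The loop below sketches the idea with a toy, hand-written bigram table and greedy decoding (always take the most probable next word); GPT-3 does conceptually the same thing, but conditions a 175-billion-parameter neural network on thousands of prior tokens. The table here is entirely invented:

```python
# Hypothetical next-word probabilities, keyed by the previous word.
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "sat": {"down": 1.0},
    "dog": {"ran": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt, steps):
    """Autoregressive loop: append the likeliest next word, then repeat."""
    tokens = prompt.split()
    for _ in range(steps):
        dist = bigram_probs.get(tokens[-1])
        if dist is None:                    # no known continuation: stop early
            break
        tokens.append(max(dist, key=dist.get))  # greedy decoding
    return " ".join(tokens)

print(generate("the", 3))  # "the cat sat down"
```

Swap the lookup table for a giant learned network and greedy decoding for weighted sampling, and you have the “autocomplete algorithm on steroids” Rolfsen describes.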
Another is MuZero, a computer program created by DeepMind that has managed to master games it was never taught how to play, including chess and an entire suite of Atari games, by playing them millions of times and learning from the results.
“[It’s] able to use the same intelligent system on all of these different environments, and the system is able to learn all these things automatically, without being engineered for any one of them in particular. That in itself is a mind-blowing concept,” Rolfsen said. “I would argue strongly that there is a degree of generalized intelligence which has been engineered into that system.”
Strong AI vs. Weak AI vs. Superintelligence
So, if weak AI automates specific tasks better than humans, and strong AI thinks and behaves with the same agility of humans, you may be wondering where artificial intelligence can go from there. And the answer is: superintelligence.
A superintelligent machine would be completely self-aware, surpassing human intelligence in practically every way. And, like artificial general intelligence, it is complete science fiction (for now).
Nick Bostrom, founding director of Oxford’s Future of Humanity Institute, appears to have coined the term back in 1998, and predicted that we will have achieved superhuman artificial intelligence within the first third of the 21st century. In some ways, Rolfsen said, it is already among us in the form of weak AI.
Deep Blue, for example, can play chess better than any human, making it a form of very specific superintelligence. But it’s still not generalized intelligence. Therefore, “we know that Deep Blue is not going to take over the world and enslave humans. We know that’s not going to happen because all Deep Blue knows how to do is play chess,” Rolfsen said.
But once we accomplish generalized superintelligence, then we’re getting into the possibility of computers potentially outwitting humans in every way — something truly nightmarish or incredibly exciting, depending on how you look at it. Either way, “it’s definitely going to happen,” according to Rolfsen.
How Will We Know When We’ve Achieved Strong AI and Superintelligence?
If and when superintelligence — or even true artificial general intelligence — is achieved, how will we know when we’ve encountered an example of it? The answer is more complicated than you might think.
For a long time, a machine’s intelligence was evaluated with the Turing Test. Developed by Alan Turing in 1950, the test puts a human, a computer and an interrogator in a conversational setting. If the interrogator can’t distinguish between the human and the computer, then the computer has managed to convince the interrogator of its “humanness,” Rolfsen explained, thus rendering its intelligence indistinguishable from a human’s.
Nowadays, some very advanced chatbots can seemingly pass the Turing Test, despite being pieces of weak AI and, therefore, not as intelligent as humans. Walch said some experts believe that “once an AI system is able to really tell a joke, then you’ll know that it’s crossed over to AGI territory.”
“The immense flexibility of the human mind is what captivates us and makes us believe that humans are different from machines.”
For Rolfsen, the standards are a bit higher. Generalized intelligence — our ability to adapt to unfamiliar environments using context and previous knowledge — is “one of the things that makes humans magical beings,” he said. We’re not going to just hand that out to any old machine.
“The immense flexibility of the human mind is what captivates us and makes us believe that humans are different from machines,” Rolfsen said. “We’re holding all of these AI systems to a much higher standard than we even realize because we’re holding them to our highest standards [of ourselves].”
For instance, before Deep Blue, many other AI systems had managed to beat most humans at chess. But the general public wasn’t ready to call those other computers intelligent because, “most humans kind of suck at chess,” as Rolfsen put it. It took Deep Blue beating Garry Kasparov in that 1997 chess match for people to acknowledge that these systems could be considered intelligent in any sense.
Now, as we continue to attempt to capture and digitize all the complexities of human intelligence, we’re not only pushing the limits of what it means to be artificially intelligent, but also what it means to truly be human.
“I’ve actually learned more, a lot more, about humanity and what it means to be a human being by going through the process and attempting to replicate human decision making and human interaction,” Rolfsen said. “By working so intensely and advancing so quickly in our technology around AI, we’re actually learning so much more about what makes us human.”