If you’ve ever used Amazon’s Alexa, Apple’s Face ID or a chatbot, you’ve interacted with artificial intelligence (AI) technology.
AI research is full of ongoing discoveries and developments, most of which are grouped into distinct types. These classifications read more like a storyline than a taxonomy, one that can tell us how far AI has come, where it’s going and what the future holds.
These are the seven types of AI to know, and what we can expect from the technology.
7 Types of Artificial Intelligence
- Artificial Narrow Intelligence: AI designed to complete very specific actions; unable to independently learn.
- Artificial General Intelligence: AI designed to learn, think and perform at similar levels to humans.
- Artificial Superintelligence: AI able to surpass the knowledge and capabilities of humans.
- Reactive Machines: AI capable of responding to external stimuli in real time; unable to build memory or store information for future use.
- Limited Memory: AI that can store knowledge and use it to learn and train for future tasks.
- Theory of Mind: AI that can sense and respond to human emotions, plus perform the tasks of limited memory machines.
- Self-aware: AI that can recognize others’ emotions, plus has a sense of self and human-level intelligence; the final stage of AI.
Capability-Based Types of Artificial Intelligence
Based on how they learn and how far they can apply their knowledge, all AI can be broken down into three capability types: artificial narrow intelligence, artificial general intelligence and artificial superintelligence. Here’s what to know about each.
1. Artificial Narrow Intelligence
Artificial narrow intelligence (ANI), also known as narrow AI or weak AI, describes AI tools designed to carry out very specific actions or commands. ANI technologies are built to serve and excel in one cognitive capability, and cannot independently learn skills beyond their design. They often utilize machine learning and neural network algorithms to complete these specified tasks.
For instance, natural language processing AI is a type of narrow intelligence: it can recognize and respond to voice commands, but it cannot perform tasks beyond that.
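The single-task idea behind narrow AI can be sketched in code. Below is a toy, hypothetical command handler: it excels at exactly one designed task (mapping known phrases to responses) and fails at anything outside it. The command names and replies are invented for illustration only.

```python
# A toy illustration of narrow AI: a rule-based assistant that serves one
# cognitive task (matching fixed phrases) and cannot learn beyond its design.
# All commands and responses here are hypothetical.

COMMANDS = {
    "play music": "Starting your playlist.",
    "set a timer": "Timer set for 10 minutes.",
    "what's the weather": "It's sunny and 72 degrees.",
}

def narrow_assistant(utterance: str) -> str:
    """Respond only to the fixed commands the system was built for."""
    key = utterance.lower().strip("?!. ")
    # Any request outside the single designed task simply fails.
    return COMMANDS.get(key, "Sorry, I can't help with that.")
```

Asking it anything outside its design, such as `narrow_assistant("write me a poem")`, only ever yields the fallback reply, which is exactly what makes it "narrow."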
2. Artificial General Intelligence
Artificial general intelligence (AGI), also called general AI or strong AI, describes AI that can learn, think and perform a wide range of actions similarly to humans. The goal of artificial general intelligence is to create machines capable of performing multifunctional tasks and acting as lifelike, equally intelligent assistants to humans in everyday life.
3. Artificial Superintelligence
Artificial superintelligence (ASI), or super AI, is the stuff of science fiction. It’s theorized that once AI reaches the general intelligence level, it will soon learn at such a fast rate that its knowledge and capabilities will surpass even those of humankind.
ASI would act as the backbone technology of completely self-aware AI and other individualistic robots. The concept also fuels the popular media trope of “AI takeovers,” as seen in films like Ex Machina and I, Robot. But at this point, it’s all speculation.
“Artificial superintelligence will become by far the most capable forms of intelligence on earth,” said David Rogenmoser, CEO of AI writing company Jasper. “It will have the intelligence of human beings and will be exceedingly better at everything that we do.”
Functionality-Based Types of Artificial Intelligence
Functionality concerns how an AI applies its learning capabilities to process data, respond to stimuli and interact with its environment. As such, AI can be sorted by four functionality types.
4. Reactive Machines
AI began with the development of reactive machines, the most fundamental type. Reactive machines are just that: reactionary. They can respond to immediate requests and tasks, but they aren’t capable of storing memory or learning from past experiences.
“They cannot improve their functionality through experience, and can only respond to a limited combination of inputs.”
In practice, reactive machines can read and respond to external stimuli in real time. This makes them useful for performing basic autonomous functions, such as filtering spam from your email inbox or recommending movies based on your most recent Netflix searches.
Most famously, IBM’s reactive AI machine Deep Blue was able to read real-time cues in order to beat Russian chess grandmaster Garry Kasparov in a 1997 chess match. But beyond that, reactive AI can’t build upon previous knowledge or perform more complex tasks. In order to apply AI in more advanced scenarios, developments in data storage and memory management needed to occur.
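The spam-filtering example above can be sketched as a stateless function, which is the defining trait of a reactive machine: each message is judged on its own, and nothing is remembered between calls. The rule list below is hypothetical and purely illustrative.

```python
# A minimal sketch of a reactive machine: a stateless spam check that
# responds to each input in real time and keeps no memory between calls.
# The trigger phrases are hypothetical.

SPAM_SIGNALS = ("free money", "act now", "winner")

def is_spam(message: str) -> bool:
    """React to the current message only; no history is consulted or stored."""
    text = message.lower()
    return any(signal in text for signal in SPAM_SIGNALS)
```

Because the function holds no state, running it a thousand times teaches it nothing, illustrating why reactive AI can’t build on previous knowledge.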
5. Limited Memory
The next step in AI’s evolution is developing a capacity for storing knowledge. But it would be nearly three decades before that breakthrough was reached, according to Rafael Tena, senior AI researcher at insurance company Acrisure Innovation.
“There was a huge amount of progress in the 80s,” Tena said. But that eventually slowed. “There were small incremental changes … until deep learning came around.”
In 2012, the field of AI made major progress. New innovations from Google and ImageNet made it possible for artificial intelligence to store past data and make predictions using it. This type of AI is referred to as limited memory AI, because it can build its own limited knowledge base and use that knowledge to improve over time. Today, the limited memory model represents the majority of AI applications.
“Nearly all existing applications that we know of come under this category of AI,” Rogenmoser said. “All present-day AI systems are trained by large volumes of training data that they store in their memory to form a reference model for solving future problems.”
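That idea of storing training data as a reference model for future problems can be sketched as a toy 1-nearest-neighbor classifier: the system keeps past labeled examples in memory and answers new queries by consulting them. The data points and labels below are hypothetical.

```python
# A toy sketch of limited memory AI: past experience is stored and used
# as a reference model for future predictions (1-nearest neighbor).
# The labeled points are hypothetical.

class LimitedMemoryClassifier:
    def __init__(self):
        self.memory = []  # stored (features, label) pairs

    def learn(self, features, label):
        # Unlike a reactive machine, past experience is retained.
        self.memory.append((features, label))

    def predict(self, features):
        # Answer by consulting the closest stored example.
        def distance(example):
            stored, _ = example
            return sum((a - b) ** 2 for a, b in zip(stored, features))
        return min(self.memory, key=distance)[1]

clf = LimitedMemoryClassifier()
clf.learn((1.0, 1.0), "cat")
clf.learn((9.0, 9.0), "dog")
```

Each call to `learn` grows the knowledge base, so predictions can improve over time, which is exactly what separates limited memory AI from a purely reactive machine.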
6. Theory of Mind
In terms of AI’s progress, limited memory technology is the furthest we’ve come — but it’s not the final destination. Limited memory machines can learn from past experiences and store knowledge, but they can’t pick up on subtle environmental changes or emotional cues, nor can they reach the level of human intelligence.
“Current models have a one-way relationship,” Rogenmoser said. “AI [tools] like Alexa and Siri don’t react with any emotional support when you yell at them.”
The concept of AI that can perceive and pick up on the emotions of others hasn’t been fully realized yet. This concept is referred to as “theory of mind,” a term borrowed from psychology that describes humans’ ability to read the emotions of others and predict future actions based on that information.
“Machines may work better than us 90 percent of the time, but that last ten percent, what you would describe as common sense, is really hard to get to.”
Tena provided an example to illustrate how a successful theory of mind application would revolutionize the technology: A self-driving car may perform better than a human driver the majority of the time because it won’t make the same human errors. But if you, as a driver, know that your neighbor’s kid tends to play close to the street after school, you’ll know instinctively to slow down while passing that neighbor’s driveway — something an AI vehicle equipped with basic limited memory wouldn’t be able to do.
Theory of mind could bring plenty of positive changes to the tech world, but it also poses risks. Since emotional cues are so nuanced, AI machines would take a long time to perfect reading them, and they could make big errors during the learning stage. Some people also fear that once technologies can respond to emotional signals as well as situational ones, the result could be the automation of some jobs. But there’s no need to worry just yet; Rogenmoser said this hypothetical future is still very far off.
“Right now, this intelligence is science fiction,” he said. “We’re not even close to developing this type of AI, so no one is getting their job stolen by AI.”
The stage beyond theory of mind, when artificial intelligence develops self-awareness, is referred to as the singularity. It’s thought that once that point is reached, AI machines will be beyond our control, because they’ll not only be able to sense the feelings of others but will have a sense of self as well.
“People both strive to create this type of AI and fear the consequences of its creation, worrying that this type of AI could steal our jobs or take over our world,” Rogenmoser said. “If this type of AI is successfully created, no one knows what the impact will be.”
Researchers and engineers are taking steps to develop rudimentary versions of self-aware AI. Perhaps the most famous of these is Sophia, a robot developed by robotics company Hanson Robotics.
While not technically self-aware, Sophia’s advanced application of current AI technologies offers a glimpse of AI’s potentially self-aware future. It’s a future of promise as well as danger, and there’s debate about whether it’s ethical to build sentient AI at all. But for now, Rogenmoser said, we don’t need to worry about AI conquering the world.
“AI is going to become much better at solving real use cases, but I want to express that I don’t think this [means] the end of humans and the end of work,” he said. “We will continue to see AI pop up in useful ways to amplify the great work that people are already doing.”