In 2011, factory floors across the country began welcoming a new employee: a six-foot-tall, 300-pound robot named Baxter, equipped with two long, dexterous arms and a pair of expressive digital eyes that followed wherever its arms went.
Unlike other industrial robots, Baxter was collaborative, thanks to cognitive computing — an AI approach designed to simulate the human thought process. That was also how it was trained: humans could grab its arms and show it how to do tasks more efficiently — essentially serving as mentors to Baxter, which could then go on to perfect those tasks.
Unfortunately, Baxter was short-lived. After plenty of early fanfare, its creator, Rethink Robotics, spent years struggling to scale its operations and find enough buyers for Baxter. By 2018, the company had gone under and its assets were bought by German automation company Hahn Group, which continues to develop Baxter’s successor, Sawyer.
Nonetheless, Baxter was the first of a new generation of smarter, more adaptive robots, ushering in a new age of automation in which machines could work safely and harmoniously with humans. It pushed the limits of what humans and machines can accomplish together, thanks largely to its use of cognitive computing — the use of computerized models to emulate the human brain.
By mimicking human thought and problem solving, cognitive computing systems are designed to create a more “symbiotic relationship” between humans and technology, allowing people and machines to work together “integratively,” said JT Kostman, a leading expert in applied artificial intelligence and cognitive computing whose clients have ranged from Samsung to Barack Obama’s 2012 presidential campaign. He is now CEO of software startup ProtectedBy.AI.
“We are entering an age where cognitive computing in particular will unburden us and allow people to become more quintessentially human,” Kostman told Built In. “And I’m not talking about the far future. It’s already begun, and we’re going to see that accelerate rapidly.”
What Is Cognitive Computing?
Cognitive computing is the use of computerized models to not only process information in pre-programmed ways, but also seek out new information, interpret it and take whatever actions they deem necessary. These systems are able to formulate responses on their own, rather than adhering to a prescribed set of responses.
This is meant to simulate the human thought process in complex situations, particularly where the answers may be ambiguous or uncertain, to provide decision-makers with the information they need to make better data-based decisions. It’s also used to build deeper relationships with people, whether they are customers, prospective employees or patients.
“Cognitive systems are probabilistic, meaning they are designed to adapt and make sense of the complexity and unpredictability of unstructured information. They can ‘read’ text, ‘see’ images, and ‘hear’ natural speech. And they interpret that information, organize it, and offer explanations of what it means, along with the rationale of their conclusions,” John E. Kelly, a senior vice president and director of IBM Research, explained in a 2015 white paper.
To do this, cognitive computing systems use artificial intelligence and its many underlying technologies, including neural networks, natural language processing, object recognition, robotics, machine learning and deep learning. By combining these processes with self-learning algorithms, data analysis and pattern recognition — and constantly ingesting new information in the form of vast amounts of data — computers can be taught and, by extension, “think” about problems and come up with plausible solutions.
Muddu Sudhakar, CEO of tech company Aisera, likens cognitive computing to the process of teaching a child. As children grow up, people teach them things with pictures and words. In cognitive computing, this is known as ontology, or the teaching of what is. People also use dictionaries and books to teach children not only what certain words mean, but the entire context of those words — a process known as taxonomy. For instance, “weather” relates to things like temperature, precipitation and seasons. People also teach children by exhibiting behavior they hope the child will replicate and deterring behavior they don’t like. In cognitive computing, that learning piece is called reinforcement learning.
“If you add all these three things — start with the basic information, ontology and taxonomy, and then add some aspect of learning with reinforcement — then you’ll have a pretty decent system, which can interact with humans,” Sudhakar told Built In. And this can lead to some pretty innovative solutions.
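Sudhakar’s three ingredients can be sketched in a few lines of Python. Everything below is a toy illustration, not a real cognitive system: the ontology and taxonomy are hand-written lookup tables, and “reinforcement” is a single score update driven by human feedback.

```python
# Ontology: what a thing *is* (hypothetical toy data).
ontology = {"rain": "precipitation", "snow": "precipitation",
            "summer": "season", "winter": "season"}

# Taxonomy: the wider context a concept belongs to.
taxonomy = {"precipitation": "weather", "season": "weather",
            "temperature": "weather"}

def topic_of(word):
    """Map a word to its broadest known topic via ontology, then taxonomy."""
    concept = ontology.get(word, word)
    return taxonomy.get(concept, concept)

# Reinforcement: a human mentor's feedback nudges response scores up or down.
scores = {"talk_about_weather": 1.0, "talk_about_sports": 1.0}

def reinforce(response, reward, lr=0.5):
    scores[response] += lr * reward

# The mentor rewards the weather response and penalizes the sports one.
reinforce("talk_about_weather", +1.0)
reinforce("talk_about_sports", -1.0)

best = max(scores, key=scores.get)
print(topic_of("rain"), best)  # prints: weather talk_about_weather
```

Even in this crude form, the pieces map cleanly onto Sudhakar’s description: static knowledge about the world, relationships that give that knowledge context, and a feedback loop that improves behavior over time.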
For instance, Sudhakar’s company Aisera has reportedly created the world’s first AI-driven platform to automate employee and customer experiences. Using cognitive computing, the platform can intuitively resolve tasks, actions and workflows across a variety of departments — automating what Sudhakar refers to as “mundane tasks.” It is also making advancements in the world of empathy and determining emotions, which can be especially useful in areas like HR, customer service and sales. For instance, if the platform can detect that a customer is confused based on their voice and language use, it can then give the customer service agent specific prompts to help clarify what might be confusing the customer.
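As a rough sketch of that confusion-detection idea, the snippet below flags likely confusion from simple wording cues and surfaces a suggested prompt to the agent. The cue list and function names are invented for illustration; a platform like Aisera’s would rely on trained emotion and language models, not keyword matching.

```python
# Hypothetical sketch: detect a confused customer and prompt the agent.
CONFUSION_CUES = ("i don't understand", "confused",
                  "what do you mean", "makes no sense")

def detect_confusion(utterance: str) -> bool:
    """Crude stand-in for an emotion model: look for confusion wording."""
    text = utterance.lower()
    return any(cue in text for cue in CONFUSION_CUES)

def agent_prompt(utterance: str) -> str:
    """Suggest a next step to the human agent based on the detection."""
    if detect_confusion(utterance):
        return "Customer may be confused: restate the last step in plainer terms."
    return "No action needed."

print(agent_prompt("Sorry, I'm confused about the second charge on my bill."))
# prints: Customer may be confused: restate the last step in plainer terms.
```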
Cognitive Computing vs. Artificial Intelligence
If all of this is sounding a lot like artificial intelligence, you’re not wrong. Cognitive computing and AI are often used interchangeably, but they are not one and the same.
AI is an umbrella term for technologies that rely on large amounts of data to model and automate tasks that typically require human intelligence. Classic examples are chatbots, self-driving cars and smart assistants like Siri and Alexa. While artificial intelligence uses algorithms to reach decisions on its own, cognitive computing keeps humans in the loop, requiring their assistance to simulate human cognition.
This means systems have to be adaptive and adjust what they are doing as new information arises and their environment changes. They also have to be able to retain information about situations that have already occurred, ask clarifying questions, and grasp the context in which information is being used. AI is one of the building blocks that makes all this possible.
“The question with cognitive is, can it have its own intelligence? That’s where the AI comes in. What intelligence can we add to the system?” Sudhakar said.
“It’s simply a more human-centric and human-compatible tool, and it can be a better companion to humans in helping them achieve their goals.”
Indeed, cognitive computing employs a lot of what makes up AI, including neural networks, natural language processing, machine learning and deep learning. But instead of using them to automate a process or reveal hidden patterns in large amounts of data, cognitive computing uses them to simulate the human thought process and assist humans in finding solutions to complex problems.
In other words: Cognitive computing does not automate human capabilities, it augments them.
“It’s simply a more human-centric and human-compatible tool, and it can be a better companion to humans in helping them achieve their goals,” Gadi Singer, a VP and director of emergent AI research at Intel Labs, told Built In. The goal of cognitive computing, he added, “is not to become sentient and replace a human mind, but rather to interact with human-centric concepts and priorities more successfully.”
Cognitive Computing Applications
Some of the most recognizable examples of cognitive computing come in the form of single-purpose demos. In 2011, IBM’s Watson computer won a game of Jeopardy! while running software called DeepQA, which had been fed billions of pages of information from encyclopedias and open-source projects. And in 2015, Microsoft unveiled a viral age-guessing tool called how-old.net, which used an uploaded image to estimate the subject’s age and gender.
Although these one-off demos are impressive, they do not capture the full story of just how much cognitive computing has become inextricably woven throughout our daily lives. Today, this technology is predominantly used to accomplish tasks that require the parsing of large amounts of data. Therefore, it’s useful in analysis-intensive industries such as healthcare, finance and manufacturing.
Cognitive Computing in Healthcare
Cognitive computing’s ability to process immense amounts of data has proven quite useful in the healthcare industry, particularly as it relates to diagnostics. Doctors can use this technology not only to make more informed diagnoses for their patients, but also to create more individualized treatment plans for them. Cognitive systems are also able to read patient images like X-rays and MRI scans, and can catch abnormalities that human experts sometimes miss.
One example of this is Merative, a data company formed from IBM’s healthcare analytics assets, whose offerings span data analytics, clinical development and medical imaging. Cognitive computing has also been used at leading oncology centers like Memorial Sloan Kettering in New York City and MD Anderson in Houston to help make diagnosis and treatment decisions for their patients.
Cognitive Computing in Finance
In finance, cognitive computing is used to capture client data so that companies can make more personal recommendations. And, by combining market trends with this client behavior data, cognitive computing can help finance companies assess investment risk. Finally, cognitive computing can also help companies combat fraud by analyzing past parameters that can be used to detect fraudulent transactions.
Although it is famous for its Jeopardy! appearance, IBM has said Watson is used in 70 percent of the world’s banking institutions. Another example is Expert System, which turns language into data for applications in virtually every facet of finance, including insurance and banking.
Cognitive Computing in Manufacturing
Manufacturers use cognitive computing technologies to maintain and repair their machinery and equipment, as well as to reduce production times and streamline parts management. Once goods have been produced, cognitive computing can also help with the logistics of distributing them around the world, thanks to warehouse automation and management. Cognitive systems can also help employees across the supply chain analyze structured or unstructured data to identify patterns and trends.
IBM has dubbed this corner of cognitive computing “cognitive manufacturing” and offers a suite of solutions with its Watson computer, providing performance management, quality improvement and supply chain optimization. Meanwhile, Baxter’s one-armed successor Sawyer is continuing to redefine how people and machines can collaborate on the factory floor.
Benefits and Risks of Cognitive Computing
For all the good cognitive computing is doing for innovation, ProtectedBy.AI CEO Kostman thinks it’s only a matter of time before bad actors take advantage of this technology as well. “Technology is morally agnostic. A hammer is a wonderful thing if you’re building a house. If you’re beating someone’s head in with it, not so much,” he said.
The same goes for cognitive computing systems. The amount of data these systems collect presents a golden opportunity for malicious actors to do real damage.
Like artificial intelligence, cognitive computing also carries a real risk of error and even bias. Though these systems are designed for machine precision, they are still the product of humans, which means they are not immune to making erroneous or even discriminatory decisions. Fortunately, these issues are top of mind for many people working in this space.
Perhaps the most widespread concern has to do with what this technology means for the future of humanity and its place in society. Even though cognitive computing is still in its “early innings,” as Aisera CEO Sudhakar put it, it is already challenging our perception of human intelligence and capabilities. And the development of a system that can mimic or surpass our own abilities can be a scary thought.
“Worrying about whether AI is going to take over the world is like worrying that we might overpopulate Mars. … It’s so detached.”
But Sudhakar said technological innovation does not mean humans are no longer needed. “When we invented tractors, we didn’t replace farmers. You still need farmers,” he explained. But tractors can do hard manual labor in a fraction of the time humans can, giving farmers the time and latitude to be more efficient elsewhere.
History is full of instances where people either wildly underestimated technology or were certain it would bring about the end of humanity. The editor in charge of business books at Prentice Hall in 1957 is often quoted as calling data processing a “fad that won’t last out the year.” And when we look back at the mass hysteria over the Y2K bug, it almost seems like a joke.
In reality, though, technology has largely “been the greatest boon of humanity,” Kostman said. “Technology has improved our lives by orders of magnitude. We live longer, we live better, we’re safer.” So, “worrying about whether AI is going to take over the world is like worrying that we might overpopulate Mars,” he added. “It’s so detached.”
In fact, the very nature of cognitive computing could solve some of the problems it currently has.
“[Humans] can come up with broad strategies. But the discrete solutions of how we operationalize, how we implement, on some of these solutions, tends to be beyond the ken of people. People are not very good at solving nonlinear, dynamically complex problems,” Kostman said. “[Cognitive computing systems] were built for that.”
Plus, cognitive computing systems are good at processing vast amounts of data from a variety of sources (images, videos, text and so on), making them adaptable to a variety of industries. Their ability to “explain” is another exciting feature of cognitive computing, said Intel Labs’ Singer, which can be essential to further innovations in this space down the road.
“Today’s AI often makes mistakes because it doesn’t understand common-sense concepts,” he said. But the knowledge base and neural networks of cognitive computing systems will usher in the “next wave of AI to provide higher quality and dependability.”
The Future of Cognitive Computing
Singer refers to this next wave as “cognitive AI.” By combining the abilities of cognitive computing with the tremendous power that comes from neural networks, deep learning and the other technology powering AI, computers will improve their ability to understand events, potential consequences and common sense, he explained, which could be a real game changer for humanity’s relationship to technology.
He envisions the generic AI we have today evolving into more of a “personalized companion that continuously learns” and adapts to the changing context of the situation.
With cognitive computing as a backbone, these systems could take on an enormous range of tasks. They’d be able to predict threats with more accuracy, based on abnormalities in data. They could use their natural language intelligence and sophisticated data analysis capabilities to create completely personalized diagnoses and treatments for patients. And entire smart cities could be developed in order to organize resources based on people’s movements and consumption patterns.
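The threat-prediction idea, flagging abnormalities in data, can be illustrated with a minimal sliding-window anomaly detector. This is a statistical toy on invented traffic readings, not a real security system: each reading is compared against the mean and spread of the few readings before it.

```python
import statistics

def flag_spikes(series, window=5, threshold=3.0):
    """Return indices whose value deviates sharply from the prior window."""
    flagged = []
    for i in range(window, len(series)):
        base = series[i - window:i]
        mean = statistics.fmean(base)
        sd = statistics.pstdev(base)
        # Flag the reading if it sits more than `threshold` standard
        # deviations away from the recent baseline.
        if sd and abs(series[i] - mean) / sd > threshold:
            flagged.append(i)
    return flagged

# Invented network-traffic readings with one suspicious spike at the end.
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 500]
print(flag_spikes(traffic))  # prints: [8]
```

A production system would layer far richer models on top, but the underlying pattern — learn what normal looks like, then surface deviations — is the same.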
“We are starting to move from the age of innovation to the age of implementation.”
The possibilities are seemingly endless. Now it’s a matter of taking what we have and making it work for us.
“We are starting to move from the age of innovation to the age of implementation,” Kostman said, likening where we are to where electricity was when Thomas Edison invented the lightbulb. “The light has been invented, it’s accessible, and now we have to go past that threshold from innovation to implementation and really make all this very useful.”