Both cognitive computing and artificial intelligence are among the next big developments in supercomputing, but the two terms mean different things. Artificial intelligence (AI) is concerned with results – simulating the decision a human would make. Cognitive computing is concerned with the process – how humans think and reason. Cognitive computing doesn’t make the final decision; the human does.
The origins of the distinction
IBM first drew the distinction between cognitive computing and artificial intelligence during its development of supercomputers such as Deep Blue and Watson.
Deep Blue was able to beat Garry Kasparov, the world chess champion, not because it understood how humans reason about individual chess moves, but through sheer computational power. Deep Blue could analyze millions of chess positions and moves per second and evaluate the consequences of each move up to 20 moves into the future. Its decision-making was result-driven AI, not the process-driven approach of cognitive computing.
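The brute-force lookahead described above can be illustrated with a depth-limited minimax search. The tiny game tree and scores below are invented for demonstration; this is a simplified sketch of the general technique, not Deep Blue’s actual algorithm.

```python
# Illustrative depth-limited minimax search: the machine assumes it picks
# the move maximizing its score while the opponent picks replies that
# minimize it. The game tree and leaf scores here are hypothetical.

def minimax(node, depth, maximizing, tree, scores):
    """Return the best achievable score from `node`, searching `depth` plies."""
    children = tree.get(node, [])
    if depth == 0 or not children:      # depth limit reached, or leaf position
        return scores.get(node, 0)
    if maximizing:
        return max(minimax(c, depth - 1, False, tree, scores) for c in children)
    return min(minimax(c, depth - 1, True, tree, scores) for c in children)

# A toy two-ply tree: the machine moves first (maximizing),
# the opponent replies (minimizing).
tree = {"start": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}

best = minimax("start", 2, True, tree, scores)
print(best)  # 3 -- the move whose worst-case outcome is best
```

Deep Blue applied this kind of search at vastly greater scale, with a hand-tuned chess evaluation function in place of the toy scores.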
IBM then used AI to beat two long-time champions on the game show Jeopardy! IBM’s Watson is now being used for many purposes, including supporting medical diagnoses. In a classic example, a doctor was given the case study of a cancer patient. With the help of Watson, the doctor could review and rank the various treatment options, such as chemotherapy, for effectiveness, speed, and patient convenience. The AI approach would be for Watson to make the decision for the doctor. The cognitive computing approach lets the doctor interact with Watson for helpful guidance – but the doctor makes the final treatment decision independently. IBM also calls cognitive computing “insights-as-a-service” and “platforms-as-a-service.”
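The ranking step in that example can be sketched as a weighted-criteria score. The treatment options, criterion scores, and weights below are invented, and this is not Watson’s actual scoring method – just an illustration of ranking candidates so that a human still makes the final call.

```python
# Hypothetical treatment options scored on effectiveness, speed, and
# patient convenience (all values invented for illustration).
options = {
    "chemotherapy": {"effectiveness": 0.9, "speed": 0.5, "convenience": 0.3},
    "radiation":    {"effectiveness": 0.7, "speed": 0.6, "convenience": 0.5},
    "surgery":      {"effectiveness": 0.8, "speed": 0.9, "convenience": 0.2},
}
weights = {"effectiveness": 0.6, "speed": 0.25, "convenience": 0.15}

def score(criteria):
    # Weighted sum of the criterion scores for one option.
    return sum(weights[k] * v for k, v in criteria.items())

# Present a ranked list; the decision itself stays with the doctor.
ranked = sorted(options, key=lambda name: score(options[name]), reverse=True)
print(ranked)  # ['surgery', 'chemotherapy', 'radiation']
```

The cognitive-computing point is in the last line: the system produces a ranked list for a human to weigh, rather than acting on the top choice itself.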
The concept of artificial intelligence was first developed by Alan Turing, who posited the “Turing Test” in 1950. Under the test, if a human evaluator viewing a conversation between a human and a computer cannot reliably tell which is the human and which is the machine, the machine is considered to exhibit intelligent behavior. It is a result-oriented test. The Turing Archive for the History of Computing defines AI as “the science of making computers do things that require intelligence when done by humans.”
The key difference is one of approach
According to Brian Krzanich, CEO of Intel, cognitive computing systems solve problems by thinking, reasoning, and remembering – human approaches. Cognitive computing allows systems to “learn and adapt as new data arrives” and to explore in the ways that humans explore. AI, by contrast, is measured against what humans can accomplish.
Artificial intelligence, according to Peter Norvig, Director of Research at Google Inc., “decides what actions to take and when to take them.” AI is at work when computers, typically supercomputers, solve difficult problems that would require some degree of intelligence – as opposed to mere reflex – if done by humans.
According to IBM, “cognitive computing refers to systems that learn at scale, reason with purpose and interact with humans naturally.” Cognitive computing is about having the computer simulate an approach to solving problems that comports with the way humans would have reasoned out a solution.
Both artificial intelligence and cognitive computing are being used in a variety of fields, including:
- Medicine and healthcare
- Information management
- Analysis of data
- General security and cybersecurity
- Consumer applications
- Face recognition technology such as that used in Google Photos
- Driverless cars
Cognitive computing is based on analyzing the large amounts of data produced by connected devices – including those on the Internet of Things (IoT).