Learn about the new age of cognitive computing and how its concepts integrate with two decades of Semantic Web growth.
Yet another attempt by engineers to make machines “think” and to design smarter systems, this type of computing permeates machine learning, natural language processing, sentiment analysis, risk assessment, fraud detection, and speech and face recognition, to name just a few fields.
Many threads converge to weave the history of cognitive computing, and one of them is called Shakey – the “first electronic person”, as Life magazine referred to it in 1970.
Buzz headlines aside, Shakey was the first mobile robot able to perceive and reason about its environment, find a route or arrange simple objects. Actually, Shakey didn’t have much of a brain. However, it was a brave predecessor of the systems we find today autonomously cooking, driving and even performing surgery.
As the field of artificial intelligence makes quantum leaps, Shakey’s descendants (and the cognitive computing systems behind them) are getting substantially smarter: far more aware of their environment and ready to perform various tasks and operations independently.
Which brings us to questions like: What does “to know” imply when it comes to machine intelligence? What does it mean for a system to understand or to be aware of something?
By linking things together we can go a very long way toward creating common understanding.
Tim Berners-Lee, Weaving the Web, Chapter 13, Machines and the Web.
Speaking of awareness, understanding and knowledge in the context of cognitive systems, it is sobering to realize how misleading our own language can be. When we use verbs like “know” and “understand”, we are actually referring to machine operations that are, more or less, a matter of computation.
As authors Brachman and Levesque write in Knowledge Representation & Reasoning:
We might think of a thermostat, to take the classic example, as “knowing” that the room is too cold and “wanting” to warm it up. But this type of anthropomorphization is typically inappropriate: there is a perfectly workable electrical account of what is going on. […] Moreover it can be quite misleading to describe an AI system in intentional terms: using this kind of vocabulary, we could end up fooling ourselves into thinking that we are dealing with something much more sophisticated than it actually is.
Cit. Knowledge Representation & Reasoning by Brachman & Levesque (p. 5)
This is a humbling point to start from when it comes to seeing a cognitive system as a machine in the know.
As Tim Berners-Lee once put it, machines won’t be able to translate poetry, but “[a]s long as the information is in machine-understandable form, a Semantic Web program can follow semantic links to deduce that line 2 on the European [tax] form is like line 3A on the U.S. [tax] form, which is like line 1 on the New York State tax form” (cit. Weaving the Web, p. 186-187).
With this in mind, we can go from imagining cognitive computing as creating thinking machines to understanding cognizant systems as mere linking machines.
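What “following semantic links to deduce” looks like in practice can be sketched with a toy triple store. This is a minimal illustration, not any actual Ontotext or Semantic Web implementation: the identifiers are hypothetical, and real systems would express the equivalence with standard RDF/OWL vocabulary such as `owl:sameAs`.

```python
# A "linking machine" in miniature: facts as subject-predicate-object triples.
# All identifiers below are hypothetical, chosen to mirror the tax-form example.
triples = {
    ("eu_form:line2", "equivalentTo", "us_form:line3A"),
    ("us_form:line3A", "equivalentTo", "ny_form:line1"),
}

def equivalents(entity, triples):
    """Follow 'equivalentTo' links transitively, in both directions."""
    seen, frontier = {entity}, {entity}
    while frontier:
        nxt = set()
        for s, p, o in triples:
            if p != "equivalentTo":
                continue
            if s in frontier and o not in seen:
                nxt.add(o)
            if o in frontier and s not in seen:
                nxt.add(s)
        seen |= nxt
        frontier = nxt
    return seen

# The system was never told directly that the EU form's line 2 matches
# the New York form's line 1 - it deduces it by linking:
print(equivalents("eu_form:line2", triples))
# → {'eu_form:line2', 'us_form:line3A', 'ny_form:line1'}
```

No “understanding” of taxes is involved: the deduction is pure graph traversal, which is exactly the point of calling these systems linking machines.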
In The Periodic Table of AI, Kristian J. Hammond, Professor of Electrical Engineering and Computer Science, writes:
The core technologies of machine learning, cognitive computing and speech recognition are genuinely brilliant. But we have to see them for what they are so that we can have the appropriate expectations (and make the appropriate decisions) about what they can and cannot do for us. Even more importantly, we must understand what needs to happen next.
When it comes to machine intelligence, it is the mere fact of creating associations that makes or breaks a system’s smarts.
Integrating information into a coherent whole so it can become knowledge might still be more of a human job. But when it comes to doing the legwork or, in other words, the repetitive tasks along the DIKW pyramid, cognitive systems can be of immense help.
Although they are far from the whole story of intelligence, well-designed cognitive systems have a lot to offer. They can dig up relationships for us, sift through massive sets of documents (e.g., legal acts, enterprise documentation, regulations, scientific research), investigate connections, keep up with compliance standards and track consumer behavior – as well as perform many more tasks that require not the human brain’s ability to synthesize information but rather computing power and formalized logic to integrate data at scale.
Arguably, knowledge will always be complex and about more than just digesting data. But as computation becomes ubiquitous in every field, knowledge work included, we need to understand what cognitive systems are and are not capable of doing.
Only when we acknowledge that there are tasks at which a cognitive system performs more efficiently and tasks that are still better undertaken by us humans will we be ready to use their analytical powers to the fullest.
To see what a machine can and cannot do, let’s begin by playing an awareness game.
I was introduced to it by Ontotext’s CEO Atanas Kiryakov during one of his presentations about Ontotext’s next frontier – cognitive technologies.
Luring us into an awareness game, as he called it, he compared the kinds of connections we would make in our heads to the computational results three of Ontotext’s demonstrators would serve. All three – data.ontotext.com, rank.ontotext.com and factforge.net – are based on Ontotext’s years of experience in the field of Semantic Technology and Linked Data.
This was a very good game for three reasons.
First, it showed us, the audience, some examples of how a “cognizant” system operates. Second, the game allowed us to understand how a system can serve a human in a range of jobs related to getting a fine-grained picture of integrated information. And, finally, it showed us some of the limitations of a “computer brain”.
As we saw by playing this awareness game, machines’ powers to link information and integrate data are surprisingly good. Not surprisingly, for the time being, those powers depend on our ability to interact with the machines (think the not-so-common digital literacy of writing SPARQL queries).
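For readers unfamiliar with that literacy, such an interaction looks roughly like the sketch below. This is a hypothetical query, not one actually run against the demonstrators mentioned above; the prefix and predicates are illustrative placeholders for whatever ontology a real dataset uses.

```sparql
# Hypothetical sketch: ask a linked-data store for organizations
# located in a given city (prefix and predicates are illustrative).
PREFIX ex: <http://example.org/ontology/>

SELECT ?org ?name
WHERE {
  ?org a ex:Organization ;
       ex:locatedIn ex:Sofia ;
       ex:name ?name .
}
LIMIT 10
```

The query is a pattern match over the graph of triples: the machine’s “awareness” is its ability to find everything that fits the pattern, however the facts were originally entered.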
But despite some shortcomings (e.g., insufficient data, lack of provenance, noisy data), the implications of such a technology are manifold.
As Amit Sheth of Knoesis puts it in Semantic, Cognitive, and Perceptual Computing: Advances toward Computing for Human Experience:
Cognitive systems learn from their experiences and then get better when performing repeated tasks.
In other words, when it comes to understanding the benefits cognitive computing can bring to an organization, grey matter competing with digital matter is out of the question. In the very near future, our ability to make associations will not be replaced but only complemented by our 5000 GB-memoried friends.
Can you imagine the future of your enterprise with this new breed of computing?
Let’s talk about what a cognitive system can do for your organization.
Dive into our webinar Graph Analytics on Company Data and News and play the game yourself!