Economy.bg talks to Milena Yankova about how machines can and do help people to be more efficient and more creative.
Defining artificial intelligence is tricky. Firstly, we aren’t agreed on what intelligence actually is, natural or artificial. The most famous test for artificial intelligence, the Turing Test, is little more than a process where, after asking a series of questions, a person decides if the thing they are interacting with seems intelligent or not. Who or what is in actual possession of intelligence is a concept fraught enough to make a philosopher swallow his own Gauloises.
The term ‘artificial intelligence’ further complicates the matter, because humans aren’t comfortable with the idea of anything but them having intelligence. A ‘computer’ used to be a job title. 17th century astronomers often employed men to be their computers for the calculation of planetary positions. In the subsequent centuries, the demand for human computers increased and in the early 20th century, it was one of the few careers women were able to pursue. Once humans automated calculations and a computer became an object rather than a line on someone’s CV, we seem to have collectively decided that calculating didn’t quite count as intelligence.
We keep pushing back the boundary of what counts as intelligence every time a machine becomes better than us at some task. Game playing is a good example of this. Deep Blue made chess no longer a task requiring intelligence; brute computing power was sufficient. However, I suspect that if we ever found a dolphin who knew the Ruy Lopez opening, we would say it was an astonishingly intelligent dolphin. This AI effect is best illustrated by Tesler’s Theorem: “AI is whatever hasn’t been done yet.”
Ignoring the philosophical complexity, AI has become a generic term to describe computers deriving non-trivial conclusions or making decisions, seemingly with intelligence. It looks like we haven’t strayed far from Turing’s understanding after all. AI in the everyday is not as sexy as Hollywood would have us believe, but computers are regularly performing tasks we previously considered impossible without humans, such as driving cars, and increasingly AI is solving problems beyond the capabilities of humans alone.
For Artificial Intelligence to make its intelligent decisions, it is necessary to emulate ‘context’. It must be possible to represent the relationships, complexity and, most importantly, the meaning of data in such a way that the AI’s seemingly intelligent conclusions and decisions are sensible in the real world. Within Ontotext’s GraphDB, knowledge graphs represent data and the meaning of that data. This is how information is transformed into the ‘knowledge’ in knowledge graphs. GraphDB’s use of Linked Data captures the relationships between the data, which is the ‘graph’ part. This representation of data’s semantics and its relationships creates a contextual understanding that AI uses to make those seemingly intelligent decisions.
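GraphDB stores its knowledge graphs as RDF triples and answers SPARQL queries over them. Purely as a minimal sketch of that idea, here is how a few made-up facts and the relationships between them might be expressed and queried with the open-source rdflib library in Python; the entities and the ex: vocabulary are illustrative, not part of any real dataset:

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

# Hypothetical vocabulary and facts, for illustration only.
EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.Sofia, RDF.type, EX.City))          # "Sofia is a City"
g.add((EX.Sofia, EX.locatedIn, EX.Bulgaria))  # a relationship between two entities
g.add((EX.Bulgaria, RDF.type, EX.Country))
g.add((EX.Sofia, RDFS.label, Literal("Sofia")))

# A SPARQL query over the graph: which cities are located in which countries?
results = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?city ?country WHERE {
        ?city a ex:City ;
              ex:locatedIn ?country .
    }
""")
for city, country in results:
    print(city, country)
```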
When information is stored as a graph, that is, as interconnected points of information, context is an inevitable consequence. In a traditional relational database, by contrast, our chaotic multi-dimensional world is crammed into a 2D representation of rows and columns. By connecting a piece of information to the graph, you immediately put it into context by virtue of the information you have connected it to. If a piece of information is many hops away from another, it can be understood to be less contextually relevant than information only one hop away.
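A rough sketch of that ‘hops’ intuition, using a toy adjacency list and a breadth-first search; the entities are made up, and GraphDB itself offers far richer ways of traversing and weighting connections:

```python
from collections import deque

# A toy graph as an adjacency list (illustrative entities only).
graph = {
    "Sofia":    ["Bulgaria", "Vitosha"],
    "Bulgaria": ["Sofia", "EU"],
    "Vitosha":  ["Sofia"],
    "EU":       ["Bulgaria"],
}

def hops_between(start, goal):
    """Return the number of hops from start to goal, or None if unreachable."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == goal:
            return dist
        for neighbour in graph.get(node, []):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

print(hops_between("Sofia", "EU"))  # 2 hops: less contextually relevant than a direct link
```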
Rather than binding our data into the artificial and simplified format of a relational database, data stored in graphs mimics the intuitive way humans understand information. We inevitably connect information. We make generalisations and contextualise our information. As we learn new information, we relate it to the knowledge we already have. Having data at a higher level of abstraction, closer to human understanding, avoids conceptual ‘translation errors’ between the solution and the real-world problem.
Human intelligence is non-linear and able to take multiple perspectives. So must be any AI system tasked with making difficult decisions. The schema-less nature of knowledge graphs creates flexibility by not rigidly enforcing types on data. Knowledge graphs are able to represent data according to a schema, and often do, but it is not compulsory as it is in traditional relational databases.
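As a small, hypothetical sketch of that flexibility: typing information can be stated where it is useful, and entirely new kinds of facts can be added later without touching any schema (again using rdflib and an invented ex: vocabulary):

```python
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/")   # hypothetical vocabulary, for illustration
g = Graph()

# Typing is possible but optional: the first entity declares a class, the others do not.
g.add((EX.Ontotext, RDF.type, EX.Company))
g.add((EX.Ontotext, EX.headquarteredIn, EX.Sofia))
g.add((EX.Alice, EX.worksFor, EX.Ontotext))

# A new kind of fact arrives later: no ALTER TABLE, no migration, just another triple.
g.add((EX.Alice, EX.speaks, Literal("Bulgarian")))
```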
As new data is discovered or old data becomes irrelevant, the knowledge graph is designed to grow or be pruned accordingly. The real world changes and so will the problems it presents to an AI system. The relevance of the context will change and a knowledge graph is capable of changing with it. It would be impractical, if not impossible, to add information manually as the system needed it. The knowledge graph has a means, via inference, to make the connections for any newly added information.
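GraphDB’s actual reasoning is driven by standard RDFS and OWL rulesets; purely as an illustration of the principle, the toy forward-chaining loop below shows how a newly added fact is automatically connected to existing ones through a single transitive ‘locatedIn’ rule:

```python
# Toy rule: whenever X locatedIn Y and Y locatedIn Z hold, infer X locatedIn Z,
# repeating until no new facts appear.
facts = {
    ("Vitosha", "locatedIn", "Sofia"),
    ("Sofia", "locatedIn", "Bulgaria"),
}

# A newly added fact gets connected to the rest of the graph by the same rule.
facts.add(("Bulgaria", "locatedIn", "EU"))

changed = True
while changed:
    changed = False
    for (a, p1, b) in list(facts):
        for (c, p2, d) in list(facts):
            if p1 == p2 == "locatedIn" and b == c and (a, "locatedIn", d) not in facts:
                facts.add((a, "locatedIn", d))
                changed = True

print(sorted(facts))  # includes the inferred ("Vitosha", "locatedIn", "EU")
```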
Ontotext is continuing to push the boundaries of what a database can do to support the increasing uptake of AI technology with plugins such as Semantic Similarity. This heuristic-driven approach will enable clients to discover various kinds of similarities that exist within their content in a scalable way, a capability that is essential considering the volumes of content within even small enterprises and across the wider internet.
The documents we create are made of words. Words have meaning. Those meanings together across a document create context. Through some near-AI cleverness, the plugin finds similarities and the relevance of words within a document to other content within an organisation, producing a broader understanding of the information at hand. The development of plugins like Semantic Similarity shows how tricky it is to point to one programme or one part of a system as being the ‘artificial intelligence’. It is usually the integration of systems, each clever in its own way and solving a bit of the bigger problem, that produces the seemingly intelligent decision. There are more technical details of the Semantic Similarity plugin in the newest version of GraphDB.
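The plugin’s real machinery is documented alongside GraphDB itself; purely as an intuition for what ‘similarity between content’ means, the sketch below compares two short texts by the words they share, using term-count vectors and cosine similarity:

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Compare two texts by the words they share (a rough intuition only;
    the GraphDB plugin uses far more sophisticated, scalable techniques)."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(cosine_similarity("knowledge graphs add context to data",
                        "graphs give data context and meaning"))
```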