Read about knowledge graphs and how many enterprises are already embracing the idea of benefiting from them.
There was a time in the not so distant past when a convoluted question like “How many calories are in Tesco’s best-selling soup?” was very unlikely to return anything meaningful, notes Phil Lewis in Smarter enterprise search: why knowledge graphs and NLP can provide all the right answers. Back then, continues Lewis, we all had to reduce our searches to keywords in the hope that the systems we queried would give us better results and we would ultimately find what we were looking for.
Could it be that this time is over? Or, at least, that it is about to fade away, opening the way for technologies that not only connect data in meaningful ways but also speak the language of the system user and not the other way round?
It is tempting to imagine a future where a knowledge graph powered semantic search will become so sophisticated that we could even ask: “Is soup one of life’s great mysteries?” (And if you haven’t watched BBC’s What makes a soup soup?, please take a moment to appreciate the life of a taxonomist.)
But we are not there yet. We are still wrapping our heads (and processors) around the knowledge soup we are all in.
“Knowledge soup” is a term that John F. Sowa, the computer scientist who invented conceptual graphs, introduced back in 2004 to characterize “the fluid, dynamically changing nature of the information that people learn, reason about, act upon, and communicate” (ref. The Challenge of Knowledge Soup by John F. Sowa). The most important requirement for any intelligent system, argues Sowa, is the flexibility to accommodate and make sense of this knowledge soup. And that is a tough task to tackle.
How do we translate the complex nature of things, their properties and their connections into information that is convenient to manage, transfer and use? What lies behind building a “nest” from irregularly shaped, ambiguous and dynamic “strings” of human knowledge, in other words, of unstructured data? There is no easy answer to these questions, but we still need to make sense of the data around us and figure out ways to manage and transfer knowledge with the finest granularity of detail.
Knowledge graphs, the ones with semantically modeled data even more so, allow for such a granularity of detail. Providing a data architecture for our information and observations, a knowledge graph allows us to be at least a trillion triples ahead in our attempt to codify knowledge.
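To make the idea of triples concrete, here is a minimal sketch of how facts about a soup could be expressed as subject-predicate-object statements and queried by pattern matching. The URIs and property names (ex:BroccoliSoup, ex:hasIngredient and so on) are invented for illustration and are not Edamam’s actual schema; a real knowledge graph would use an RDF store rather than plain Python tuples.

```python
# Illustrative sketch: food facts as subject-predicate-object triples.
# All identifiers (ex:BroccoliSoup etc.) are hypothetical.

triples = [
    ("ex:BroccoliSoup", "rdf:type", "ex:Soup"),
    ("ex:BroccoliSoup", "ex:hasIngredient", "ex:Broccoli"),
    ("ex:Broccoli", "ex:contains", "ex:VitaminC"),
    ("ex:BroccoliSoup", "ex:caloriesPerServing", 245),
]

def query(triples, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "What does a broccoli soup contain?" answered by following the links
# from the soup to its ingredients and from each ingredient to nutrients.
for _, _, ingredient in query(triples, s="ex:BroccoliSoup", p="ex:hasIngredient"):
    for _, _, nutrient in query(triples, s=ingredient, p="ex:contains"):
        print(f"{ingredient} supplies {nutrient}")
```

Because every statement has the same three-part shape, new facts and new kinds of links can be added without redesigning a schema, which is what makes the model flexible enough for a “knowledge soup”.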
When it comes to the world’s food knowledge and the world of nutrition, dietary and culinary complexity, the knowledge soup can get quite literal. In the context of an enterprise trying to organize knowledge and offer a way out of the vast information woods, Edamam is a good example of an attempt to deconstruct the (knowledge) soup down to the smallest possible detail. Not so much to understand what a soup is ontologically, but rather to cook it and eat it better informed. The many philosophical questions about what a soup is recede into the background, and the knowledge soup challenge becomes a pragmatic one: having a good dinner, suitable for our diet and day-to-day context.
This is exactly what Edamam did. To help people make better food choices, Edamam, a US start-up, decided to build a platform for fast and easy navigation of nutrition data. To do that, Edamam worked together with Ontotext to develop a knowledge graph of semantically enriched nutrition data.
It is with Edamam that the seemingly innocent question “What does a broccoli soup contain?” became a data integration and semantic annotation matter, branching into questions such as: how do you represent a soup in a system so that its interconnections are mapped in a way that allows good (unobstructed) information flows?
To allow hungry users (and, in the future, their healthy eating applications, shopping applications, cooking robots and smart fridges) to get access to nutritional details, Edamam put to work an RDF-based knowledge graph in parallel with techniques such as web mining, text analysis and semantic search. Thus the company opened up an information treasure trove of over 700,000 foods. The foods are described by more than 2 million triples, and one can search through them by diet, calories and nutritional ranges.
Edamam’s live food knowledge graph of semantically interlinked data from various sources took shape in several steps to further give users the ability to have information such as cooking time, dietary details, serving and nutrition details. First, a web mining technology was put to work for crawling websites to extract recipes and relevant data. Text analysis and semantic annotation were further used for mapping the extracted data to various domain-specific databases and to available data in more general databases such as DBpedia. This allowed Edamam to connect ingredients to nutrition information. Last, in order to make the available information even richer, Edamam used an ontology that allowed them to add specific meaning to the measures, the ingredients and the way they were described.
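The steps above can be sketched end to end in a few lines. This is a hypothetical stand-in, not Edamam’s actual pipeline: the recipe stands in for data extracted by web mining, the lookup table stands in for semantic annotation against a nutrition database, and the aggregation step stands in for ontology-driven enrichment. All names and values are invented for illustration.

```python
# Step 1 (stand-in for web mining): a recipe already extracted from a page.
recipe = {"name": "Broccoli Soup",
          "ingredients": [("broccoli", 300), ("cream", 100)]}  # grams

# Step 2 (stand-in for semantic annotation): ingredient names mapped to
# entries in a domain nutrition database (values per 100 g, invented).
nutrition_db = {
    "broccoli": {"kcal": 34, "vitamin_c_mg": 89.2},
    "cream":    {"kcal": 340, "vitamin_c_mg": 0.6},
}

# Step 3 (stand-in for ontology-driven enrichment): normalize quantities
# to the per-100-g reference and aggregate nutrition for the whole recipe.
def recipe_nutrition(recipe, db):
    totals = {}
    for name, grams in recipe["ingredients"]:
        for nutrient, per_100g in db[name].items():
            totals[nutrient] = totals.get(nutrient, 0) + per_100g * grams / 100
    return totals

print(recipe_nutrition(recipe, nutrition_db))
```

The real system connects these steps through shared identifiers in the knowledge graph rather than through Python dictionaries, which is what lets new sources (DBpedia, domain databases) plug into the same structure.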
And if all of the above sounds a bit dry and not very exciting, here’s how a broccoli soup recipe turns into a rich information card listing the vitamins in the soup, its calories and its ingredients.
You can try it for yourself and give your own recipe the knowledge graph treatment at https://wizard.edamam.com/free/#nutrients:1
The knowledge soup might be one of life’s great mysteries, never to be solved, but when it comes to preparing a soup and translating its ingredients into nutrition data, knowledge graph technology can help. Whether you are trying to translate the essence of the soup into machine-readable data or are simply looking for the nutritional value of your next broccoli soup, chances are you will benefit from a semantic knowledge graph.
Additionally, having a handful of state-of-the-art technologies and a richly interconnected data architecture is no small feat. With a knowledge graph, our capacity to define what things are and to codify them into an interconnected, searchable platform grows. And in the not so distant future, we will definitely need this capacity. Especially given that one day we will not only get recipes and nutrition data with a click but will also need interoperable data about our products to feed our smart fridge (or, if we are lucky, to let our fridge detect what is missing and order it) for its next purchase.