Read about how knowledge graphs such as Ontotext’s GraphDB provide the context that enables many Artificial Intelligence applications.
Organizations that invest time and resources to improve the knowledge and capabilities of their employees perform better. The risk is that this valuable asset, built from years of expertise and experience directly relevant to the organization, can one day cross the street to a competitor. The loss of ‘Institutional Memory’ (also referred to as corporate or organizational memory) is a perennial issue for organizations and even whole civilizations.
Humans forgot how to make concrete for over a thousand years. The Romans perfected the recipe around 150 BCE. This technology enabled strategic and commercial supremacy for hundreds of years until the recipe was lost. Plenty of examples of the durability of this material still exist across Europe, the Middle East and Africa, from the famous Roman roads to the concrete dome of the Pantheon in Rome. It was a catastrophic failure of ‘Institutional Memory’. It wasn’t until the invention of Portland cement in the 19th century that humanity remembered how to make concrete. Arguably, we still don’t know how to make it as well as the Romans did, especially in marine construction. Modern concrete is quickly worn down by the erosion and corrosion of the sea, in contrast to numerous examples of two-thousand-year-old Roman harbors that are still standing.
Institutional memory loss creates significant and measurable inefficiencies. In the case of concrete, it meant Visigoths in your forum running off with the imperial treasure and the inability to build long-lasting vital infrastructure like aqueducts and harbors for a millennium. For most of today’s organizations, the inability to access institutional memory leads to billions of dollars in losses through mistakes, waste and missed business opportunities.
There are numerous reasons that an organization will inevitably suffer memory loss. Staff turnover is the most obvious, but memory can also fade when management’s priorities change and previously developed skills and knowledge are left to degrade. Even if the organization has implemented a knowledge management solution, the system may be siloed within departments rather than taking the more useful comprehensive approach for the organization as a whole. The challenge of retaining institutional memory is twofold: preserving the knowledge and recalling it.
Knowledge graphs can help do both.
Preserving Knowledge. There must be a strategy and a practice for preserving the knowledge, to create institutional memory. There are a multitude of recommendations such as creating internal wikis to record policy and procedures, document templates, exit interviews, job shadowing, digitizing employee training programs, etc. The problem with these approaches is that they take time away from actually doing the job and bringing value to the company. Knowledge graphs coupled with text analysis (TA) are a proven solution for preserving institutional memory through a passive collection of knowledge, enabling users to bring value to the company and preserve institutional memory without the burden of an additional and highly tedious data entry task.
Recalling Knowledge. It’s great if you can develop a workable strategy to capture the knowledge within your organization and preserve its institutional memory, but knowledge management means making sure the right people have access to the right information when they need it, in a format and at a level of detail that lets them make the best decision for the organization. No employee wants access to a database. What they want is the information or knowledge that happens to be held within that database. Any knowledge management solution should understand that subtle but important difference. Knowledge graphs provide the foundation to create intuitive, human-friendly mechanisms enabling information discovery by storing institutional memory at a higher level of abstraction, one that is not beholden to the lower-level ICT infrastructure of the organization.
Capturing institutional knowledge in a machine-readable form is vital. It is just as important that it be accessible to those who need it. The use of open source technology and open APIs avoids data silos and ensures institutional memory is shared across departments.
By creating a common semantic description, a knowledge graph creates a higher level of abstraction that does not rely on the physical infrastructure or format of the data. Instead, it creates a unified way, sometimes called a data fabric, of accessing an organization’s data as well as 3rd party or global data in a seamless manner. Data is represented in a holistic, human-friendly and meaningful way. Knowledge graphs help to provide high-quality data based on enriched and linked metadata, involving different people and roles. To create the knowledge graph, all of a company’s available databases are typically linked and stored in a graph database, where they are then enriched with additional knowledge.
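To make the idea concrete, here is a toy illustration (not Ontotext’s implementation) of how a knowledge graph abstracts over source systems: facts from different departments become uniform subject-predicate-object triples that a single query can span. All entity and relation names below are invented for the example.

```python
# A toy in-memory triple store: facts from separate departmental silos
# are recorded as (subject, predicate, object) triples and queried
# uniformly, regardless of which system they originally came from.

triples = set()

def add(subject, predicate, obj):
    triples.add((subject, predicate, obj))

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for (ts, tp, to) in triples
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)]

# Facts that originally lived in separate departmental databases:
add("alice", "hasExpertise", "semantic-search")   # skills registry
add("bob",   "hasExpertise", "data-modelling")    # skills registry
add("projX", "requires",     "semantic-search")   # project tracker

# One query now joins across both sources: who can staff which project?
staffing = [(person, project)
            for (person, _, skill) in match(p="hasExpertise")
            for (project, _, _) in match(p="requires", o=skill)]
print(staffing)  # [('alice', 'projX')]
```

In a production system this role is played by a graph database such as GraphDB queried with SPARQL, but the principle is the same: the query is written against the organization’s concepts, not against any one source system’s schema.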
This enrichment can happen without additional work, as part of the everyday operation of the business. This is especially valuable for clients interested in content packaging and reuse. Ontotext has been working with clients ranging from internationally renowned names in news, such as the BBC, and nationally vital wire services, such as the Press Association, to respected STM publishers and specialist investment information agencies. Knowledge graphs enable content, data and knowledge-centric enterprises to improve repeated monetization of their assets by optimizing their reuse and repurposing as well as creating new products such as books, apps, reports, journal articles, content, and data feeds.
These publishers, media and business information agencies can produce content for multichannel delivery and increase engagement with personalized recommendations. New offerings that exploit existing content become possible, with lower costs for rearranging and packaging thanks to highly precise, but automated, discovery and classification. Publishers’ content is no longer bound to a document or page; instead, individual pieces of content are treated as elements that can be separated and reused for a single purpose and a specific audience. The result is higher-quality content created at lower cost and delivered faster. The finer grain of content increases user engagement through dynamic user stories and customer journeys, and through better related-content recommendations and search.
Ontotext’s collaboration with Euromoney is an award-winning example of bringing innovative products and services to publishing for content packaging and reuse. By making semantic annotation of content part of the editorial process, the company is preserving institutional memory of macroeconomic concepts, relationships, and events (e.g., markets, financial instruments, indicators, currency pairs, economic conditions, etc.).
The huge amount of content produced by Euromoney is now semantically analyzed, enriched and integrated into a streamlined production environment with complete visibility across the various business units. The automatic recognition of concepts leads to improved author productivity and context-based search results, which provides rich content analytics and interaction to subscribers. The cost savings from streamlining the content for over 100 publications automatically instead of manually are dramatic.
Information that is easily searchable has been proven to reduce long-term costs and increase reuse. The question of how to improve search was answered years ago. It’s semantics and it’s knowledge graphs. This started with Google and the search engines. These organizations reached the limit of optimization relying on keywords long before anyone else. Semantics enabled them to anticipate the intent of the search query rather than the query itself. They interpreted what the searcher meant rather than what they typed. That’s an incredibly powerful conceptual shift.
The explosive growth of data has ensured that every organization now faces, within its own systems, the scale of information that search engines faced nearly ten years ago. The vast majority of that information is unstructured, and unstructured most likely means a loss of institutional memory. Relying on luck and google-fu to stumble across the right combination of keywords is wasteful. If an organization’s content goes beyond text to video, sound, or raw instrument data, the problem is exacerbated.
Knowledge graphs represent information at a higher level of abstraction, closer to how the organization understands the information than to how any one application requires it to be represented. Tools and applications can be built to allow non-technical users to create more sophisticated queries of content and data, which lowers costs and frees up the time of valuable and scarce technical staff. The newest iteration of Ontotext Platform includes support for GraphQL and Semantic Objects, enabling developers and business users to enrich the knowledge graph.
By classifying, tagging, and labeling content with other data, the relationships between them become clear. The context is better understood, and as a consequence the content itself is better understood. Patterns and trends across the organization’s content emerge. The answers to important and holistic questions for the business are revealed, and are no longer dependent on the accidents of ICT architecture across the organization or on whatever format the data happens to be in. Emails, spreadsheets and presentations uploaded to shared drives, technical documentation, the company website, internal wikis, and meeting minutes are no longer locked away in unstructured formats. The use of knowledge graphs makes sure that all the institutional knowledge an employee contributed is in a format that is searchable and discoverable even after the employee has left.
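A hypothetical sketch of the concept tagging described above: documents in any format are annotated with concept identifiers, so search works on meaning rather than on whichever keywords each author happened to type. The documents, tags and concept names are invented for illustration.

```python
# Documents from different unstructured sources (minutes, email, wiki):
documents = {
    "minutes-2023-04.txt": "Agreed to migrate the CRM to the new vendor.",
    "email-renewal.eml":   "Customer database contract is up for renewal.",
    "wiki/crm-howto.md":   "How to export customer records.",
}

# Tagging step (in practice produced by a text-analysis pipeline):
tags = {
    "minutes-2023-04.txt": {"CRM", "Migration"},
    "email-renewal.eml":   {"CRM", "Contract"},
    "wiki/crm-howto.md":   {"CRM"},
}

def search_by_concept(concept):
    """Find every document tagged with a concept, regardless of wording."""
    return sorted(doc for doc, t in tags.items() if concept in t)

# A keyword search for "CRM" would miss documents that never use the
# literal string; concept search finds all three.
print(search_by_concept("CRM"))
```

The point of the sketch is the separation: the wording of each document belongs to its author, while discoverability hangs off the shared concepts, which outlive both the wording and the employee.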
The value of discoverability is exemplified by Ontotext’s clients in the healthcare sector. Healthcare industries including pharma, biotech, agrochemical and insurance have been using knowledge graphs to improve discoverability in a number of new and innovative ways. Ontotext’s technology has been used to monitor competitor R&D activities, new products, patent filings, mentions in the news, etc. Pharma clients have used the technology to identify drug safety signals from diverse data streams and have enabled company-wide knowledge sharing by implementing F.A.I.R. data principles to foster better collaboration and information reuse. By maintaining referential knowledge bases, health insurance agencies have been able to automatically discover inconsistencies within insurance claims.
One of the most exciting opportunities for knowledge graphs has been in drug discovery during clinical trials. For efficient drug discovery, linked data is key. The actual process of data integration and the subsequent maintenance of knowledge requires a lot of time and effort. Semantic knowledge graphs can help in all those phases of the data life cycle by providing stable identifiers and referential integrity for data integration and harmonization. With automated inference mechanisms, it is possible for an application to deduce that all proteins falling into a pathway leading to a disease can be identified as targets for drugs. Knowledge graphs can also make it easier to find and address requests from regulatory agencies or to design research activities.
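The protein-to-pathway-to-disease deduction above can be sketched as a simple chained rule. This is a toy model, not real curated biological data; the protein and pathway names are placeholders.

```python
# Toy inference: if a protein participates in a pathway, and that pathway
# leads to a disease, deduce that the protein is a candidate drug target
# for that disease.

participates_in = {        # protein -> pathways it participates in
    "P53":  {"apoptosis"},
    "EGFR": {"cell-growth"},
}
leads_to = {               # pathway -> diseases it leads to
    "cell-growth": {"cancer"},
}

def candidate_targets(disease):
    """Deduce proteins whose pathway leads to the given disease."""
    return sorted(
        protein
        for protein, pathways in participates_in.items()
        for pathway in pathways
        if disease in leads_to.get(pathway, set())
    )

print(candidate_targets("cancer"))  # ['EGFR']
```

In a real semantic knowledge graph this chaining is expressed declaratively, for example as an ontology rule or a SPARQL property path, and the reasoner materializes the deduced facts rather than recomputing them per query.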
Having data from different sources interwoven in a knowledge graph creates the possibility to interpret the data, infer new facts and discover new relationships and patterns while maintaining traceable provenance. Semantic metadata for unstructured documents interlinks mentions of concepts with their descriptions in the knowledge graph, and enables semantic search that combines full-text search with structured queries, inference and graph analytics. In this way, a knowledge graph-based system delivers richer sets of relevant results in less time.
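A minimal sketch of that combination: a full-text condition is joined with a structured condition on the graph, so results must both mention a phrase and be linked to a concept of the right type. All documents, concepts and type names are invented for the example.

```python
# Documents plus their links into a (tiny) knowledge graph:
docs = {
    "report-q1.txt": "Earnings beat expectations for the EURUSD desk.",
    "memo-hr.txt":   "Earnings discussion for the compensation review.",
}
doc_concepts = {                 # document -> linked concept identifiers
    "report-q1.txt": {"EURUSD"},
    "memo-hr.txt":   {"Compensation"},
}
concept_types = {                # structured knowledge about each concept
    "EURUSD":       "CurrencyPair",
    "Compensation": "HRTopic",
}

def semantic_search(text, concept_type):
    """Full-text match AND a structured constraint on linked concepts."""
    return sorted(
        doc for doc, body in docs.items()
        if text.lower() in body.lower()
        and any(concept_types.get(c) == concept_type
                for c in doc_concepts.get(doc, set()))
    )

# A plain keyword search for "earnings" returns both documents; the
# structured constraint narrows results to financial-markets content.
print(semantic_search("earnings", "CurrencyPair"))  # ['report-q1.txt']
```

The structured half of the query is what full-text engines alone cannot express: it consults the graph’s knowledge about the concepts, not the documents’ wording.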
Beyond improving institutional memory, knowledge graphs open an organization’s data to the growing sophistication of Artificial Intelligence. AI can augment the tools and interfaces that make it easier for human users to find and integrate information quickly enough to make informed business decisions. Increasingly, AI handles the rote decisions itself, as with Ontotext’s NLP plugins for automating semantic tagging.
With knowledge graphs, automated reasoning becomes even more of a possibility. Machine learning coupled with knowledge graphs is already collecting, categorizing, tagging and adding the needed structure to endless (and otherwise useless) swathes of unstructured data, and creating predictive analytic tools. Knowledge graphs not only arrest institutional memory loss but also make existing products and services cheaper. And your organization won’t forget how to make concrete just when you need it to keep the barbarians out.