Designing a SemTech Proof-of-Concept: Get Ready for Our Next Live Online Training

August 16, 2019 5 mins. read Nikola Tulechki

Next month marks the twelfth edition of our live online training Designing a Semantic Technology Proof-of-Concept.

The journey, which started more than two years ago, has been long and bumpy. But it has taught us a great deal about the key needs of those looking to build a simple prototype to demonstrate the power of semantic technology, linked data and knowledge graphs. Some of that journey has been recorded in a previous blog post.

What This Training Is

So, what is this live online training about, you might wonder? Let’s start with a quick definition of the basics. Semantic technology is a broad term covering specific technological approaches, principles and methodologies for managing data and knowledge. Boiled down to the essentials, it deals with the meaning rather than the structure of the data.
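
To make "meaning rather than structure" concrete: the same facts can be captured as subject–predicate–object triples, where each statement names its relationship explicitly instead of relying on a table layout. A minimal sketch in plain Python (the names and `ex:` URIs are invented for illustration):

```python
# Each fact is a (subject, predicate, object) triple; the predicate names
# the meaning of the relationship, independent of any table schema.
triples = [
    ("ex:Mozart", "rdf:type", "ex:Composer"),
    ("ex:Mozart", "ex:bornIn", "ex:Salzburg"),
    ("ex:Salzburg", "ex:locatedIn", "ex:Austria"),
]

# Facts from another source can be merged by simply adding triples --
# no schema migration is needed, because the meaning lives in the data.
triples.append(("ex:Mozart", "ex:composed", "ex:TheMagicFlute"))

# Everything stated about Mozart, regardless of which source said it:
facts_about_mozart = [(p, o) for (s, p, o) in triples if s == "ex:Mozart"]
```

This is exactly the property that makes dynamic integration and reuse possible: new sources extend the graph without restructuring what is already there.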

The most important question our training tries to answer, both in theory and in practice, is how to approach a use case that is a good fit for semantic technology.

Of course, this can mean a variety of things but we can start by thinking about:

  • how to model the data to allow for its dynamic integration and reuse in the future;
  • how to automate knowledge discovery;
  • how to interlink data with other data sources (think Linked Open Data);
  • what kind of auxiliary resources to use, if any;
  • what kind of databases support and enable such complex data structures.

The answers to these questions are presented over a week of self-paced sessions and a 4.5-hour live online practice session. The training follows the steps of building a simple prototype to test the feasibility of the technology, with hands-on guidance from experienced instructors. After a mandatory introduction to the broader theory of semantic technology, participants are taken on a live journey where they:

  • interact with some publicly available datasets and convert them from common tabular formats, such as CSV, to a semantic data representation using OntoRefine;
  • model the data against some auxiliary resources such as ontologies;
  • create some classes and properties and develop a data modeling scenario;
  • interlink the data with other structured, semi-structured and/or textual sources;
  • explore the interlinked datasets with SPARQL;
  • interact with the semantic graph database from an external project.
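
In the live session, the tabular-to-semantic conversion step is done with OntoRefine. Purely as a conceptual sketch of what that step produces, here is the idea in plain Python (the columns, vocabulary and `ex:` URIs are invented for illustration and are not OntoRefine output):

```python
import csv
import io

# A toy CSV "dataset" -- the training uses real publicly available data.
data = io.StringIO("""city,country
Sofia,Bulgaria
Vienna,Austria
""")

# Map each row to subject-predicate-object triples, minting a simple URI
# per city and reusing one invented predicate for the relationship.
triples = []
for row in csv.DictReader(data):
    subject = f"ex:{row['city']}"
    triples.append((subject, "rdf:type", "ex:City"))
    triples.append((subject, "ex:locatedIn", f"ex:{row['country']}"))
```

Once the data is in this triple form, it can be loaded into a semantic graph database, aligned with ontologies and queried with SPARQL, which is what the remaining steps of the journey cover.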

In the end, for those who have advanced in conceptualizing their prototype and would like to go further, there is an opportunity for a one-on-one meeting with the instructors. There, they can turn the acquired knowledge into a practical solution to their specific business case and strategize about its implementation.

What This Training Is Not

This training will not make a SPARQL master out of anyone. Rather, it is aimed at semantic technology beginners: people who are not yet versed in its concepts and theory but have some basic understanding of the technology and are not afraid to dive directly into practice. For a small part of the training, this means writing SPARQL queries.

In order to feel comfortable and keep up with the training, participants need to have at least a basic understanding of the SPARQL query language and the underlying graph-based data model. Therefore, we provide a theoretical overview of both, including some practical exercises in SPARQL. Still, newcomers are advised to dedicate some time to any of the excellent SPARQL tutorials out there, some of which are referred to in the FAQ section of the training page.
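
For readers entirely new to it: at its core, a SPARQL SELECT query matches triple patterns in which variables stand for the unknowns. A rough pure-Python sketch of that matching idea (the data and names are invented; real SPARQL runs against an RDF store, not a Python list):

```python
# Triples follow the (subject, predicate, object) shape of the RDF model.
triples = [
    ("ex:Sofia", "ex:capitalOf", "ex:Bulgaria"),
    ("ex:Vienna", "ex:capitalOf", "ex:Austria"),
    ("ex:Plovdiv", "rdf:type", "ex:City"),
]

def match(pattern, data):
    """Return variable bindings (terms starting with '?') for one triple
    pattern -- roughly what a single line of a SPARQL WHERE clause does."""
    results = []
    for triple in data:
        binding = {}
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                binding[pat] = val      # a variable binds to whatever is there
            elif pat != val:
                break                   # a constant must match exactly
        else:
            results.append(binding)
    return results

# Analogous to: SELECT ?city ?country WHERE { ?city ex:capitalOf ?country }
capitals = match(("?city", "ex:capitalOf", "?country"), triples)
```

Real SPARQL adds joins across multiple patterns, filters, aggregation and much more, but the pattern-with-variables intuition above is the part worth having before the live session.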

Another thing you should not expect from this training is a survey of use cases from different domains. Instead, we work on one fictitious use case and, by developing it thoroughly, address the fundamental questions of how to work on a typical semantic technology use case.

Most important in terms of setting the expectations straight is that this training does not aim to immediately deliver a ready-to-pitch project. It will, however, teach participants how to approach a data challenge that might be a good candidate for semantic technology and will show them how to put together a simple prototype. This will lay solid foundations for the rest of the work necessary for implementing a full, working knowledge graph solution.

Why You Should Give it a Try

As always, the most convincing argument is seeing how the knowledge gained from our training can lead to a successful solution.

One of the best success stories to come out of our training is Culture Creates. The aim of this project is to make cultural events findable for voice-powered and AI-powered search assistants. This becomes possible thanks to metadata enrichment, integrating and linking data from various data sources and, ultimately, storing, structuring and querying that data with the help of a semantic graph database. All these steps and techniques are addressed in theory and in practice throughout the training.

You should also keep in mind that a growing number of reports estimate that the graph database market will grow by 100% annually through 2022. In addition, according to Gartner’s report, knowledge graphs are “ideally suited to storing data extracted from the analysis of unstructured sources”. So, if you’re dealing with massive amounts of data, especially data that is unstructured and locked in silos, you will inevitably encounter complex questions that cut across that data. And very often it is not practical, or even possible, to answer them using SQL databases.

This is where semantic technology and graph data stores come into the picture with their capability to efficiently model, explore and query data with complex interrelationships across data silos.

Interested? Go to our training page, where you can learn the details or contact us with specific questions.