How to Design a Successful Semantic Technology Proof-of-Concept

This is a recurring webinar. Stay tuned for the next session.

Semantic Technology is becoming a household term, albeit an umbrella one, for more and more enterprises looking to improve the way they manage structured, semi-structured and unstructured data. The ability to tap into the exponentially growing amounts of both internal and publicly available data is mission-critical for business intelligence, enterprise data analysis and any other data-driven strategy. Semantic Technology provides the necessary set of tools, practices and standards for implementing such data-driven applications.

However, what is often lacking is a clear understanding of how this whole process happens and what it takes. How does an enterprise come to the decision of adopting Semantic Technology? What are the evaluation steps that it should consider in the long run? And ultimately, how would it approach the task of building a small prototype to demonstrate the utility of Semantic Technology?

In this webinar, we have prepared a short introduction to the process of developing a simple proof-of-concept to test the feasibility of Semantic Technology applications such as graph databases and Linked Open Data. The presentation is based on our many years of practice, which have evolved into a full-blown live online training on the same subject. This webinar provides just a taste of the training, which dives into the practical aspects of building a SemTech PoC in greater depth and with hands-on guidance.

Webinar Topics

We will present a simple mock Semantic Technology use case. First, we will start with a dataset and a couple of initial questions to explore the data. We will then build a data model that captures the questions our initial data should be able to answer. It is often the case that the initially selected dataset is incomplete, leaving some of our questions unanswerable. Therefore, we will look at how to enrich the incomplete dataset with external data sources. We will then clean up and transform our initial data from a tabular to a graph format and integrate it with the Linked Open Data sources we have chosen. Finally, we will query the newly created dataset and see whether we can now better answer our initial set of questions.
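The workflow above (tabular data → graph → enrichment → query) can be sketched in a few lines of plain Python. This is a toy illustration only: the company names, columns and country facts are invented, triples are modeled as simple tuples, and the query is a hand-rolled pattern match. A real PoC would emit RDF and query it with SPARQL in a graph database.

```python
# Toy sketch of the PoC workflow: turn a table into a graph of
# (subject, predicate, object) triples, enrich it with an "external"
# source, and query the merged graph. All data is illustrative.

# Initial tabular data: company name and headquarters city.
rows = [
    {"company": "Acme", "hq_city": "Berlin"},
    {"company": "Globex", "hq_city": "Madrid"},
]

# Transform: each non-key cell becomes one triple, keyed by column name.
triples = set()
for row in rows:
    subject = row["company"]
    for column, value in row.items():
        if column != "company":
            triples.add((subject, column, value))

# Enrichment: the initial data alone cannot answer "which companies are
# headquartered in Germany?", so we add country facts from an external
# (here hard-coded) source, mimicking a Linked Open Data lookup.
triples |= {("Berlin", "country", "Germany"), ("Madrid", "country", "Spain")}

# Query: join the two sources by matching triple patterns,
# the way a SPARQL query would join graph patterns.
def companies_in(country):
    cities = {s for s, p, o in triples if p == "country" and o == country}
    return sorted(s for s, p, o in triples if p == "hq_city" and o in cities)

print(companies_in("Germany"))  # → ['Acme']
```

The point of the sketch is the shape of the process, not the code: once the table is expressed as triples, external facts merge in by simply adding more triples, and questions that spanned two sources become a single graph query.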

Sign up for this webinar

About The Speaker

Andrey Tagarev

Data Scientist, ML Engineer & Training Instructor

Andrey Tagarev has an MSc degree in Computing Specialism (Machine Learning) from Imperial College London. He joined Ontotext in 2015 and has since participated in several EC-funded research projects, such as PHEME and EHRI, where he developed machine learning algorithms for document classification, sentiment analysis, rumor detection and claim identification. Andrey is also a Trainer in Innovation & Consultancy Services at Ontotext.