Check out what people who already took part in our training Designing a Semantic Technology Proof-of-Concept have to say about their experience.
Everything should be made as simple as possible, but not simpler.
Attributed to Albert Einstein
December 2016. This is when the first live online training in Semantic Technology organized by a bunch of enthusiasts at Ontotext took place. It was certainly not the first training on the subject in the world. But it was an important step in our efforts to proactively communicate the advantages of Semantic Technology.
Ever since Ontotext started piling up knowledge in the field, we have been sharing our expertise through trainings. However, these had always been created in response to specific requests from clients.
But in the winter of 2016, we decided to change that. Instead of waiting for people to approach us, we wanted to be at the forefront of educating them about how certain semantic tools could help them solve various data analytics challenges. Here’s our story.
For more than 20 years, Ontotext has been in the business of developing and providing Semantic Technology-based solutions for various domains. As a result, we’ve accumulated (often after some trial-and-error) a lot of valuable experience. This has helped us understand which solutions would work best depending on specific domains and use cases.
It has also made us aware of the major pain points that people can come across when adopting Semantic Technology solutions. So, the idea of organizing all this knowledge into a coherent and easy-to-grasp learning journey was born almost as a matter of course.
There are three things to remember when teaching: know your stuff, know whom you are stuffing, and then stuff them elegantly.
Lola J. May, Mathematics Educator
Once you know your destination, you just need to start walking, right? Wrong. Before we could share our experience, we first needed to clearly define the audience that would benefit from such broader training courses. To do that, we had to decide which Semantic Technology adoption pain points we wanted to focus on to best meet people’s needs.
Equally important, we needed to determine the format and the kind of content coverage that would be most useful for that audience. We didn’t just want to go in front of people and talk about what we knew and found interesting. We wanted to shape our expertise to fit what people actually needed to know. And do it in a way that made it easy and intuitive for them to apply when dealing with their own use cases.
After some consideration, we decided that we wanted to focus on two areas where we thought our impact on people’s needs would be most significant.
The first was to improve people’s understanding of the importance and the practical advantages of using Semantic Technology when solving various knowledge management and data analytics problems. This is how our Designing a SemTech PoC training was born.
The second was to provide the best (and quickest) way for people to delve in and start working with our leading semantic graph database, GraphDB. This is how our GraphDB for DevOps training came about.
Once the scope of the new training courses was defined, we looked at the pros and cons of different formats.
Based on our experience from the tailored trainings, we decided that the best setup for sharing our practical experience and nurturing a community of like-minded Semantic Technology enthusiasts was to offer a live online format to a small group of participants. In this way, we could have more space for interchange and we could keep a close eye on people’s progress.
We also anticipated that if our guidance proved beneficial for people, we could establish a schedule of recurring trainings. As an additional benefit, having recurring courses would also allow us to check if we were on the right track.
To teach is to learn twice over.
Joseph Joubert
When we designed the original curriculum for our training courses, we had a pretty good idea of what we wanted to demonstrate – the content coverage, the exercises, the individual assignments, etc.
However, each iteration of a training was a chance to change and, hopefully, improve important details – from content to delivery. This was mostly thanks to people’s feedback, but the live format itself also allowed us to see what worked best and what didn’t.
The most important lesson we have learned is that a training is always a work in progress. Each following edition has the potential to be better than the previous one – both in terms of what we teach and how we teach it.
Our Designing a SemTech PoC training has evolved significantly from the first basic demonstration of what we considered to be the most important techniques.
In the beginning, our live sessions focused on cramming in as much useful information as possible about data integration (through OntoRefine) and data exploration (with SPARQL). We presented a sample workflow that illustrated how existing resources could benefit from Linked Open Datasets and how to best analyze the available data. As a successful PoC is, in our view, the result of efficient data analytics, we wanted to let the trainees get a feel for such a workflow through some hands-on experience.
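To give a flavor of what such data exploration looks like, here is a purely illustrative SPARQL query in the style one might run against a Linked Open Dataset such as DBpedia. It is a sketch of the kind of aggregation query used in this sort of workflow, not an example taken from the actual training materials:

```sparql
# Illustrative only: count cities per country, DBpedia-style vocabulary.
PREFIX dbo:  <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?countryName (COUNT(?city) AS ?cities)
WHERE {
  ?city a dbo:City ;               # every resource typed as a city
        dbo:country ?country .     # linked to its country
  ?country rdfs:label ?countryName .
  FILTER (lang(?countryName) = "en")  # keep only English labels
}
GROUP BY ?countryName
ORDER BY DESC(?cities)
LIMIT 10
```

Even a short query like this exercises the core skills the training builds up: graph patterns, filtering, aggregation and ordering.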
However, we found that many people struggled with SPARQL. So, with the help of the participants’ feedback, we went through several iterations of redesigning the way we introduced it. We focused on the value of good technical skills in the process of solving an interesting problem. We also introduced a variety of means for data analysis. But most importantly, we decided to simplify the live session as much as possible to make it less difficult for trainees to follow what was covered and give them more chances to interact.
We reworked our use cases and interwove everything we demonstrated into a single story to engage our participants. This made all components of building a PoC easier to understand and refer to at a later stage.
We also moved all non-interactive content to an offline set of additional materials that participants could study at their own pace. This freed up the live sessions for quick quizzes and discussions on the covered topics.
Now, we have a smooth workflow involving a real PoC design that covers all the necessary steps and demonstrates best practices. Shifting our focus to best practices around a single use case story and providing more space for exchange resulted in more lively and productive sessions and happier students.
The GraphDB for DevOps training is a new course we have designed and started offering this year. It targets developers and operations specialists who interact with GraphDB on a daily basis and need a more in-depth understanding of our semantic graph database.
So far, we’ve had only one internal and one public edition of this course, so it’s too early to talk about its evolution. But the internal edition has already contributed significantly to improving the initial materials.
A teacher is one who makes himself progressively unnecessary.
Thomas Carruthers
A year and a half later, Ontotext has offered a total of 14 training courses. This includes our two recurring courses: Designing a Semantic Technology PoC and GraphDB for DevOps. It also includes some custom courses for clients, such as Semantic Technologies & GATE; Linked Data Sources, Robust Text Analysis and Semantic Search; Gold Standard Corpora; etc.
We’ve also collaborated with some universities. We’ve provided materials for the courses Knowledge Modelling and Representation and Knowledge Bases for FMI, University of Sofia. We’ve participated in the preparation of the course Knowledge Representation and Reasoning for the University of Aberdeen and we have contributed to ESSLLI 2018.
Our training courses have attracted participants from many geographical locations and with a wide variety of job titles.
The use cases of the participants who came to improve their understanding of Semantic Technology in order to further their projects have also been quite diverse. Some were developing flexible data models for representing unstructured or semi-structured data from various domains. Others were working in the area of predictive analytics, or were trying to match and structure information across various sources, or were focused on cancer research.
Or, as in the case of Culture Creates, the aim was to make cultural events in Canada findable for voice-powered and AI-powered search assistants. This is possible thanks to metadata enrichment, integrating and linking data from various data sources and, ultimately, loading, structuring and querying that data with Ontotext’s GraphDB.
All in all, our experience has shown that by sharing our expertise in Semantic Technology, we can make a difference. Accommodating all types of busy schedules and non-work-related commitments, our training courses allow you to join us from any location.
The only disadvantage is that you don’t get to have any of our fragrant coffee and delicious cakes while you follow our demos on your screen at home, but, hey, maybe that’s not such a bad deal either?