In today’s digital age, access to information is considered a basic human right, and finding information has never been easier. However, this same ease of access has also increased the risk of disinformation, which can have serious consequences. To address this issue, Ontotext, an AI company and a member of the vera.ai consortium, has developed the Database of Known Fakes (DBKF) platform and showcased it at the Data Technology Seminar 2023, held in Geneva from 21 to 23 March 2023.
The seminar was organized by the European Broadcasting Union (EBU) and its data community, which is dedicated to fostering knowledge sharing and learning on data-related topics such as metadata and AI.
The event, formerly known as the MDN Workshop, is held annually and offers the perfect opportunity to network with like-minded professionals, learn about the latest trends and techniques in data-related projects, and set up collaborations with peers in the industry.
Eneya Georgieva, the product owner of Ontotext’s disinformation-related products, was one of over 40 speakers at the event, representing organizations from across Europe. During her presentation, she demonstrated Ontotext’s dedication to fighting disinformation in the digital age.
Ontotext’s innovative DBKF platform was initially prototyped in the EU-funded project WeVerify, which began in December 2018 and concluded in late 2021. Since then, Ontotext has expanded its efforts through vera.ai, which aims to provide spatio-temporal representations of disinformation campaigns and narratives.
The DBKF platform is a graph database of debunking content that has been semantically enriched with metadata using visual similarity and natural language processing systems. It addresses the need for spatio-temporal representations of disinformation campaigns and narratives. Fact-checkers and journalists can easily double-check whether a new image, video, or claim has already been verified, including by whom, when, and how, using the integrated graph database. The platform is automatically populated from ClaimReview-based fact-checks, enriched with NLP techniques to distinguish between false content being debunked and evidence from reliable sources cited in the debunk.
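To make the ClaimReview ingestion concrete, here is a minimal sketch of the kind of schema.org ClaimReview markup that fact-checkers publish and that a DBKF-style pipeline can harvest. All values below (URLs, names, dates) are invented for illustration; only the field names come from the schema.org ClaimReview vocabulary.

```python
import json

# A minimal ClaimReview record (schema.org vocabulary). The concrete
# values are hypothetical, used only to show the shape of the data.
fact_check = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "datePublished": "2023-03-01",
    "url": "https://example.org/fact-check/1",
    "claimReviewed": "Example claim circulating on social media.",
    "author": {"@type": "Organization", "name": "Example Fact-Checkers"},
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": "1",
        "bestRating": "5",
        "alternateName": "False",
    },
}

# Extract the fields a debunk database would index: who checked
# what, when, and with which verdict.
record = {
    "claim": fact_check["claimReviewed"],
    "checked_by": fact_check["author"]["name"],
    "date": fact_check["datePublished"],
    "verdict": fact_check["reviewRating"]["alternateName"],
}
print(json.dumps(record, indent=2))
```

Records of this shape answer exactly the fact-checker questions the article mentions: what was debunked, by whom, when, and with what verdict.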
Through close cooperation with fact-checking organizations and verification professionals, and by leveraging the tens of thousands of debunks in the graph database, Ontotext is developing a hierarchical semantic model of disinformation that provides representations of disinformation at scales ranging from individual claims to long-running narratives and campaigns.
Experts at the event were particularly interested in the multilingual coverage provided by the DBKF platform, which includes content from Central and Eastern Europe, as well as debunks in all other EU languages and by major global fact-checking organizations. A cross-lingual, multimodal near-duplicate search feature helps to link debunks into spatio-temporal models of disinformation, using various methods to enhance metadata on tampered/fake content.
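The near-duplicate search described above can be sketched in miniature: claims (or images) are mapped to embedding vectors, and a new item is linked to existing debunks whose embeddings lie above a similarity threshold. The toy three-dimensional vectors, IDs, and threshold below are assumptions for illustration; in practice the vectors would come from a multilingual sentence encoder or a visual similarity model, not hand-written numbers.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "embeddings" of already-debunked claims (hypothetical IDs).
debunked = {
    "claim-001": [0.9, 0.1, 0.2],
    "claim-002": [0.1, 0.8, 0.3],
}

def find_near_duplicates(query_vec, index, threshold=0.9):
    """Return IDs of debunks whose embedding is close to the query."""
    return [cid for cid, vec in index.items()
            if cosine(query_vec, vec) >= threshold]

# A newly seen item that closely matches an existing debunk.
query = [0.88, 0.12, 0.21]
print(find_near_duplicates(query, debunked))  # → ['claim-001']
```

Because the embedding space is shared across languages (and, for images, across crops and re-encodings), the same lookup links a new Bulgarian post to an earlier French debunk of the same claim, which is what allows debunks to be stitched into spatio-temporal models.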
Effective solutions to combat disinformation require collaboration and cooperation among different organizations and sectors: for a problem growing at this scale, there is no ‘silver bullet’.
The vera.ai consortium, which brings together experts from academia, civil society, and industry, is one such example of organizations working together to develop cutting-edge tools and strategies to identify and counter disinformation. By working together, these organizations can harness their collective expertise and resources to build more robust and comprehensive solutions against disinformation that are grounded in evidence-based research and best practices.
You can view the slides of Eneya’s presentation here.
If you want to learn more about vera.ai’s commitment to the topic of fighting disinformation, follow the consortium’s Twitter channel.
vera.ai has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No 101070093. Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.