WeVerify (Wider and Enhanced Verification for You) is an EU co-funded Horizon 2020 project that deals with algorithm-supported verification of digital content.
Contact: Zlatina Marinova
Online disinformation and fake media content are a serious threat to democracy, the economy and society. Content verification in the highly complex online information landscape is a challenging task, even for experienced journalists, human rights activists or media literacy scholars.
WeVerify aims to address the challenges of content verification as follows.
The participants in WeVerify analyze extensive social media and web content and contextualize it within the broader online ecosystem. The goal is to expose fabricated content through cross-modal content verification, social network analysis, micro-targeted debunking and a blockchain-based public database of known fakes.
A key outcome of the project is the WeVerify platform for collaborative, decentralized content verification, tracking and debunking of incorrect information. The platform is open-source to engage not only newsroom and freelance journalists, but also different communities and citizen journalists. A premium version of the platform is also offered to facilitate low-overhead integration with current in-house content management systems as well as to support more advanced newsroom needs. It is supplemented by a digital companion to assist with verification tasks.
The results of the project are validated by professional journalists and debunking specialists from project partners (DW, AFP, DisinfoLab), external participants (e.g., members of the First Draft News network), the community of more than 2,700 users of the InVID verification plugin as well as by media literacy, human rights and emergency response organizations.
Sirma AI, trading as Ontotext, is the coordinator of the consortium, which includes leading universities and some of the biggest news agencies in Europe. We lead the work on using blockchain technology to authenticate journalists' judgment calls when they verify data and expose fake news.
Ontotext uses the database of known fakes as a demonstrator for cross-media metadata integration. The fake content is linked from news articles in Ontotext’s free public service NOW (News On the Web) by integrating the database of known fakes with another of Ontotext’s demonstrators – FactForge. The near-duplicate search module, which is intended for identifying recurring but possibly tampered-with content, is also in line with our efforts to develop a generic entity reconciliation tool.
Ontotext uses the examples of near-duplicate images, videos and text as a training dataset for our machine learning algorithms for entity matching. Integrating Ontotext’s Hercule fact-checking assistant within the WeVerify platform and solutions is also a great opportunity for us to make the platform even more beneficial for users.
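For text, the near-duplicate matching idea mentioned above can be sketched with word shingling and Jaccard similarity. This is a minimal illustration only: the actual WeVerify module is not described in this post, and all function names and the similarity threshold here are hypothetical choices for the example.

```python
def shingles(text, k=3):
    # Lower-case word k-shingles: overlapping windows of k consecutive words.
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    # Jaccard similarity of two shingle sets, in [0, 1].
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def near_duplicates(claim, corpus, threshold=0.5):
    # Return corpus entries whose shingle overlap with `claim` exceeds
    # `threshold` -- candidate recurring (and possibly tampered-with) copies.
    ref = shingles(claim)
    return [doc for doc in corpus if jaccard(ref, shingles(doc)) >= threshold]

corpus = [
    "the mayor announced a new tax on imported goods today",
    "breaking the mayor announced a new tax on imported goods",
    "local team wins championship after dramatic final",
]
hits = near_duplicates("the mayor announced a new tax on imported goods today", corpus)
# The first two corpus entries match; the unrelated third does not.
```

Production systems typically replace the pairwise comparison with locality-sensitive hashing (e.g., MinHash) so that candidate pairs can be found without scanning the whole corpus, and use perceptual hashes for images and video keyframes.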
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 825297.