It was 15 September 2022 when the vera.ai project officially started, sending 14 partners from across Europe on a mission: to develop and build trustworthy AI (Artificial Intelligence) solutions in the fight against disinformation. A couple of days later, the consortium got together for the first time in Marathon, Greece. That is how the vera.ai journey started.
What happened within vera.ai’s first year? What has the project accomplished and where is it heading? Take a look with us at 365 days of vera.ai and beyond.
Just a couple of months after the project kicked off, AI was – and still is – on everyone’s lips. The reason? The advent of so-called advanced generative AI systems, most notably ChatGPT, at the end of 2022.
While these systems offer unexpected possibilities and can deliver undoubtedly impressive results, they also make the generation of mis- and disinformation much easier and more convincing. As a consequence, the verification of digital content becomes even harder, and verification professionals need new and enhanced tools to support their workflows more than ever.
In short: vera.ai has been a highly relevant project from the beginning and has only grown in importance throughout its first year!
With the project goals and the potential it offers in mind, the consortium partners started to gather user requirements at the outset.
To do so, several methods were used, ranging from a survey to participatory design sessions with end users to an ethnographic study in one of Europe’s fact-checking newsrooms.
All of these approaches gave the consortium an extensive understanding of what potential users of the tools and services need and how they currently work. This understanding informs vera.ai’s tool developments and enhancements, as well as possible improvements and efficiency gains in verification workflows.
Based on the user requirements identified, research and development partners intensified their respective work and efforts. Also, and not surprisingly, they took and will continue to take the developments in (generative) AI into account.
However, the recent rush around AI models is only the most visible peak of a development that has been under way for years. Corvi et al. (2022) point out that “over the past decade, there has been tremendous progress in creating synthetic media, mainly thanks to the development of powerful methods based on generative adversarial networks (GAN). Very recently, methods based on diffusion models (DM) have been gaining the spotlight.”
Fortunately, one of the great strengths of the vera.ai consortium is that it combines a wide range of expertise and knowledge to conduct meaningful studies and development efforts in the fight against disinformation. Within vera.ai’s first year, research teams have already published (or are about to publish) a total of 25 scientific publications. Almost all of them shed light on the diffusion of disinformation and ways to counter it, its different facets and related questions.
Research focuses, among other things, on the individual level – for instance, “how fast a user can get into a misinformation filter bubble” and what it takes to “burst the bubble” (Srba et al., 2023) – and on the aggregate level, such as how to “establish trust in media production, distribution and consumption” against the background of “advances in media content manipulation and artificially generated content” (Temmermans et al., 2023).
The detection of (synthetically) manipulated content is another strand of research in the project. Some of this work puts existing tools to the test: for example, Corvi et al. (2022) “expose the forensics traces left by diffusion models, then study how current detectors, developed for GAN-generated images, perform on these new synthetic images, especially in challenging social-networks scenarios involving image compression and resizing”, while Serfaty et al. (2023) ask: “Are classic forensic tools effective on satellite imagery?”
Additionally, new detection approaches are developed, such as “Deepfake audio detection by speaker verification” (Pianese et al., 2022).
Researchers at CERTH-ITI, in turn, work, among other things, on “Improving Synthetically Generated Image Detection in Cross-Concept Settings” (Dogoulis et al., 2023) – another facet of the vera.ai research work.
A third focus within the research of vera.ai is on disinformation campaigns and (inauthentic) coordinated behaviour in this regard. For example, Giglietto et al. (2023) published “A Workflow to Detect, Monitor, and Update Lists of Coordinated Social Media Accounts Across Time: The Case of the 2022 Italian Election”.
While research is at the core of vera.ai, it is equally important to spread information about the project’s work, activities and results. Therefore, presentations were given in various settings and many conferences were attended – both as speakers and as attendees – with some events even co-organized by vera.ai itself.
Regarding the latter, the conference “Meet the Future of AI: Countering Sophisticated & Advanced Disinformation” deserves to be highlighted. It took place on 29 June 2023 in Brussels and was co-organised with the Horizon Europe research projects AI4Media, AI4Trust and TITAN, and the European Commission. About 90 participants attended on-site, and many more joined virtually to discuss and showcase various aspects of the development and use of Artificial Intelligence and its relationship to the disinformation sphere.
Additionally, looking at outreach activities, a successful X (formerly Twitter) account was set up and is regularly supplied with activity updates and relevant topical information (where X is heading under Elon Musk remains to be seen …), as was a Mastodon presence. You can also find vera.ai on YouTube, where we share project-related videos.
In the context of disseminating and communicating the project’s aims and outcomes, vera.ai travelled the world both physically and virtually: it made it to Global Fact 10 in Seoul, the world’s most important fact-checking summit, to the ICA pre-conference in Toronto, and to the Queensland University of Technology in Brisbane, Australia.
These are just three of the occasions and events at which the project was presented to interested parties – potential end users such as journalists, fact-checkers, human rights activists and other researchers. To find out where we have been throughout the first project year, visit the presentation section of the vera.ai website, where all publicly available presentations can be found (and, to avoid misconceptions: the vast majority of events attended were in Europe).
Summing up, vera.ai has spread the word about its activities wide and far.
Looking back, the vera.ai consortium can confidently say that it was a successful first project year. Obviously, everyone involved is eager to continue on this path.
At its core, vera.ai aims to develop tools that support digital content analysis and verification, and to enhance solutions and services that already exist. This work is largely based on the requirements gathered at the outset of the project. Therefore, one of the main tasks ahead is evaluating the developments coming out of the project in the coming months, hand in hand with continuous development work.
Several evaluation cycles throughout the upcoming two years are planned. Tools and services will be iteratively tested and improved in different stages.
The project’s user partners, who bring their networks of journalists, fact-checkers and others into this work, will ideally make use of what comes out of the project. Before that happens, we aim to involve potential end users in the co-development and design process. This ensures that developers obtain well-founded feedback on early-stage results and can come up with solutions that meet the clear demands and needs of the community and support their workflows.
In addition to all the above, vera.ai’s research partners aim to gather and spread further cutting-edge expertise and know-how in research domains such as machine learning, human language technologies, multimedia access and retrieval, network analysis and much more – all areas that can support the fight against disinformation.
The vera.ai project team is looking forward to making all the above – and more – a reality and hopes you will accompany us on this journey!
This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No:101070093. Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union. Neither the European Union nor the granting authority can be held responsible for them.