PHEME: Computing Veracity Across Media, Languages, and Social Networks

  • Completed
  • Programme: FP7
  • Start date: 01.01.2014
  • End date: 31.03.2017

A good reputation is hard-won and easily lost. Every other day we see how information and news affect every level of our society: company stocks, sovereign debt, politicians. This is especially true in the Internet era, where rumors go viral. The research project PHEME (Computing Veracity Across Media, Languages, and Social Networks), co-funded by the European Union's ICT FP7 programme, aims to compute the veracity of rumors across media, languages, and social networks.

Contact: Georgi Georgiev

Project Overview

The Pheme project creates a computational framework for the automatic discovery and verification of information, at scale and in near real time. The framework enables a better understanding of the impact of rumors in social media: where they originate, how they spread, and who spreads them and why, with applications in many domains. Pheme started in January 2014 and ran for three years. The consortium consists of nine partners from England, Germany, Austria, Spain, Bulgaria, Switzerland and Kenya, and is coordinated by the University of Sheffield.

Ontotext’s Role

Ontotext plays a core role by providing LOD-based reasoning about rumors, i.e. a domain-independent model of the four kinds of rumors addressed: misinformation, disinformation, unverified information, and disputed information. Modelling and reasoning about rumors is particularly challenging, as it is necessary to represent multiple possible truths (e.g. superfoods may cause vs. prevent cancer) and to take into account the temporal validity of information, in other words to accommodate the possibility that two otherwise contradictory statements can both be valid at different points in time.
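To make the temporal-validity idea concrete, here is a minimal Python sketch (not PHEME's actual implementation; all names and data are illustrative) in which each statement carries a validity interval, and two statements count as contradictory only when their intervals overlap:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class Statement:
    subject: str
    predicate: str
    obj: str
    valid_from: date
    valid_to: Optional[date] = None  # None means "still valid"

def overlaps(a: Statement, b: Statement) -> bool:
    """True if the validity intervals of the two statements intersect."""
    a_end = a.valid_to or date.max
    b_end = b.valid_to or date.max
    return a.valid_from <= b_end and b.valid_from <= a_end

def contradict(a: Statement, b: Statement) -> bool:
    """Statements conflict only if they assert different objects for the
    same subject and predicate AND their validity intervals overlap."""
    return (a.subject == b.subject and a.predicate == b.predicate
            and a.obj != b.obj and overlaps(a, b))

# Two otherwise contradictory statements valid at different times do not conflict:
s1 = Statement("AcmeCorp", "hasCEO", "Alice", date(2010, 1, 1), date(2013, 12, 31))
s2 = Statement("AcmeCorp", "hasCEO", "Bob", date(2014, 1, 1))
print(contradict(s1, s2))  # False: the intervals do not overlap
```

In a triple store such as GraphDB this would typically be modelled with reified statements or named graphs carrying validity metadata; the plain dataclass above only illustrates the reasoning pattern.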

Moreover, Ontotext’s GraphDB™ is adapted as a semantic repository with scalable, light-weight reasoning. Knowledge from relevant Linked Open Data (LOD) resources is used, such as Ontotext’s FactForge, DBpedia, OpenCyc, and Ontotext’s LinkedLifeData. These datasets provide invaluable large-scale world knowledge of meronymy, synonymy, antonymy, hypernymy, and functional relations, all of which are essential features for classifying entailment and contradiction.
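The following sketch shows, in simplified form, how such lexical relations can serve as features for entailment/contradiction classification. The relation tables are tiny hand-crafted stand-ins for the large-scale LOD lookups described above; the function names and rules are illustrative assumptions, not PHEME's classifier:

```python
# Toy stand-ins for LOD-derived lexical knowledge (e.g. from FactForge or
# DBpedia); real systems would query millions of such relations.
ANTONYMS = {("cause", "prevent"), ("rise", "fall"), ("open", "close")}
SYNONYMS = {("cause", "trigger"), ("rise", "increase")}

def relation(p1: str, p2: str) -> str:
    """Look up the lexical relation between two (lemmatized) predicates."""
    pair, rev = (p1, p2), (p2, p1)
    if p1 == p2 or pair in SYNONYMS or rev in SYNONYMS:
        return "synonym"
    if pair in ANTONYMS or rev in ANTONYMS:
        return "antonym"
    return "unrelated"

def classify(claim: tuple, hypothesis: tuple) -> str:
    """Classify a pair of (subject, predicate, object) statements as
    entailment, contradiction, or unknown, using only the predicate relation."""
    (s1, p1, o1), (s2, p2, o2) = claim, hypothesis
    if s1 == s2 and o1 == o2:
        rel = relation(p1, p2)
        if rel == "synonym":
            return "entailment"
        if rel == "antonym":
            return "contradiction"
    return "unknown"

print(classify(("superfoods", "cause", "cancer"),
               ("superfoods", "prevent", "cancer")))  # contradiction
```

A production classifier would combine many such knowledge-based features (antonymy, hypernymy, meronymy, functional relations) with surface and syntactic features rather than rely on a single lookup.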
