Event Extraction Based on Fine-Tuned Text2Event Transformer Speeds up the Fact-checking Process

EDNA (Event Detector for Narrative Analysis) extracts events from text to find out who did what, when, where, and why. EDNA untangles patterns in disinformation related to a particular event and uncovers links between events based on actors, places, time, and concepts.

March 22, 2024 · Petar Ivanov

This is part of Ontotext’s AI-in-Action initiative aimed at enabling data scientists and engineers to benefit from the AI capabilities of our products.

Disinformation is on the rise and fact-checkers struggle to keep up with the amount of data that is to be analyzed. We’ve adapted and extended a state-of-the-art event extraction algorithm to disinformation content. The resulting solution detects a broader set of event types and improves the overall recall of events. Consequently, it helps fact-checkers save time by automating the extraction of information relevant to specific event occurrences.

Professionals working on disinformation detection in domains such as Media and Publishing, social studies, and civil security have to sift through large quantities of online content in order to identify mentions of events of interest and extract the key information about them. Automating this process allows them to focus on analyzing the extracted information, rather than looking for it.

One group of events that fact-checkers are actively looking for in posts is climate change events, such as wildfires, hurricanes, or earthquakes. The discussion around such events is often muddled with inaccurate claims about what caused the event, the extent of damage, or the number of affected people.

Automating event extraction

Event extraction is the process of extracting information about events mentioned in text by employing various AI techniques. As a first step, event detection is performed to identify the span of text describing an event and to determine its type. Next, additional details called event arguments are extracted from the identified span of text. Arguments describe the actors involved or affected by the event, the place and time, and more.
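The two steps can be pictured as filling in a small structured record per event. A minimal illustrative sketch (the field names below are our own, not a published API):

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One extracted event: detection yields the type, trigger, and span;
    argument extraction fills the role -> text mapping."""
    etype: str                     # e.g. "Attack", "Severe-Weather"
    trigger: str                   # word(s) evoking the event, e.g. "became"
    span: tuple[int, int]          # character offsets of the event mention
    arguments: dict[str, str] = field(default_factory=dict)  # role -> text
```

A detection-only system would stop after `etype`, `trigger`, and `span`; argument extraction populates `arguments`.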

The model and the schema

Our work on automating event extraction from disinformation related content started with a review of the state-of-the-art solutions. We shortlisted several approaches and selected the Text2Event¹ model. Text2Event is a transformer model by Lu et al.² trained to detect the 33 event types defined in the ACE³ event annotation guidelines. These 33 event types span multiple aspects of a person’s life that are grouped into 8 categories:

  • life (be-born, marry, divorce)
  • movement (transport)
  • transaction (transfer money, transfer ownership)
  • business (start organization, declare bankruptcy)
  • conflict (attack, demonstrate)
  • contact (meet, phone/write)
  • personnel (start position, nominate, elect)
  • justice (arrest/jail, fine, appeal)

Consider the following example:

“Heidi Cruz, Ted’s wife, became managing director of Goldman Sachs. They oversee the Texas utilities.”

Here, the word “became” is indicative of a Start-Position event spanning the first sentence of the text. From that sentence, we can extract additional details about the event in the form of the arguments Person (“Heidi Cruz”), Position (“managing director”), and Entity (“Goldman Sachs”).
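Text2Event generates such events as a linearized parenthesized tree, roughly of the shape `((Start-Position became (Person Heidi Cruz) …))`. A rough sketch of how output in that style could be parsed back into records (the exact serialization details may differ from the released model):

```python
import re

def tokenize(linearized: str):
    """Split '((Type trigger (Role text)))' into parens and word tokens."""
    return re.findall(r"\(|\)|[^()\s]+", linearized)

def parse_node(tokens, i):
    """Parse one '(...)' node at tokens[i]; return ((words, children), next_i)."""
    i += 1  # skip the opening '('
    words, children = [], []
    while tokens[i] != ")":
        if tokens[i] == "(":
            child, i = parse_node(tokens, i)
            children.append(child)
        else:
            words.append(tokens[i])
            i += 1
    return (words, children), i + 1  # skip the closing ')'

def parse_events(linearized: str):
    """Turn a linearized tree into a list of event records."""
    (_, event_nodes), _ = parse_node(tokenize(linearized), 0)
    return [
        {
            "type": words[0],
            "trigger": " ".join(words[1:]),
            "arguments": [(w[0], " ".join(w[1:])) for (w, _) in children],
        }
        for (words, children) in event_nodes
    ]
```

Applied to the example above, this would recover the Start-Position event with its Person, Position, and Entity arguments.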

Extending the schema

In order to address the specific needs of professionals who deal with disinformation content, we had to cover additional types of events. To this end, we extended the above schema with 3 new event types, while ensuring that they didn’t have significant overlap with those existing in ACE:

  • Cure-Claim – the act of stating whether something is a cure for a given medical condition or disease
  • Severe-Weather – the occurrence of weather conditions hazardous to human life and property 
  • Rule-Change – the creation, removal, or other change of a law, legislation, or some other societal rule 
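In code, extending the schema amounts to registering the new types alongside the ACE ones. The argument roles below are hypothetical placeholders, since the article does not list the exact roles defined for each new type:

```python
# Hypothetical argument roles for the three new event types -- illustrative
# placeholders only; the actual roles are defined in the project's schema.
EXTENDED_EVENT_TYPES = {
    "Cure-Claim":     ["Agent", "Treatment", "Condition"],
    "Severe-Weather": ["Weather-Type", "Place", "Time"],
    "Rule-Change":    ["Rule", "Authority", "Affected-Group"],
}
```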

With the newly added event types we aim to address some of the most common narratives found in disinformation content, which are otherwise not covered by the ACE event schema. For one, the COVID-19 pandemic and the surrounding confusion about the efficacy of treatments emphasized the need to address claims of miracle cures and diets.

Climate change is another topic that has been consistently plagued with misinformation over the last few decades. 

Finally, an evergreen topic in misinformation content is about rules and legislation that affect the broader population. The ACE event schema contains 13 different justice event types, but they describe events affecting individual persons or organizations. The newly introduced Rule-Change event fills that gap and acts as a catch-all event type for any rule that impacts a broader subset of society, such as the Paris accords, introduction of vaccine passports, or EU sanctions against Russia.

From raw data to annotated samples

In order to fine-tune the selected model on the new event types, we sampled and annotated disinformation claims in English from the Database of Known Fakes (DBKF), which contains over 100 thousand claims in 30 languages. DBKF interlinks them (along with related claim reviews by fact-checkers, appearances, and news articles) in one unified knowledge graph stored in a GraphDB instance.

Rather than using all English claims available, we pre-selected candidate samples for annotation to contain viable triggers for the particular event. For example, words like “cure”, “heal”, and “treat” are often indicative of a Cure-Claim event.

Out of the pre-selected claims, we annotated about 80 per event type, making sure to include some negative samples. These are claims containing viable triggers but no events, such as statements about curing wood or Halloween trick-or-treat. For comparison, the original ACE 2005 dataset averages about 44 positive samples per event type across the 20 least common event types.
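Trigger-based pre-selection could look like the sketch below. The trigger lexicon is hypothetical (the actual list used in the project is not published); note how a word-level match happily pre-selects the “curing wood” claim too, which is exactly the kind of negative sample the annotation set needs:

```python
# Hypothetical trigger lexicon -- illustrative only.
TRIGGERS = {
    "Cure-Claim":     {"cure", "cures", "curing", "heal", "heals", "treat", "treats"},
    "Severe-Weather": {"wildfire", "hurricane", "flood", "drought", "heatwave"},
    "Rule-Change":    {"law", "legislation", "ban", "sanction", "mandate"},
}

def candidate_event_types(claim: str) -> set[str]:
    """Return event types whose triggers appear in the claim (word-level match)."""
    words = {w.strip(".,!?\"'").lower() for w in claim.split()}
    return {etype for etype, triggers in TRIGGERS.items() if words & triggers}
```

Claims matching at least one trigger become annotation candidates; the annotators then decide whether an actual event is present.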

As we know, AI models are only as good as their data. To ensure the high quality of the annotations, we followed established principles for designing a Gold Standard corpus. Each sample was annotated by three independent annotators using Ontotext Metadata Studio (OMDS). OMDS allowed us to set a fixed schema, according to which all annotations were performed. This ensured that, for each event type, only viable argument types could be annotated. In addition, when using OMDS, each annotator only views their own annotations and isn’t influenced by the annotations of the others. The result is a Gold Standard dataset of 248 annotated samples for the 3 new event types.
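With three independent annotators per sample, disagreements must be resolved before the sample enters the Gold Standard. The article does not describe the adjudication step, so the majority vote below is purely illustrative:

```python
from collections import Counter

def adjudicate(labels: list[str]):
    """Majority vote over one sample's labels from three independent annotators.
    Returns the agreed label, or None when all three disagree (flag for manual
    review). Illustrative only; the actual adjudication procedure may differ."""
    label, votes = Counter(labels).most_common(1)[0]
    return label if votes >= 2 else None
```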

Model fine-tuning

Whenever a pre-trained model is fine-tuned on a new task, its performance on previous tasks is expected to degrade, a phenomenon known as catastrophic forgetting. Since we are interested in identifying both the old and the new event types, careful selection and elimination of fine-tuning data was pivotal in preserving the model’s prior abilities to the best extent possible.

We fine-tuned and evaluated numerous variations of the model trying to strike a balance between learning new event types well and retaining prior knowledge. Depending on the number of samples and epochs, fine-tuning took up to a few hours on a single NVIDIA RTX A5000 GPU.
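Fine-tuning a Text2Event-style model means pairing each claim with a target string in the model’s linearized event format. A sketch of building such targets, with hypothetical role names:

```python
def linearize_event(etype: str, trigger: str, arguments) -> str:
    """Serialize one event in a Text2Event-style parenthesized tree."""
    parts = [etype, trigger] + [f"({role} {text})" for role, text in arguments]
    return "(" + " ".join(parts) + ")"

def make_target(events) -> str:
    """Join all events of one sample under a single outer pair of parentheses."""
    return "(" + " ".join(linearize_event(*ev) for ev in events) + ")"
```

Mixing such new-type samples with a replayed subset of the original ACE training data is one common way to limit forgetting of the old event types.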

The result is EDNA – Event Detector for Narrative Analysis. The model learned to detect the 3 new event types as reliably as the ones it was initially trained on. In terms of predicting the original event types, our efforts resulted in more than doubling the recall compared to the original model with only a minor drop in precision.

Structured data = better insights

The extracted events conform to a structure defined by the event schema. We use a GraphDB instance to store and interlink them in a knowledge graph. We also store the raw texts in the same knowledge graph and link them to the corresponding events extracted from them. This offers a lot of flexibility in terms of designing complex faceted searches. Analysis of results from such searches can offer additional insights about narratives within and across documents that were previously hidden.

EDNA provides fact-checkers with time-saving means to sift through large quantities of online content and focus their efforts only on the content that is relevant to their task. Highlighting the sentence that discusses an event of interest within a large document directs an expert’s attention to the span of text that is most relevant. Meanwhile, the identified event arguments provide a structured representation of said event, which further helps the expert to prove or disprove the whole or parts of the particular claim. 

Event arguments can also be used to filter the identified events. A climatologist might want to address severe-weather claims about forest fires in California, a virologist would focus on cure-claims about a new type of vaccine, and a jurist might be interested in Rule-Change events discussing the Data Act’s impact on SMEs.
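In production this filtering is a SPARQL faceted search against the GraphDB knowledge graph; the in-memory stand-in below conveys the idea, with our own illustrative record fields:

```python
def filter_events(events, etype=None, **arg_filters):
    """Faceted filter over extracted event records.
    etype restricts the event type; keyword arguments require an exact
    role -> text match. In-memory stand-in for a SPARQL query; the record
    fields are illustrative, not the production vocabulary."""
    hits = []
    for ev in events:
        if etype is not None and ev["type"] != etype:
            continue
        if all(ev["arguments"].get(role) == text
               for role, text in arg_filters.items()):
            hits.append(ev)
    return hits
```

For example, `filter_events(events, etype="Severe-Weather", Place="California")` would narrow results to the climatologist’s use case above.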

Comparing EDNA to RED

EDNA isn’t the only event detection solution by Ontotext. Last month, we introduced the Relation and Event Detector (RED). The two solutions were developed in parallel as they serve different use cases and requirements. While RED focuses on helping users make informed investment decisions and perform risk assessment, EDNA addresses the needs of fact-checkers. RED leverages the abilities of an LLM to enable non-technical users to define new event types. On the other hand, EDNA helps fact-checkers quickly identify whether a narrative is centered around a pre-set number of events and explore links between narratives based on the events in them.

When compared to RED, EDNA has several advantages:

  • Privacy – EDNA runs on CPU and can be fine-tuned on a single consumer-grade GPU. Your model and your data never have to leave your premises.
  • Efficiency and speed – RED uses an LLM for the extraction of events. LLMs are by design larger than other language models and consequently require more computational resources, time, and energy.
  • Consistency – LLM outputs aren’t deterministic, which makes them an unviable solution when reproducibility matters.

On the other hand, RED has advantages as well: 

  • In RED, the user can easily add a new event type to the list, while extending EDNA requires proper selection and annotation of gold standard samples, followed by fine-tuning and evaluation of a new model.
  • Unlike RED, EDNA is not yet integrated with an entity linking solution. 

What’s next

The extracted information about a particular event, such as the location, time, and agents involved, can be further enriched if the extracted event arguments are disambiguated via entity linking to known concepts. Our current event detection model works on English data and as such can be combined with Ontotext’s CEEL, which connects mentions of Common English Entities to Wikidata.

Another direction of improvement is adapting the model to other languages as well. Disinformation spreads rapidly in all languages, but most fact-checking organizations lack the tools to process such content in large quantities in languages other than English. Ontotext’s state-of-the-art Multilingual Entity Linking solution can be combined with the multilingual event detection solution, once it becomes available.

Do you want to see what Ontotext’s event detection for disinformation is capable of?


Ontotext’s work on event detection for disinformation has been carried out as part of the Vigilant project.

This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement No:101073921. Views and opinions expressed are however those of the author only and do not necessarily reflect those of the European Union or the European Research Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.

References:

¹ https://github.com/luyaojie/Text2Event
² https://aclanthology.org/2021.acl-long.217/
³ The ACE Event Corpus is one of the most widely used benchmark datasets for event extraction
