(This post was originally published at the Linked Data Benchmark Council’s blog)
Publishing and media businesses are going through a transformation.
I took this picture in June 2010 next to Union Square in San Francisco. I was smoking and wrestling with my jet lag in front of the Hilton. Inside, in the lobby of the SemTech 2010 conference, attendees were watching a game from the FIFA World Cup in South Africa.
In the picture, the self-service newspaper stand is empty, except for one free paper. It was not so long ago, in the year 2000, that this stand was full. Back then, people in the Bay Area were willing to pay for printed newspapers.
But this is no longer true.
What’s driving this change in publishing and media?
Alongside other changes in the industry, publishers figured out that it is critical to add value through better authoring, promotion, discoverability, delivery and presentation of precious content.
While plain news reporting can be replicated easily, premium content and services are not as easy to create. Think of an article that not only reports the new facts but also refers back to previous events and is complemented by an info-box of relevant facts. Such an article allows one to interpret and comprehend the news more effectively.
This is the well-known journalistic aim to put news into context. It is also well-known that producing such news in “near real-time” is difficult and expensive using legacy processes and content management technology.
Another example would be a news feed that delivers good coverage of information relevant to a narrow subject – for example, a company, a storyline or a region. Judging by the demand for intelligent press clipping services like Factiva, such channels are in demand but are not straightforward to produce with today’s technology.
Despite the common perception that automated recommendations for related content and personalized news are technology no-brainers, suggesting truly relevant content is far from trivial.
Finally, to use an example from the life sciences, the ability to quickly find scientific articles discussing asthma and X-rays when searching for respiration disorders and radiation requires a search service that is not easy to deliver.
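As a sketch of what such a hierarchy-aware search involves, the SPARQL query below retrieves articles annotated with the broader concepts or with anything beneath them; the ex: annotation property and the concept URIs are hypothetical placeholders, not a real vocabulary:

```sparql
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX ex:   <http://example.org/annotation#>  # hypothetical vocabulary and concept URIs

# Find articles tagged with the broader topics or with any concept below
# them in the hierarchy, e.g., asthma under respiration disorders and
# X-rays under radiation.
SELECT DISTINCT ?article
WHERE {
  VALUES ?topic { ex:RespirationDisorders ex:Radiation }
  ?article ex:annotatedWith ?concept .
  ?concept skos:broader* ?topic .
}
```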
Many publishers have been pressed to advance their business. This, in turn, has led to a quest to innovate. And Semantic Technology can help publishers in two fundamental ways:

- enriching content with semantic annotations that link it to descriptions of the concepts and entities it mentions;
- managing this metadata in semantic graph databases (triplestores), so that content can be aggregated, queried and delivered dynamically.
In this post, I will write about semantic annotation and how it enables application scenarios like BBC’s Dynamic Semantic Publishing (DSP). I will also present the business case behind DSP. The final part of the post is about triplestores – semantic graph database engines, used in Dynamic Semantic Publishing. To be even more specific, I will write about the Semantic Publishing Benchmark (SPB), which evaluates the performance of triplestores in DSP scenarios.
The most popular meaning of semantic annotation is the process of enriching text with links to (descriptions of) concepts and entities mentioned in it. This usually means tagging either the entire document or specific parts of it with identifiers of entities. These identifiers allow one to retrieve descriptions of the entities and their relations to other entities – additional structured information that fuels better search and presentation.
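For illustration, the result of annotating a document might be recorded as in the following sketch, written as a SPARQL update; the ex: namespace, the document URI and the ex:mentions property are hypothetical, while the entity identifiers are real DBpedia URIs:

```sparql
PREFIX dbr: <http://dbpedia.org/resource/>
PREFIX ex:  <http://example.org/annotation#>  # hypothetical annotation vocabulary

# Record that a (placeholder) news item mentions two disambiguated entities;
# dbr:Paris identifies the French capital, not Paris, Texas.
INSERT DATA {
  ex:article-1024 ex:mentions dbr:Paris , dbr:Paris_Hilton .
}
```

Once such links are in place, everything the dataset knows about dbr:Paris (country, population, coordinates) becomes available to the publisher’s search and rendering pipeline.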
The concept of using text-mining for automatic semantic annotation of text with respect to very large datasets, such as DBpedia, emerged in the early 2000s. In practical terms, it means using such large datasets as a sort of gigantic gazetteer (name lookup tool), combined with the ability to disambiguate.
Figuring out whether “Paris” in a text refers to the capital of France, to Paris, Texas, or to Paris Hilton is crucial in this context. Sometimes this is extremely difficult – try to instruct a computer how to guess whether “Hilton” in the second sentence of this post refers to a hotel from the chain founded by Paris Hilton’s grandfather or means that I met Paris Hilton in person on the streets of San Francisco.
Today, there are plenty of tools and services that offer automatic semantic annotation. Although text-mining cannot deliver 100% correct annotations, there are plenty of scenarios where technology like this can revolutionize a business. This is the case with the Dynamic Semantic Publishing scenario described below.
Dynamic Semantic Publishing is a model for using Semantic Technology in media, developed by a group led by John O’Donovan and Jem Rayfield at the BBC. The implementation of Dynamic Semantic Publishing behind BBC’s FIFA World Cup 2010 website was the first high-profile success story for the usage of Semantic Technology in media. It is also the basis for the SPB benchmark – sufficient reason to introduce this use case at length below.
The BBC Future Media & Technology department has transformed the BBC’s relational content management model and static publishing framework into a fully dynamic semantic publishing architecture. With minimal journalistic management, media assets are enriched with links to concepts semantically described in a triplestore.
This novel semantic approach provides improved navigation, content re-use and re-purposing through automatic aggregation and rendering of links to relevant stories. At the end of the day, Dynamic Semantic Publishing improves the user experience on BBC’s website.
A high-performance dynamic semantic publishing framework facilitates the publication of automated metadata-driven web pages that are light-touch, requiring minimal journalistic management, as they automatically aggregate and render links to relevant stories.
Jem Rayfield, Senior Technical Architect, BBC News and Knowledge
The Dynamic Semantic Publishing architecture of the BBC curates and publishes content (e.g., articles or images) based on embedded Linked Data identifiers, ontologies and associated inference. It allows journalists to determine levels of automation (“edited by exception”) and supports semantic advertisement placement for audiences outside the UK.
The following quote explains the workflow when a new article gets into BBC’s content management system:
In addition to the manual selective tagging process, journalist-authored content is automatically analysed against the World Cup ontology. A natural language and ontological determiner process automatically extracts World Cup concepts embedded within a textual representation of a story. The concepts are moderated and, again, selectively applied before publication. Moderated, automated concept analysis improves the depth, breadth and quality of metadata publishing.
Journalist-published metadata is captured and made persistent for querying using the resource description framework (RDF) metadata representation and triple store technology. A RDF triplestore and SPARQL approach was chosen over and above traditional relational database technologies due to the requirements for interpretation of metadata with respect to an ontological domain model. The high level goal is that the domain ontology allows for intelligent mapping of journalist assets to concepts and queries.
The chosen triplestore provides reasoning following the forward-chaining model and thus implied inferred statements are automatically derived from the explicitly applied journalist metadata concepts. For example, if a journalist selects and applies the single concept “Frank Lampard”, then the framework infers and applies concepts such as “England Squad”, “Group C” and “FIFA World Cup 2010” …
Jem Rayfield
Image via BBC
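To make the inference step more tangible, the sketch below emulates one such rule as an explicit SPARQL update; the sport: and cms: terms are hypothetical stand-ins for the World Cup ontology, and in the real system a forward-chaining reasoner derives these statements automatically rather than through updates like this:

```sparql
PREFIX sport: <http://example.org/sport#>  # hypothetical stand-in for the World Cup ontology
PREFIX cms:   <http://example.org/cms#>    # hypothetical tagging property

# Rule sketch: an asset tagged with a player is also tagged with everything
# the player (transitively) belongs to, e.g., Frank Lampard -> England Squad
# -> Group C -> FIFA World Cup 2010.
INSERT {
  ?asset cms:taggedWith ?group .
}
WHERE {
  ?asset  cms:taggedWith ?player .
  ?player sport:memberOf+ ?group .
}
```

Deriving the broader tags this way is what lets a journalist apply a single concept while the site still aggregates the asset by squad, group and tournament.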
One can consider each of the “aggregation pages” of the BBC as a sort of feed or channel serving content related to a specific topic. From this perspective, with its World Cup 2010 website the BBC was able to provide more than 700 thematic channels.
The World Cup site is a large site with over 700 aggregation pages (called index pages) designed to lead you on to the thousands of story pages and content
…we are not publishing pages, but publishing content as assets which are then organized by the metadata dynamically into pages, but could be re-organized into any format we want much more easily than we could before.
… The index pages are published automatically. This process is what assures us of the highest quality output, but still save large amounts of time in managing the site and makes it possible for us to efficiently run so many pages for the World Cup.
John O’Donovan, Chief Technical Architect, BBC Future Media & Technology
To get a real feel for the load on the triplestore behind BBC’s World Cup website, here are some statistics:
LDBC’s Semantic Publishing Benchmark (SPB) measures the performance of an RDF database under a load typical of metadata-based content publishing, such as the BBC Dynamic Semantic Publishing scenario. Such a load combines tens of updates per second (e.g., adding metadata about new articles) with even higher volumes of read requests (SPARQL queries collecting recent content and data to generate web pages on a specific subject, e.g., Frank Lampard).
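A page-generating read query of this kind might look like the following sketch; the cwork: terms approximate the BBC “creative work” ontology used by SPB, while the topic URI is a hypothetical placeholder:

```sparql
PREFIX cwork: <http://www.bbc.co.uk/ontologies/creativework/>

# Collect the ten most recently modified creative works about a given
# topic, as a page-rendering query would; the topic URI is a placeholder.
SELECT ?work ?title ?modified
WHERE {
  ?work a cwork:CreativeWork ;
        cwork:about <http://example.org/topic/Frank_Lampard> ;
        cwork:title ?title ;
        cwork:dateModified ?modified .
}
ORDER BY DESC(?modified)
LIMIT 10
```

In SPB, queries of this kind run concurrently with the update stream, which is precisely what makes the workload demanding.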
SPB simulates a setup for media that deals with large volumes of streaming content, e.g., articles, pictures, videos. This content is enriched with metadata that describes it through links to reference knowledge:

- ontologies that define the domain model (e.g., the BBC Core, Sport and News ontologies);
- datasets describing the entities that the content refers to (e.g., BBC reference lists and Geonames).

In this scenario, the triplestore holds both the reference knowledge and the metadata. The main interactions with the repository are of two types:

- updates adding metadata about newly published content, arriving at tens per second;
- aggregation queries collecting recent content and data to generate web pages on a specific subject.
SPB v.1.0 directly reproduces the Dynamic Semantic Publishing setup at the BBC. The reference dataset consists of the BBC ontologies (Core, Sport, News), BBC datasets (lists of F1 teams, MPs, etc.) and an excerpt from Geonames covering the UK. The benchmark comes with a metadata generator that allows one to set up experiments at different scales. The metadata generator produces 19 statements per Creative Work (BBC slang for all sorts of media assets). The standard scale factor is 50 million statements.
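As an abbreviated sketch (not the generator’s actual output), the metadata for a single Creative Work might look like this, written as a SPARQL update; all URIs and values are placeholders, and the property names approximate the BBC creative work ontology:

```sparql
PREFIX cwork: <http://www.bbc.co.uk/ontologies/creativework/>
PREFIX xsd:   <http://www.w3.org/2001/XMLSchema#>

# Abbreviated metadata for one Creative Work; the real generator emits
# about 19 statements per asset, and all URIs and values here are placeholders.
INSERT DATA {
  <http://example.org/cwork/1> a cwork:CreativeWork ;
      cwork:title        "Placeholder headline" ;
      cwork:dateCreated  "2010-06-23T18:04:00Z"^^xsd:dateTime ;
      cwork:dateModified "2010-06-23T18:30:00Z"^^xsd:dateTime ;
      cwork:about        <http://example.org/topic/Frank_Lampard> ;
      cwork:mentions     <http://example.org/topic/Group_C> .
}
```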
A more technical introduction to SPB can be found in this post. Results from experiments with SPB on different hardware configurations, including AWS instances, are available in this post. An interesting finding is that, given the current state of the technology (particularly the GraphDB v.6.1 engine) and today’s cloud infrastructure, the load of BBC’s World Cup 2010 website can be handled on AWS by a cluster that costs only $81/day.
Although SPB v.1.0 closely follows the usage scenario for triplestores in BBC’s Dynamic Semantic Publishing incarnations, it is relevant to a wide range of media and publishing scenarios where large volumes of “fast flowing” content need to be “dispatched” to serve the various information needs of a huge number of consumers. The main challenges can be summarized as follows:

- sustaining tens of metadata updates per second concurrently with a much higher volume of read queries;
- answering aggregation queries that must immediately reflect the most recently published content;
- interpreting metadata against the reference knowledge, including statements derived by inference.
We are in the final stages of testing the new version 2.0 of SPB. The benchmark has evolved to allow retrieval of semantically relevant content in a more advanced manner and, at the same time, to demonstrate how triplestores can offer simplified and more efficient querying. The major changes in SPB v.2.0 follow these two directions.
Want to learn more about RDF triplestores like Ontotext’s GraphDB?