Benchmark Results

Adequate benchmarking of semantic repositories is a complex exercise involving many factors. The benchmark results presented here aim to provide sufficient information on how the different OWLIM versions perform important tasks (such as loading, inference and querying) with variations in size of data, complexity of data, number of clients and other relevant factors.

You can view the results of several major benchmarks below. The tasks and factors covered by each benchmark are listed in square brackets.

Descriptions of the test environments used for benchmarking are provided here.

Benchmarking: Tasks and Factors

The following is an overview of the different tasks for benchmarking semantic repositories:

  • Data loading, including parsing, persistence, and indexing of both instance data and ontologies (see the timing sketch after this list). The performance depends on several factors:
    • Materialization - whether and to what extent forward-chaining is performed at load time;
    • Complexity of the data model - support for extended RDF data models, e.g. named graphs, is computationally more 'expensive' than the simple triple model;
    • Data specifics - sparse versus dense datasets; presence and size of literals; number of predicates used; use of owl:sameAs and other alignment primitives;
    • Indexing specifics - repositories may or may not create a variety of different indices depending on the datasets used, the intended usage patterns, hardware constraints, etc.
  • Query evaluation. There are several factors that affect processing time and memory requirements:
    • Deduction - whether and to what extent backward-chaining is involved, whether the search tree is recursive, etc.;
    • Size of the result-set - fetching large result-sets can take considerable time;
    • Query complexity - the number of constraints (i.e. triple-pattern joins), the semantics of the query (e.g. clauses involving negation and disjunction), and the use of operators that are difficult to support through indexing (e.g. LIKE);
    • Number of clients - the capability of handling large numbers of simultaneous client requests and running simultaneous query evaluation processes is a key feature for any database engine.
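
As a rough illustration of how loading and query-evaluation times can be measured, the sketch below uses the open-source rdflib Python library with an in-memory graph. The input file name and the example query are placeholders, and an in-memory graph omits the persistence, inference and indexing work that a repository such as OWLIM performs, so this only shows the general shape of such a test.

    import time
    from rdflib import Graph

    # Parse (load) a dataset into an in-memory graph and measure elapsed time.
    # "dataset.nt" is a placeholder path; any serialization rdflib supports will do.
    g = Graph()
    start = time.perf_counter()
    g.parse("dataset.nt", format="nt")
    print(f"Loaded {len(g)} triples in {time.perf_counter() - start:.2f} s")

    # Time a SPARQL query and count the rows, since result-set size matters.
    query = "SELECT ?s ?type WHERE { ?s a ?type } LIMIT 10000"
    start = time.perf_counter()
    rows = list(g.query(query))
    print(f"Query returned {len(rows)} rows in {time.perf_counter() - start:.2f} s")

A fuller harness would add a warm-up phase, repeat each measurement several times, and report parsing, inference and indexing separately, but the basic pattern of timing the load and query phases independently stays the same.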

Modifications to the data and/or schema, e.g. updating and deleting values or changing class definitions, represent another important class of tasks performed against a semantic repository. Most contemporary RDF benchmark suites do not cover modification tasks, because their specifics, complexity, and importance vary considerably across applications and usage patterns.
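
To give a concrete idea of what a modification task looks like, the sketch below uses rdflib's SPARQL 1.1 Update support to replace a single property value. The namespace, resource, and property names are invented for illustration; a real benchmark would issue many such updates of varying size against a persistent repository.

    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")  # placeholder namespace
    g = Graph()
    g.add((EX.item1, EX.status, Literal("draft")))

    # Replace the old status value using a SPARQL 1.1 Update request.
    g.update("""
        PREFIX ex: <http://example.org/>
        DELETE { ex:item1 ex:status ?old }
        INSERT { ex:item1 ex:status "published" }
        WHERE  { ex:item1 ex:status ?old }
    """)

    print(list(g.objects(EX.item1, EX.status)))  # expected: [Literal('published')]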

Similarly, transaction size and level of isolation may also have a serious impact on performance. Although many semantic repositories provide some sort of transaction isolation, it is usually less comprehensive than the corresponding mechanisms in mature relational databases.
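
One way to observe the effect of transaction (batch) size is to load the same synthetic data with different batch sizes and compare the elapsed time. The sketch below posts INSERT DATA requests to a SPARQL 1.1 Update endpoint over HTTP; the endpoint URL and the generated triples are placeholders, and actual repositories may expect a different URL, authentication, or a native client API.

    import time
    import requests

    ENDPOINT = "http://localhost:8080/sparql-update"  # placeholder endpoint URL

    def load_in_batches(n_triples, batch_size):
        """Insert n_triples synthetic triples, batch_size per update request."""
        start = time.perf_counter()
        for offset in range(0, n_triples, batch_size):
            triples = " ".join(
                f"<http://example.org/s{i}> <http://example.org/p> {i} ."
                for i in range(offset, min(offset + batch_size, n_triples))
            )
            resp = requests.post(
                ENDPOINT, data={"update": f"INSERT DATA {{ {triples} }}"}, timeout=60
            )
            resp.raise_for_status()
        return time.perf_counter() - start

    for batch_size in (100, 1000, 10000):
        print(f"batch={batch_size}: {load_in_batches(100_000, batch_size):.1f} s")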

Performance Dimensions

There are several parameters that affect the speed of a semantic repository:

  • Scale - the size of the repository in terms of the number of RDF triples (more generally, facts or atomic assertions). For engines using forward-chaining and materialization, the volume of data they have to manage includes the inferred triples; for example, after materialization the RDF/OWL representation of WordNet expands from 1.9 million to 8.2 million statements (see the materialization sketch after this list);
  • Schema and data complexity - the complexity of the ontology/logical formalism, the specific ontology (or schema) and the dataset. Even when two datasets are encoded against the same ontology, if one of them is highly interconnected, with long chains of transitive properties, inference will be much more challenging;
  • Hardware and software setup - the performance can vary considerably depending on the version and configuration of the compiler or the virtual machine, the operating system, the configuration of the engine itself, and the hardware configuration.
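
The effect of materialization on repository size can be observed directly by computing the forward-chaining closure of a dataset. The sketch below uses the owlrl Python library to expand an rdflib graph under RDFS semantics and compares the triple counts before and after; the input file is a placeholder, and OWLIM's own rulesets and optimizations would of course produce different numbers.

    import owlrl
    from rdflib import Graph

    g = Graph()
    g.parse("ontology_and_data.ttl", format="turtle")  # placeholder input file
    explicit = len(g)

    # Compute the forward-chaining (materialized) closure under RDFS semantics.
    owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

    print(f"{explicit} explicit triples -> {len(g)} triples after materialization")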

More information about benchmarking RDF repositories can be found on the RDF Store Benchmarking page of the W3C ESW wiki.