Data Mesh 101: What it is and Why You Should Care

This article was originally published on TDWI.org

February 12, 2024 · 6 min read · Sumit Pal

Although the enterprise data landscape is littered with new data technology and offerings, the most pressing problem data teams face today isn’t a lack of technology or skills; it’s not knowing how to create a modern data experience. Data teams are struggling to tame the chaos and control the entropy of the data ecosystem and are evaluating modern approaches such as the data mesh to manage the confusion.

Why Data Mesh?

With the disaggregation of the data stack and profusion of tools and data available, data engineering teams are often left to duct-tape the pieces together to build their end-to-end solutions. The idea of the data mesh, first promulgated by Zhamak Dehghani a few years back, is an emerging concept in the data world. It proposes a technological, architectural, and organizational approach to solving data management problems by breaking up the monolithic data platform and de-centralizing data management across different domain teams and services.

In a centralized architecture, data is copied from source systems into a data lake or data warehouse to create a single source of truth serving analytics use cases. This quickly becomes difficult to scale due to data discovery and versioning issues, schema evolution, tight coupling, and a lack of semantic metadata.

The ultimate goal of the data mesh is to change the way data projects are managed within organizations. This enables organizations to empower teams across different business units to build data products autonomously with unified governance principles. It is a mindset shift from centralized to decentralized ownership, with the idea of creating an ecosystem of data products built by cross-functional domain data teams.

In a data mesh, each domain implements its own products and is responsible for its pipelines from ingestion, storage, and processing to delivery. This approach facilitates communication between different parts of the organization while allowing data sets to remain distributed across different locations. The responsibility for generating value from data is delegated to the people who understand the business and domain-specific rules. Organizations such as ABN AMRO, Roche, JP Morgan Chase, Intuit, and many others have embraced the intrinsic principles of the data mesh to model, build, and govern data platforms faster by decoupling domains.

Components of a Data Mesh

The data mesh introduces new concepts such as data contracts and data domains, but it also leverages existing components such as data catalogs and data-sharing platforms to accomplish its principles. Important concepts to understand include:

A data domain is the fundamental concept of a data mesh. A domain has a functional context: it is assigned to perform a certain task, and that is the reason it exists. Subject to organizational constraints, think of a data domain as a logical grouping of organizational units that fulfills a functional context. Once these domains interact and share data with each other, the mesh emerges. For example, marketing, sales, and products can be different data domains in an organization.

A domain distributes responsibility to the people who are closest to the data, know the business rules, and understand the semantics of the data for that domain. Each domain consists of a team of full-stack developers, business analysts, and data stewards who ingest operational data, build data products, and publish them with data contracts to serve other domains.

Data products are nodes on the mesh that encapsulate structural components such as code, data, metadata, and infrastructure. Data products are carefully crafted, curated, and presented to consumers as self-service, providing a reliable and trustworthy source for sharing data across the organization.

Some examples of data products are data sets, tables, machine learning models, and APIs. A data domain owns its data products and is accountable for managing its service-level agreements (SLAs), data quality, and governance. In order to build data products, the domains need data catalogs, data-sharing platforms, and data contracts.
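As a concrete (if simplified) illustration, a data product's structural components can be modeled as a small descriptor. All names and fields below are hypothetical, chosen for illustration rather than drawn from any data mesh standard:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """A node on the mesh: the data plus the metadata needed to consume it."""
    name: str                      # e.g. "sales.orders_daily"
    owner_domain: str              # the accountable domain team
    output_port: str               # where consumers read it: a table, file, or API
    schema: dict                   # column name -> type name
    sla_freshness_hours: int       # maximum acceptable data age per the SLA
    quality_checks: list = field(default_factory=list)

# A sales domain publishing a daily orders product (all values illustrative)
orders = DataProduct(
    name="sales.orders_daily",
    owner_domain="sales",
    output_port="warehouse://sales/orders_daily",
    schema={"order_id": "string", "amount": "decimal", "order_date": "date"},
    sla_freshness_hours=24,
    quality_checks=["order_id is unique", "amount >= 0"],
)
```

The point of the sketch is that ownership, SLA, and quality terms travel with the data itself, so a consuming domain never has to hunt for them.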

Data contracts enable domain developers to create products according to specifications. These contracts ensure interface compatibility and include terms of service and an SLA. They cover the utilization of data and specify the required data quality.

The primary objective of data contracts is to establish transparency around data usage and dependencies. When implementing them, however, users need time to familiarize themselves with data contracts and to understand the importance of data ownership. Data contracts should also include schema information, semantics, and lineage.
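To make the idea tangible, here is a minimal sketch of how a consuming domain might check a published data set against a contract's schema and quality terms. The contract fields and the null-fraction threshold are illustrative assumptions, not part of any published contract format:

```python
def conforms(contract: dict, rows: list) -> list:
    """Return a list of contract violations; an empty list means the data conforms."""
    violations = []
    schema = contract["schema"]  # column name -> expected Python type
    for i, row in enumerate(rows):
        missing = set(schema) - set(row)
        if missing:
            violations.append(f"row {i}: missing columns {sorted(missing)}")
        for col, expected in schema.items():
            if col in row and not isinstance(row[col], expected):
                violations.append(f"row {i}: {col} is not {expected.__name__}")
    # One simple quality term: maximum fraction of null-like values allowed per column
    max_null = contract.get("max_null_fraction", 0.0)
    for col in schema:
        nulls = sum(1 for r in rows if r.get(col) in (None, ""))
        if rows and nulls / len(rows) > max_null:
            violations.append(f"column {col}: null fraction exceeds {max_null}")
    return violations

# Illustrative contract and data for an orders product
contract = {"schema": {"order_id": str, "amount": float}, "max_null_fraction": 0.0}
good = [{"order_id": "A1", "amount": 10.0}]
bad = [{"order_id": "A2"}]  # missing the "amount" column
```

In practice such checks run automatically in the producing domain's pipeline, so violations surface before consumers ever see the data.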

Data sharing is another important component of the data mesh. It allows data domains to share data products rather than copying them. This minimizes data movement and discrepancies across systems and ensures that data contracts are adhered to when sharing data products in a secure, scalable, and zero-copy way.

Data Mesh: When to Use and When Not to Use

Today, there is no generally accepted standard for implementing a data mesh — adopting a data mesh isn’t a one-size-fits-all solution and requires a fundamental shift in how distributed data teams across the organization think about data platforms. General indicators of when data mesh is the right choice for an organization include:

  • Global deployments of data platforms span multiple business units across multiple domains
  • Autonomous teams work independently with diverse data and analytics, each with their own full-stack tools that are disparate from other units
  • Long delays, iteration cycles, and scaling bottlenecks caused by centralized IT and data engineering teams responsible for implementing new data sets and building data pipelines for the entire organization
  • Domain-specific data has nuanced semantics and complex preparation needs that require domain specialists rather than engineering specialists
  • The overall business philosophy defaults to decentralizing control to business units and domains, with a relatively thin layer of shared services provided by centralized corporate services

On the flip side, data mesh is not a fit for enterprises with a fully decentralized setup, because it still needs some centralized coordination to align, enable, and support the decentralized data teams. Likewise, data mesh is recommended only if an organization has a critical mass of data talent or mature data engineering teams.

Data mesh is not a silver bullet for data management problems. However, large organizations looking to scale their data platforms, both operational and analytical, eventually adopt some form of data mesh. Data mesh with a data product approach is emerging as a compelling way to manage data ecosystems within enterprises, with the goal of bringing agility and velocity through a self-serve approach to data management.

In the next installment of this three-part series, I will delve deeper into how organizations should go about adopting the data mesh paradigm.


Strategic Technology Director at Ontotext

Sumit Pal is an ex-Gartner VP Analyst in the data management and analytics space. He has more than 30 years of experience in the data and software industry in roles spanning startups to large enterprises, building, managing, and guiding teams and delivering scalable software systems across the stack, from the middle tier and data layer to analytics and UI, using big data, NoSQL, database internals, data warehousing, data modeling, and data science.
