March 2026

Meet Hanna Traaholt, a consultant within the Strategy and Operations team at BearingPoint Sweden. Hanna has been with the firm for two and a half years and is currently working on a sourcing and procurement transformation focused on realizing identified savings in external spend.

The context

The project began with an assessment to identify potential savings across external spend categories. Now in the realization phase, the focus has shifted from analysis to execution – ensuring that identified opportunities translate into tangible results. A central part of this work is building data models that create transparency and support fact‑based decision making.

What we see in practice

Many organizations struggle with limited transparency across their data and spend categories. Information is often fragmented across systems, making it difficult to build a shared understanding of where money is spent and where improvement opportunities exist.

This is where data models play a crucial role. By creating structure and a shared view of the data, they enable organizations to move from fragmented information to a common version of the truth. In practice, this clarity is often a prerequisite for effective transformation and informed decision making.
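As an illustration of that idea, the minimal Python sketch below normalizes records from two hypothetical source systems (an ERP export and an invoice feed, with field names invented for this example) into one shared schema, so that a question like "how much do we spend per category?" has a single, consistent answer:

```python
from dataclasses import dataclass

@dataclass
class SpendRecord:
    """One row in the unified spend model (illustrative schema)."""
    supplier: str
    category: str
    amount_eur: float

# Records as they might arrive from two separate systems,
# each with its own field names (invented sample data).
erp_rows = [
    {"vendor_name": "Acme AB", "spend_cat": "IT Hardware", "amount": 12000.0},
]
invoice_rows = [
    {"supplier": "Acme AB", "category": "IT Hardware", "total_eur": 3000.0},
]

def normalize_erp(row):
    return SpendRecord(row["vendor_name"], row["spend_cat"], row["amount"])

def normalize_invoice(row):
    return SpendRecord(row["supplier"], row["category"], row["total_eur"])

# One schema for all sources: the "common version of the truth".
unified = [normalize_erp(r) for r in erp_rows] + \
          [normalize_invoice(r) for r in invoice_rows]

by_category = {}
for rec in unified:
    by_category[rec.category] = by_category.get(rec.category, 0.0) + rec.amount_eur

print(by_category)  # {'IT Hardware': 15000.0}
```

The point is not the code itself but the mapping step: once every source is translated into the same structure, aggregation and reporting no longer depend on which system a record came from.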

What really makes a difference

A well‑designed data model reduces manual work, minimizes errors, and streamlines reporting processes. This lowers operational effort and cost while improving the quality of insights across the business.

In sourcing and procurement specifically, strong data models also strengthen negotiation positions. Access to accurate, comparable data on volumes, pricing, and supplier performance provides an evidence‑based foundation that supports better negotiation outcomes and long‑term cost reduction strategies.
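One simple way such data supports negotiation preparation is a volume-weighted average price per supplier, which makes quotes comparable even when volumes differ. The sketch below uses invented line items purely for illustration:

```python
# Line items as (supplier, unit_price, volume) - invented sample data.
lines = [
    ("Supplier A", 10.0, 500),
    ("Supplier A", 9.5, 300),
    ("Supplier B", 9.0, 800),
]

def weighted_avg_price(items):
    """Volume-weighted average unit price per supplier."""
    totals = {}  # supplier -> (total spend, total volume)
    for supplier, price, volume in items:
        spend, vol = totals.get(supplier, (0.0, 0))
        totals[supplier] = (spend + price * volume, vol + volume)
    return {s: spend / vol for s, (spend, vol) in totals.items()}

print(weighted_avg_price(lines))
# {'Supplier A': 9.8125, 'Supplier B': 9.0}
```

Comparable figures like these give a factual starting point for price discussions instead of anecdotal impressions.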

When building a data model, the starting point is always the business need. Understanding which decisions the model should support guides both its structure and its level of detail. It is essential to review source data quality and availability early, and then to test and validate the model iteratively to ensure it is both accurate and useful.
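The early data-quality review mentioned above can start very simply. The sketch below runs a few basic checks on raw spend records (field names and sample data are invented for illustration): missing suppliers, invalid amounts, and duplicate invoices.

```python
def validate(records):
    """Return a list of data-quality issues found in raw spend records."""
    issues = []
    seen = set()
    for i, rec in enumerate(records):
        if not rec.get("supplier"):
            issues.append(f"row {i}: missing supplier")
        if rec.get("amount") is None or rec["amount"] < 0:
            issues.append(f"row {i}: invalid amount")
        key = (rec.get("supplier"), rec.get("invoice_id"))
        if key in seen:
            issues.append(f"row {i}: duplicate invoice")
        seen.add(key)
    return issues

# Invented sample rows: one clean, one missing a supplier, one duplicate.
raw = [
    {"supplier": "Acme AB", "invoice_id": "INV-1", "amount": 1200.0},
    {"supplier": "", "invoice_id": "INV-2", "amount": 500.0},
    {"supplier": "Acme AB", "invoice_id": "INV-1", "amount": 1200.0},
]
print(validate(raw))
# ['row 1: missing supplier', 'row 2: duplicate invoice']
```

Running checks like these before modelling, and again after each iteration, keeps quality problems visible rather than silently distorting the results.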

Typical challenges

Poor data quality is a common challenge, as it directly affects the usability of the model. Another frequent issue is over‑complexity. There is often a temptation to include too much too early, which can reduce adoption and clarity.

A “simplicity first” approach tends to work best in practice – starting with a minimal, business‑driven structure and adding complexity only when it creates clear value.

One takeaway from the field

Keep data models simple, scalable, and business‑driven. Strong documentation and clear governance are just as important as the technical structure. When the model is easy to understand and trusted, it becomes a powerful enabler for better decisions and more effective transformation.

Looking ahead: the role of AI and new technology

AI and modern technologies can significantly streamline the data modelling process. They can support structural suggestions, identify relationships between datasets, and detect data quality issues early.

New tools also lower the barrier to working with data by enabling more intuitive interaction. For example, AI‑powered agents can allow users to explore and query data models using natural language, making insights more accessible across the organization and supporting broader adoption.
