
What we’ve learned about data-driven decision making

How do we ensure data is interpreted in a meaningful way to make the best possible decisions?

By Rosanna Hardwick | Published 11 June 2020

As a physicist by background, I was educated with ‘data’ and ‘evidence’ as key building blocks, and in transitioning to the world of social change, the idea of data-driven decision-making felt like home territory.

However, part of what interests me about using data to inform decisions is the inherent nuance: what data do we use, and how do we ensure it is interpreted in a meaningful way to make the best possible decisions?

A recent project with Public Health England to redevelop their Spend and Outcomes Tool (SPOT) brought these questions to light.

The tool gives local authorities in England an overview of spend and outcomes across different programmes, enabling benchmarking against peers to identify areas of significant variance for further analysis. User research revealed a key theme around how data from the tool is interpreted and used to inform decisions. This touched on a range of issues, from the transparency of input data and methodology, to the difference in framing between a ‘decision-making tool’ and a tool intended to prompt further analysis and exploration.

This blog picks out a couple of our key reflections on the nuances of using data to inform decision-making.

Interpreting data is often complex

Sometimes the intuitive interpretation doesn’t tell the whole story. For example, if local authority A spends less on a given programme than local authority B, and achieves better outcomes, is this a good thing?

You might say local authority A is getting better value for money, which would seem to be a good thing. However, there are important nuances to consider:

  • Can we be sure this isn’t simply reflecting a time difference in spend vs. outcomes? If spend has recently been cut and there’s a time lag in outcomes responding to this, we may be about to see outcomes worsen. If so, it’s vital to understand the trends over time before we make any decisions.
  • Is the data captured in exactly the same way? Sometimes metrics are interpreted differently in different areas: programmes may be allocated to different budget lines, or there may be differences in data collection and reporting. It’s important to understand whether you’re comparing like-for-like.
  • Do the areas have exactly the same level of need? We can go some way towards mitigating this by selecting comparators carefully, for example by comparing ‘statistical neighbours’ which have similar characteristics, but in general it’s important to remember that the areas themselves may differ. If local authority B has higher levels of deprivation and greater levels of need, we might reasonably expect it to both spend more and achieve ‘worse’ outcomes (in absolute terms).
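To make the last point concrete, here is a minimal sketch in Python of how a raw spend comparison can flip once you normalise for level of need. The figures and the simple need-index adjustment are entirely invented for illustration; they are not SPOT data or SPOT methodology:

```python
# Hypothetical figures for two local authorities (not real SPOT data).
# need_index: 1.0 = national-average level of need; higher = greater need.
authorities = {
    "A": {"spend_per_head": 40.0, "need_index": 0.8},  # lower deprivation
    "B": {"spend_per_head": 55.0, "need_index": 1.3},  # higher deprivation
}

def need_adjusted_spend(spend_per_head: float, need_index: float) -> float:
    """Spend per head scaled by relative need."""
    return spend_per_head / need_index

for name, la in authorities.items():
    adjusted = need_adjusted_spend(la["spend_per_head"], la["need_index"])
    print(f"LA {name}: raw spend {la['spend_per_head']:.0f}, need-adjusted {adjusted:.1f}")
```

On raw spend, A (40) looks cheaper than B (55); adjusted for need, A spends 50.0 per unit of need against B’s roughly 42.3, so the apparent ‘better value’ reverses. The point is not this particular adjustment, which is a crude stand-in, but that the comparison you make depends on what you control for.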

This is true of data analysis in almost any context: interpretation is often nuanced and inherently complex. In the world of social change, the decisions we make impact on people’s lives, and it’s vital that we take the time to understand the data we use to evidence our decisions.

To help address this in our redevelopment of the Spend and Outcomes Tool, we used the insights from our user research to identify key considerations for interpretation which needed to be built into the design of the tool, from adding functionality to view trends over time, to including guidance text that highlights key considerations around input data, methodology and interpretation.

Data isn’t just about numbers

If we talk about data-driven decisions, there’s often a temptation to think of this as decisions driven by numbers and graphs. But ‘data’ in its true sense is much broader: the Oxford English Dictionary defines data as

“facts or information, especially when examined and used to find out things or to make decisions.”

This means our data might include user research, patient / user feedback, case studies and other types of qualitative information.

Whilst user research is increasingly being recognised as a critical part of the process for digital product development and service design, there can still be a tendency to base certain decisions purely on numerical data, even where broader qualitative data could be equally valuable. There are also difficult decisions to be made if what the quantitative data tells us doesn’t seem to match the insights from qualitative data. How do we approach the process of triangulating between them, and do both types of information carry equal weight? There’s no single answer to these questions; what’s important is that we see these as active decisions to be taken, rather than leaving them to chance or individual preference.

An example of a framework that addresses this is Public Health England’s Prioritisation Framework, which draws on multi-criteria decision analysis to support local authorities in making evidence-based spending decisions across different public health programmes. The tool guides users through a process of defining decision criteria and assigning a weighting to each, gathering and analysing relevant evidence, and then triangulating findings based on the decision framework agreed at the outset. In this way, local authorities are able to come to an evidence-based decision that incorporates both numerical cost-benefit data, as well as broader considerations such as alignment with local need, contribution to reducing health inequalities, or ‘acceptability’ in relation to any political, cultural or moral considerations.
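The weighted-criteria process described above can be sketched as a short calculation. This is an illustrative sketch of generic multi-criteria decision analysis in Python, not the Prioritisation Framework’s actual method or data; the criteria names, weights and scores are all invented:

```python
# Illustrative multi-criteria decision analysis (MCDA) sketch.
# Criteria, weights and scores are invented; in the real process they
# would be agreed by the decision-makers at the outset.

# Weights agreed up front (here they sum to 1.0).
weights = {
    "cost_benefit": 0.4,
    "local_need": 0.3,
    "health_inequalities": 0.2,
    "acceptability": 0.1,
}

# Each programme scored 0-10 against each criterion after gathering evidence,
# which may be quantitative (cost-benefit data) or qualitative (acceptability).
scores = {
    "programme_x": {"cost_benefit": 8, "local_need": 5,
                    "health_inequalities": 4, "acceptability": 7},
    "programme_y": {"cost_benefit": 6, "local_need": 9,
                    "health_inequalities": 8, "acceptability": 6},
}

def weighted_score(programme_scores: dict, weights: dict) -> float:
    """Combine per-criterion scores into one figure using the agreed weights."""
    return sum(weights[c] * programme_scores[c] for c in weights)

ranking = sorted(scores, key=lambda p: weighted_score(scores[p], weights),
                 reverse=True)
for p in ranking:
    print(p, round(weighted_score(scores[p], weights), 2))
```

In this invented example, programme Y outranks programme X (7.3 vs. 6.2) despite a weaker cost-benefit score, because the agreed weights give real force to need and inequalities. That is the value of fixing the framework first: the trade-off between quantitative and qualitative evidence becomes an explicit, agreed decision rather than an ad-hoc one.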

What does this mean for data-driven decisions?

It’s important to be wary of comparisons taken at face value, whether as a developer designing a visualisation or data tool (‘am I providing the user with sufficient context to interpret the data and draw meaningful conclusions?’), or indeed as a consumer of data (‘can I be sure this graph or statistic is telling me what I think it’s telling me?’). Here, we’ve captured just a few ideas of how we might begin to approach this, but at its core, it’s about considering the ‘so what’: how do we design data tools in the context of how they will be interpreted and used to inform decisions?

At the same time, it’s important not to let the risk of misinterpretation be a reason not to use data. Sometimes the ‘correct’ interpretation is the obvious one, and even where it isn’t, we learn something valuable from the process of interpretation and triangulation. Not only does the triangulation process improve our understanding of the situation, it also prompts us to find ways to drive up data quality, and to create tools and frameworks that support users in gaining meaningful insights, which take account of the nuances inherent in interpreting data on complex systems.

Take a look at the redeveloped PHE Spend and Outcomes Tool.

If you would like to discuss this blog or our data analysis work in health and social care, please email rosanna.hardwick@socialfinance.org.uk.
