
Why charities should collect less impact data

I work in NPC’s measurement and evaluation team, so it might seem odd for me to be advocating for less impact data. But while we believe in the benefits of good impact measurement across the charity sector, we also believe in quality over quantity: data collection should be focused on what we need to know to tackle social issues, rather than on the needs of individual organisations.

To explain what I mean, it’s useful to go back to the question of why we spend our time helping charities to use evidence. Ultimately it has to be about charities having an increasingly positive impact for beneficiaries. We believe that evidence helps with this by encouraging services to be designed and delivered according to an understanding of ‘what works for whom in what circumstances’—and by helping funding go to the services that are most likely to help.

This ‘evidence utopia’ is sometimes represented by analogy to the medical sector, which has a long tradition of collecting and using evidence. I’d agree that we should be emulating the medical sector’s spirit of enquiry: the way the whole profession is encouraged to test, learn and challenge results. But the analogy breaks down in other ways: our methodologies will never provide the same levels of certainty and replicability, and we will never have the same level of funding for research.

In the charity sector we have to be more realistic about what a good evidence system could look like. As outlined in our recent paper on evidence in the criminal justice sector, we have identified six ‘ingredients’ for the effective use of evidence:

  • Services based on good theories of change that reflect the latest academic research and evidence.
  • Providers collecting and analysing the right information to monitor their performance, including access to outcomes data from official sources like the Justice Data Lab.
  • A common language of intermediate outcomes and measures across the sector, including standard approaches that can be used by organisations in different settings.
  • Commissioners and funders choosing services on the basis of evidence.
  • An open culture of publishing findings and learning from one another’s work.
  • Fewer, but higher-quality evaluations, particularly focused on innovative services that might help us learn something new.

The last of these bullet points brings us back to the contention of this blog. It bothers me that so many charities collect data to show they make a difference, while we conspicuously fail to accumulate and share this knowledge in the way the medical profession does. To my mind, this is because the incentives are wrong. Organisations feel compelled to collect impact data to validate themselves and persuade funders to keep giving them money—so they become trapped in a cycle of self-justification and pointless data collection (we call this the ‘beauty parade’).

A solution could be for organisations to split their evidence requirements into two distinct questions:

  1. What is the evidence for the thing that we do?
  2. Do we deliver the thing effectively?

To answer the first question we need fewer, higher-quality studies that are published and disseminated freely. Ideally, it’s not a question that individual charities should pose about themselves. Rather, they should see themselves as delivering a service model, and then work alongside others delivering similar models to collaboratively test their effectiveness and agree how to improve.

To answer the second question, charities need to do limited routine data collection that helps check quality and user engagement and assures funders and commissioners they are doing a good job. Note that this is not about impact; if you know the service model works you do not need to test it again and again.

Ultimately this way of thinking should lead to less impact data being collected overall, and to the data that is collected being more focused on questions that help beneficiaries rather than on the perceived needs of individual organisations.

We know a lot needs to happen for this to become a reality and our paper outlines what we think the priorities should be. What do you think would help your sector make this shift? Let us know in the comments below.

3 Comments

  1. As an analyst who’s just recently started working in the charity sector, I found this really thought-provoking.
    I’m not sure the sector can really be accused of collecting too much impact data. Too much data, yes. I’m struggling to find examples of impact evaluations that live up to the name – or impact reports that contain much impact data. While I agree with the six ingredients for the effective use of evidence, the evidence has to be good quality in the first place. In particular, that means some kind of counterfactual. It strikes me that charities could (with care) make much more use of naturally occurring experiments, given that most do not have the resources to help everyone.
    I think funders and commissioners could be more demanding as well – not just asking for an ‘evaluation plan’, but specifying the ‘currency’ in which outcomes should be measured, and being interested in effect size, scalability, and value for money.

    • Hi Ian. Thanks for your reply and glad you found the blog helpful. Absolutely agree with you about quality; there is far too much poor-quality data that doesn’t answer any interesting questions. We might differ on the value of the counterfactual. I understand the logic, but external validity is a problem. I also worry that focusing on it distracts us from more basic improvements (I wrote about this here: http://www.thinknpc.org/blog/evidence-athletics/). With you 100% on your comments on natural experiments and funders. I would like the ‘currency’ to include: how has your evaluation helped you improve, and how have you communicated this learning externally? Keep in touch!

  2. I also got involved in the charity sector about 15 years ago (international development) after a long background in business and consulting in the private and public sectors. The lack of curiosity about impact and effectiveness was something that struck me early on, but I’d agree with your contrarian challenge that people might be collecting too much data.
    Questions I’d ask would include: ‘What are you going to do with this data, and what might change as a result?’ Just as in other sectors, collecting data costs money, so it needs to be worthwhile – not just filed on the shelf or used to justify someone’s PhD! Though finance directors asking for reams of financial data don’t always accept that argument.
    That leads to another point, which is why and for whom it is being collected – and perhaps by whom. If it is being collected by a third party, as part of a process that’s external to the project or activity, for the benefit of others – be they auditors or donors, for example – it adds no value to the project or process being monitored.
    On the other hand, if it’s an integral part of the project or process, owned by those involved and used to learn what’s working and what needs improving, then it adds value. Even more so if that learning is shared proactively with others – something that is all too rare in this sector.
    These are well-established principles of quality programmes and performance improvement in other sectors. I see no reason at all why they should not apply here.
