Evidence athletics

By James Noble 30 January 2014

Evaluators like to develop ‘standards of evidence’ which rate studies in terms of quality and validity. Examples are discussed in our recent guide, and a more comprehensive review was conducted by the Alliance for Useful Evidence. While many of these are intended to be the last word on what makes good evidence, they still tend to reflect the priorities of their authors.

For example, a common theme is the importance of ‘robust counterfactual studies’, where service users are compared to non-users (who ideally have been allocated randomly to create a ‘treatment’ group and a ‘control’ group, like a drug trial). The argument is that these studies allow us to be certain of the impact that can be attributed to a project and are the best way to obtain ‘clinching evidence’, which is highly prized by funders and policymakers.
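To make the logic concrete, here is a toy sketch in Python, using entirely invented numbers, of the kind of randomised trial this argument has in mind. Because chance alone decides who receives the intervention, the two groups start out alike on average, so the difference in their final scores estimates the impact attributable to the intervention.

```python
import random
import statistics

random.seed(1)

# 200 hypothetical service users, each with a baseline wellbeing score.
baselines = [random.gauss(50, 10) for _ in range(200)]

# Random allocation: chance, not need or motivation, decides who gets
# the intervention, so the two groups are comparable at the start.
random.shuffle(baselines)
treatment, control = baselines[:100], baselines[100:]

# Purely for illustration, assume the intervention adds about 5 points,
# on top of the noise that affects everyone's scores anyway.
treatment_after = [b + 5 + random.gauss(0, 5) for b in treatment]
control_after = [b + random.gauss(0, 5) for b in control]

# With random allocation, the difference in group means is an unbiased
# estimate of the impact attributable to the intervention.
effect = statistics.mean(treatment_after) - statistics.mean(control_after)
print(f"Estimated attributable impact: {effect:.1f} points")
```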

However, running good counterfactual studies is difficult, especially for charities with few or no resources for research. To use the obvious analogy, it’s the first round of the high jump and the bar is already set at the top competitor’s personal best, while in the third round the bar moves to world record height.

Another consequence of focusing on counterfactuals is that other, more achievable, quality improvements can be overlooked.

So maybe charities need different evaluation principles, ones that acknowledge the more modest progress that can be made? A preliminary suggestion is shown below. I can’t call them ‘standards’ because they’re not really a hierarchy, but I have tried to show them in the order in which they might be tackled.

Principle 1: You describe what your projects do in terms of long-term impact, intermediate outcomes, outputs and activities. This means developing some kind of ‘project theory’ using an approach like theory of change, logic modelling or a planning triangle.

Principle 2: Your project theory is supported by a review of the relevant academic literature and other research on your client group and your intervention.

Principle 3: You collect good quality qualitative evidence from your users and stakeholders, and can show that the people you’ve spoken to are representative and have not been cherry-picked.

Principle 4: You measure the progress of service users longitudinally (ie, before and after your intervention) using a tool to record both the soft and hard outcomes you have decided are relevant to the project theory. You collect this data in a database and analyse it (a minimal sketch of such an analysis follows these principles).

Principle 5: You have considered the counterfactual for your project and compared your results to any available data that helps you understand it (a second sketch below shows the shape of this comparison). Any opportunities to conduct an actual comparison group study have been explored and pursued.

Principle 6: You publish a report of your results which impartially presents the best quality evidence you have on: a) your impact; and b) what you have learned (along with an honest description of methods).
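As promised, here is a minimal sketch of the kind of before-and-after analysis Principle 4 describes, using Python and pandas. Everything in it is an assumption made for illustration: the column names, the 1–10 ‘confidence’ scale and the tiny dataset stand in for whatever measurement tool and database you actually use.

```python
import pandas as pd

# Hypothetical longitudinal records: one row per service user per stage,
# scoring a soft outcome (eg, self-confidence on a 1-10 scale).
records = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 3],
    "stage": ["before", "after"] * 3,
    "confidence": [3, 6, 5, 5, 2, 7],
})

# Reshape so each user has a 'before' and an 'after' column, then
# compute each user's change over the course of the intervention.
wide = records.pivot(index="user_id", columns="stage", values="confidence")
wide["change"] = wide["after"] - wide["before"]

print(f"Mean change: {wide['change'].mean():.1f} points")
print(f"Proportion who improved: {(wide['change'] > 0).mean():.0%}")
```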
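And a second sketch for Principle 5, continuing directly from the one above. The benchmark figure is invented: in practice it might come from published research or from a comparison group you were able to recruit. The subtraction shows the shape of the reasoning rather than a rigorous method.

```python
# Suppose the literature suggests that similar people improve by about
# 0.8 points over the same period without any intervention: the change
# you would expect anyway (the counterfactual). Invented for illustration.
expected_anyway = 0.8

observed = wide["change"].mean()  # from the Principle 4 sketch above
beyond_counterfactual = observed - expected_anyway

print(f"Observed mean change: {observed:.1f}")
print(f"Expected without the project: {expected_anyway:.1f}")
print(f"Change beyond the counterfactual: {beyond_counterfactual:.1f}")
```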

We’ve been helping a number of charities in criminal justice work through these principles as part of our Improving your evidence project with Clinks. If you’re unfamiliar with the project or would like to find out more, please visit our webpage and do get in touch; we’re keen to hear your views.
