Evidence: in the eye of the beholder?

By James Noble 14 June 2013 4 minute read

This week we held the first of our ‘Improving Evidence’ conferences for criminal justice charities. We heard from many delegates engaged with the idea of testing services against evidence, but uncertain about the best way to go about it, or what to prioritise.

With this in mind, we asked delegates to discuss the question: ‘what makes good evidence?’. On the face of it, the answer is simple: good evidence should tell us what works to reduce reoffending. But reflecting further on the discussions, it takes on a slightly different complexion depending on who’s doing the talking.

Firstly, there’s the commissioner perspective. Here, good evidence offers unambiguous assurance that proposed services will deliver the desired outcomes (at a certain price). Following commissioning, it’s also about having an up-to-date and accurate picture of how much is being delivered and to what quality.

Secondly, there’s the researcher perspective. This tends to focus on the validity of the methods used and is reflected in evaluation standards such as the Maryland Scale and the Project Oracle standards. In these, the high point is achieved by research designs with counterfactuals or control groups; from those derived statistically (such as the Justice Data Lab), through to genuine randomisation and repeated trials in different settings. Thus, the question of what makes good evidence is answered by asserting the need for valid and reliable methodologies that raise the standard of proof.

And there’s also what we might loosely call the practitioner perspective. Leaving aside what they need to do to keep researchers and commissioners happy, practitioners are engaged with ‘how things work’ as well as ‘what works’. For example, while robust evaluations may indicate that victim-offender conferencing reduces reoffending, how is this best achieved, for whom and under what circumstances? Practitioners are also interested in what researchers refer to as ‘fidelity’, meaning how well services are delivered and what they need to do to maximise their effectiveness. Hence from the practitioner perspective, the methodological rigour of evidence is perhaps less important than how accessible and applicable it is.

In an ideal system these three perspectives should be complementary. But making such distinctions helps us understand some of the tensions that tend to emerge, and also highlights how an abstract idea, like evidence, takes on a different meaning depending on how it’s being used. Developing an understanding of the different perspectives should therefore help us towards the common language we aspire to, so we can better determine what evidence we actually need to collect (and what we don’t).

This is where the theory of change approach comes into its own. As we have described before, a theory of change shows a charity’s path from needs to activities to outcomes to impact, and its value is recognised by all three perspectives.

For commissioners, it clearly articulates what potential providers are trying to achieve and how success will be monitored. For researchers, it’s an important element of good practice in all evaluations. For example, the very first step in the Project Oracle standards is to have a “project model” that shows long-term goals, measurable outcomes and a description of interventions/activities – which a theory of change model provides. Finally, for practitioners it helps staff to think through what they do, and to describe which aspects of how they deliver a service really make an impact.

Also, developing a theory of change is relatively quick and inexpensive. So, as part of our Improving Evidence project we will prepare step-by-step guidance, and will be working with a small number of charities to create case studies. In the meantime, if you would like to come along to one of our remaining three conferences and join the debate, please sign up here.