
Picking out the signal from the noise

By David Pritchard, 9 February 2012

Evaluating social programmes can resemble a dark art: the terms and methods can be obscure; evaluators avoid making definitive claims; evaluations often appear to support contradictory conclusions; and the strongest recommendation is usually the need for more research! All this can give the impression that researchers are engaged in a mysterious set of self-serving practices that throw up more questions than answers, and so make the case for yet more evaluations. As a result, evaluation reports are all too often read once and then put on the shelf to collect dust.

The tragedy is that a well-conducted evaluation can add a lot to knowledge and understanding of what works and what does not. But it is often difficult to tease this out. The Ministry of Justice’s (MoJ) evaluations of HM Prison Service’s Enhanced Thinking Skills (ETS) programme provide a case in point.

Based on promising research from other countries, ETS was a programme of exercises, assignments, role playing and discussions developed by the Prison Service to help change how offenders think about their criminal behaviour. It was accredited for use in custody in 1996.

The MoJ conducted evaluations of the programme in 2002, two in 2003 (study one, study two), 2006, 2009, and 2010. Most of these evaluations used a robust “matched pair” approach that compared the reconviction rates of programme participants against similar offenders who did not participate in the programme. The 2009 study used a randomised controlled trial, the “gold standard” of evaluation.
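
To make the “matched pair” idea concrete, here is a minimal sketch in Python. The offenders, risk scores, and matching rule are all invented for illustration; this is not the MoJ’s actual method or data. Each participant is paired with the non-participant whose risk score is closest, and the reconviction rates of the two matched groups are then compared.

    # Toy matched-pair comparison with invented data (not the MoJ's method or figures).
    participants = [
        {"risk": 0.82, "reconvicted": True},
        {"risk": 0.41, "reconvicted": False},
        {"risk": 0.64, "reconvicted": False},
    ]
    non_participants = [
        {"risk": 0.80, "reconvicted": True},
        {"risk": 0.39, "reconvicted": True},
        {"risk": 0.66, "reconvicted": True},
        {"risk": 0.95, "reconvicted": True},
    ]

    # Pair each participant with the unmatched non-participant whose
    # risk score is closest (a greedy nearest-neighbour match).
    pairs = []
    available = list(non_participants)
    for person in participants:
        match = min(available, key=lambda c: abs(c["risk"] - person["risk"]))
        available.remove(match)
        pairs.append((person, match))

    # Compare reconviction rates across the two matched groups.
    treated = sum(p["reconvicted"] for p, _ in pairs) / len(pairs)
    control = sum(c["reconvicted"] for _, c in pairs) / len(pairs)
    print(f"Participants: {treated:.0%}, matched comparison group: {control:.0%}")

Real evaluations match on many more variables (age, offence type, criminal history) and use statistical techniques such as propensity scores, but the underlying logic is the same: compare like with like.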

At first glance, the results seem inconclusive:

  • Consistent with international research, the first evaluation found that, two years on, male offenders who had participated in the programme had lower reconviction rates than those who had not.
  • Two evaluations (one published in 2003 and the other in 2006) found no difference in the two-year reoffending rates of people who participated in the programme compared to those who did not.
  • The 2009 evaluation found that adult male offenders who completed the programme became less impulsive, and hence less likely to reoffend, than those who did not complete the programme.
  • The 2010 study found a reduction in one-year reconviction rates, as did one of the 2003 studies, but only when programme dropouts were excluded.

Based on this summary, these results do not seem very helpful in answering the main question of whether ETS works. But there are answers in the details. For example:

  • The 2010 study should carry more weight than the others: it was the most robust of the matched-pair evaluations because it controlled for factors influencing reoffending that the previous studies had missed.
  • The 2010 study confirmed that proper targeting makes a difference. The programme was found to be more effective for offenders identified as most suitable for it than for participants deemed less suitable.
  • The 2003 study found that offenders who dropped out of the programme had higher reconviction rates than people who did not participate in the programme at all. It appears that, for some offenders (those least suited to ETS), the programme may have increased the risk of reoffending while decreasing it for others, at least in the short term.
  • The 2006 study focused on female offenders, which most likely explains part of the difference in results: ETS had originally been developed predominantly by men, for men.

Good evaluations provide important and useful findings even if they do not give an unambiguous answer to the central question: does the programme or service work? There is rarely a simple answer, because different things work for different people. This may be frustrating, but it should not be surprising.

Few charities can match the evaluation and research resources of the MoJ, but all charities can collect some data to help identify what works, when, and for whom. As the ETS case study shows, this cannot be done as a one-off exercise. Evaluation is not so much a dark art as a process of building your knowledge base brick by brick. The payoff is continuous improvement and greater impact. The alternative is not being sure whether you make much difference or, potentially, cause harm.
