Nice work if you can get it

By Dan Corry 24 February 2012

This week’s news has been full of stories questioning the operation and even morality of social policy initiatives, from the Work Programme to Nick Clegg’s ‘pay-per-NEET’ scheme. Is it OK to ‘force’ unemployed people onto programmes if it is believed to be for their own good? Do Payment by Results contracts inevitably incentivise fraud?

But just as important is the issue of how we know what social policy initiatives actually work. A few weeks ago Cabinet Secretary Sir Jeremy Heywood floated the idea of creating an institution to research and highlight interventions that work. Since then there have been active discussions within government about how to take these ideas forward.

The thinking behind the idea is that we would make more progress if there was a body that could tell us which interventions are effective—to solve issues like reoffending or drug addiction—and which are not. If we had a kitemark for good interventions wouldn’t that help ensure that investment was directed to the successful ones and away from the unproven or, worse, from approaches already proven to be ineffective? The whole thing is often summed up as a social policy equivalent to the medical advisory body, Nice.

But we need to be clear about the issues that stand in the way of making this a success, and avoid excessive optimism and hype.

First, hard evidence of the ‘success’ of social interventions is difficult to come by. That is true even where the aim is clear (like getting young people back to work); there is good, accessible data; and academics have been investigating the area for years with robust methodologies. It is even harder when the outcome is hard to define, the data non-existent or poor, and there has been little thorough research on the topic. We may be able to work out which interventions are not worth pursuing any more, but it will always be difficult to rank them precisely. In that sense the analogy with Nice is misleading, as the health sector lends itself more easily to rigorous and repeatable testing and evaluation.

In any case, designing a social intervention that works at all times in all places may be searching for the impossible. Our experience at NPC suggests that, for example, if you want to help children with mental health issues the way organisations need to work is different from school to school and from year to year depending on local circumstance, the economic climate and so on. An intervention that works well in Minnesota or in Kampala in one decade won’t necessarily work well in Luton, the Gorbals or Great Yarmouth in another. So maybe the kitemark we want is about good organisations that are learning and adapting to evidence rather than backing a particular intervention.

In addition, the ‘only commission what works’ approach feels at odds with the other great current public policy trend, towards Payment by Results. Here one does not care how the provider produces the results; indeed, one is looking for innovation and new ideas, surely the antithesis of mandating certain approaches. Interestingly, the government is currently going big on Payment by Results in the Work Programme, and the inability to handle the risk and cash flow involved has meant that demonstrably effective voluntary sector providers have been badly squeezed in a way I suspect nobody intended. Would a kitemark have helped them there?

There are lots of complications with the kitemark idea. But the basic principle that it is helpful for us all to share knowledge on which interventions work, and to push commissioners towards using those rather than ones we know do not, is one that should be pursued. Perhaps if government helped create such a body, donations to charities might also start to be based a bit more on the outcomes they achieve; then we would really see some big improvements in the use of increasingly tight resources.

A version of this blog appeared recently in the local government weekly The MJ.
