Last night I went to the launch of a 3-year evaluation of St Giles Trust, a UK charity that runs peer mentoring schemes for prisoners, at the new Supreme Court in Westminster. Impressive surroundings, impressive guests (it’s not every day Sir Gus O’Donnell holds the lift doors open for you) and compelling research.
The report, from the Institute for Criminal Policy Research, evaluates St Giles’ peer advice model, which trains prisoners to NVQ level 3 in Advice and Guidance (equivalent to two A-Levels). Peer advisors then identify fellow prisoners who are homeless or face resettlement problems and help to find housing so they have somewhere to go on release. The report explores how being a peer advisor not only boosted mentors’ own confidence and employability but also created a ‘multiplier effect’, as their advice helped large numbers of prisoners to find homes on release.
The evaluation is strictly qualitative, and conversation inevitably strayed into the old qualitative versus quantitative debate. It always amazes me how this dichotomy continues to plague the charity sector. The qual camp stress the importance of theory and are suspicious that numbers are a substitute for thoughtful consideration. The quant camp want to know ‘how much’ and ‘how many’ and are suspicious that case studies are cherry-picked to show charities in a positive light.
The problem is, in a way, they are both right. If done poorly, any research can be biased, simplistic and not worth the paper the research application was written on.
But at their best, qualitative and quantitative research can combine to become far more than the sum of their parts. The report on St Giles presented last night will help the charity understand why what it does works and spot areas for improvement. Any quantitative work could draw on this study to identify what is important to measure and how to make sure any data collected are robust and high quality. Such research could help to gauge the magnitude of St Giles’ impact and help the charity to track its performance over time.
At NPC we believe that it is impossible ever to know with certainty what impact a charity is having, no matter what approach you take: the world moves too fast and people are far too unpredictable for that. But this is not a reason to give up; it is a reason to be thoughtful and to consider as many sources of evidence as possible. That is why we believe that a charity’s ‘results culture’ (how it uses and acts on evidence) can be as important as the results themselves. What this research tells St Giles about how it can improve, and what it tells other charities interested in the peer advice model, is far more important than simply showing that its peer advice model works.