Einstein once said, “Everything should be made as simple as possible, but not simpler.” Never is this more true than of charity research: when trying to measure their impact, charities should start simple, and here’s why.
Happily, more and more charities are wising up to the huge potential that measurement can bring. Particularly in the current economic climate, charities realise that good research not only singles them out amongst stiff competition, but can also help direct services more efficiently.
This is fantastic news, yet with this new enthusiasm come problems. Many smaller charities lack expertise in research methods and handling data, and cannot afford to employ someone who does.
As a result there is some scepticism of charity research amongst academics. Many have concerns that the methods used can be flawed, and that charities will only report positive findings that support their cause, ignoring those that don’t.
Yet this is no reason for charities not to measure. I have been working recently with Guide Dogs for the Blind and have been impressed both with their enthusiasm to conduct high-quality research and their willingness to accept that you don’t always get the results you want.
But it also made me realise that sometimes simple is beautiful. The more ambitious the research design, the more that can go wrong.
Guide Dogs used NPC’s Well-being Measure to evaluate their emotional well-being programme with visually impaired children. The research design was ambitious, involving a control group and two follow-ups. The results indicate that the programme improves children’s self-esteem and emotional well-being. However, because the control group was not properly randomised, and the timing of the questionnaires was not standardised as intended, the results were difficult to interpret.
So on that note I have a few tips for charities having a first crack at research and evaluation:
1) Start with a small number of outcomes: Think hard about what your two or three key outcomes are. Don’t try to measure everything – it is better to measure one or two outcomes well, than many badly.
2) Measure what is easy first: I am not saying charities should only measure what is easy, but it is a good place to start. For example, if you are working in a school you could use data the school already collects on attendance.
3) Don’t worry about control groups: Control groups need to be either randomised or well-matched to provide a good comparison, so only use one if you are going to do it properly. Instead, measure the people you work with before and after: if you consistently see the same change, you can build up a good argument that this change is due to your intervention.
4) Where possible use existing tools: Especially when measuring soft outcomes like well-being, use robust tools like NPC’s Well-being Measure which are recognised by academics, rather than designing your own.
5) Don’t worry too much about long-term follow-up: I feel a bit bad saying this because long-term follow-up is important for outcomes like reduced offending. Yet it takes time, and the longer after an intervention you measure, the harder it is to disentangle your impact from other environmental factors (this is especially true of well-being outcomes). Start by trying to capture your impact over a short period.
6) Invest in training: Even a day or two of training in research methods can be very useful. NPC can provide bespoke training, and we recommend courses run by CES, and CASS for more advanced research methods.
Finally, it is essential to be honest about the results you get. Research is not just about proving what does work, but also what doesn’t, and only through frank and open dissemination of results will charity research gain credibility.
I will end with another quote from Einstein: “Any intelligent fool can make things bigger, more complex, and more violent. It takes a touch of genius – and a lot of courage – to move in the opposite direction.”