Getting the right evidence

By Eibhlin Ni Ogain, 1 June 2011

Yesterday, I was discussing the topic of evidence with a colleague. An unsurprising choice of topic, given that one of the things we do at NPC is to look at charitable interventions and find out which ones give you the most bang for your buck – or the most impact for your money. We do this by trying to understand ‘what works’ in an entirely rational, empirical and analytical way.

This may sound obvious: in science and business, finding solutions to complex problems usually involves empirically rigorous methods of investigation. The Greeks thought the atom was indivisible, but in the 20th century new theories and experiments opened it up (and helped create nuclear power). Medical analysis in the 18th and 19th centuries often required cutting open the body; today, magnetic resonance imaging provides the clearest and most accurate pictures yet of the inside of the human body. In business, Henry Ford produced 1 million cars in 1920, all black, all Model Ts; today, consumer marketing firms like Capital One can run thousands of new product ‘experiments’ every month, testing what consumers like best.

These developments in business and science are the result of investing the time and money to investigate what works and what doesn’t. Imagine a world where we applied the same rigorous approach to solving social problems. How many lives would be changed for the better, how much misery would be averted? For this to happen, we need to get a lot more sophisticated about good data and robust analysis. We need to take the best bits from science and business and build similar structures in the charity world. We need to invest in analysing and researching interventions, unearthing those with the best results.

Unfortunately, we are a long way off the rigour of the scientific or business world. Charities are organic developments: they often grow from within a community and respond to specific needs in unique and creative ways. They do not (for the most part) develop out of years of empirical research. This is both their disadvantage and their advantage. The lack of a body of research to assess their efficacy means that we have very little idea of the results that many charities achieve. On the other hand, charities’ organic, home-grown nature means that they often come up with innovative and effective ways of tackling social problems, well suited to their community of need.

But without reliable results, how do we know which interventions to back? Where should we put our money?

This is why the idea of a “shared measurement” agenda is so important. Shared measures, even necessarily simple ones, would not be completely rigorous, but they would at least allow us to compare the results of different interventions. This foundation could then be built up, gradually, to the level of empirical sophistication found in other disciplines.

We need to recognise where we are with evidence in the charity sector and set our sights on what is achievable. And right now, simply being able to compare different charitable interventions in the same sector would be a watershed.
