
Everything in moderation—including impact measurement

By Peter Harrison-Evans 25 August 2016

The idea of balance has guided great thinkers for centuries: from Aristotle to Buddha, to modern-day muse and 1980s comedian Bobby Davro, who once said, ‘When work is going well, your home life struggles and vice versa. I strive for balance in my life, though’. Wise words.

At NPC we also strive for balance, but one area in which our views may seem slightly off-kilter is measurement and evaluation. We do tend to bang on about it quite a bit—you may have noticed—and we have been called, among other things, ‘impact puritans’.

As the ‘impact agenda’ gathers momentum, charities may feel that they need to be collecting data on everything. But while collecting data on outcomes and producing evidence to a higher standard is important, there can be a misalignment between this imperative and the realities of squeezed budgets, limited in-house methodological expertise, and complex settings where isolated impacts aren’t easily discernible. It’s true that we place a great deal of value on using evidence and evaluation to adapt services and improve beneficiary outcomes. But within that, our view is more nuanced than the ‘impact puritan’ label might suggest.

Last year my colleague James Noble wrote about why charities should collect less impact data. This quickly became our most-read blog of 2015. It made the point that many charities should be concentrating on more targeted data collection: collection focused on asking questions that help beneficiaries, rather than on always trying to prove and re-prove their impact.

Since then, we’ve tried to take this thinking forward and spell out more clearly what we think a proportionate approach to evaluation looks like: one that balances the competing tensions of an organisation’s evidence needs, available resources, and the context in which it is operating. Our recent paper Balancing act: A guide to proportionate evaluation, which was commissioned by the Money Advice Service, sets out four steps that charities should take to develop a proportionate approach:

  1. Start with theory: Begin by thinking about the theory behind your programme—your ideas or assumptions about why it works. Map your theory of change and use this as a base to guide evaluative questions. This step alone makes data collection much more efficient.
  2. Look for relevant external evidence: This doesn’t need to be a systematic review or full literature review (although you can draw on these where others have done them). It’s about having a good scan of the available evidence, without cherry-picking, and mapping what you find to your theory of change. The weight of existing evidence will help shape the kinds of questions you need to ask.
  3. Balance your evidence needs with available resources: On the basis of the previous steps, consider what you and your stakeholders need to know. Then assess your capacity and capabilities to answer these questions. Taking shared approaches to measurement is one way that organisations can pool resources to answer questions that would be more costly to evaluate alone.
  4. Tailor evaluation to the nature of your intervention: This is about understanding the inherent ‘measurability’ of the outcomes you’re looking to support. ‘Simple’ interventions (where there is a clear relationship between a single intervention and beneficiary outcomes) may lend themselves well to some form of comparison group study. However, in complex settings you will need to draw on several data sources.

By thinking through each of the points above, charities can begin to develop proportionate approaches to evaluation that are both effective and sustainable. Rather than racing to the extremes of the impact puritans, these organisations can find their inner evaluative balance.

As Bobby Davro would say: μηδὲν ἄγαν—‘nothing in excess’. Or was that the Greeks?

Why not take a look at the paper for a more detailed explanation of these four points, as well as example case studies?
