
Standards for social impact measurement

By Matt Robinson 5 August 2015

There has been a recent flurry of discussions on impact measurement and social investment. Oranges and Lemons, a review of the impact measurement practices of leading UK social investment intermediaries, was released in June and revealed a diverse spread of approaches and methodologies for reporting social impact. Then the first real analysis of the performance of FutureBuilders was published; one of the major recommendations of the report was that social impact needs to be tracked alongside financial impact from the outset.

In July, the G7 Social Impact Taskforce rolled (back) into town. Alnoor Ebrahim, Associate Professor at the Social Enterprise Initiative at Harvard, gave one of the sharpest presentations detailing the varied state of impact measurement among global impact investors. And then the OECD—masters of all things to be organised and classified—hosted an expert meeting focused on improving impact data collection, following their 2014 report Building the Evidence Base.

What have we learnt from all these meetings, reports and the jaw-jaw in general?

On the one hand, social impact investing activity around the world seems to be genuinely mushrooming—two years ago events like G7 plenaries were mainly about the UK, and to an extent the US, outlining their developments and the rest of the world listening. Now Portugal, Japan, Brazil and several other countries are reporting a rapid roll-out of impact investment deals and even plans for more ‘wholesalers’ along the lines of Big Society Capital.

On the other hand, we still know very little about what impact is actually being created by social impact investing as a whole. Alnoor Ebrahim identified a paradox: most impact investors seem satisfied with the impact they are having, but very few impact investment funds report on social outcomes (as opposed to inputs or outputs). More prosaically, as one well-known charity CEO recently suggested to me, ‘most impact investing impact reports are a celebration not an analysis’.

This all matters because, one day, as has now happened to a degree in microfinance, someone will wake up and say ‘the emperor has no clothes’ about the impact this growing industry is creating. So why exactly might the emperor be, if not fully naked, at least somewhat scantily clad?

  1. We cannot compare outcomes across different impact investments.

By this I don’t mean ranking education versus health outcomes. That’s for moral philosophers, elected politicians (or SROI-practitioners) to attempt. But it is surely a bit daft if two different interventions targeted at, for example, improving educational attainment of children at risk of becoming Not in Education, Employment or Training (NEET), do not track at least some of the same outcomes. Or likewise for two interventions targeting re-enablement amongst older people discharged from hospital. Or two interventions targeting reduction in obesity among teenagers.

‘But how can we agree on the right outcomes to measure?’ is the common retort. Well, others outside the social impact investment world have managed it. Education economists have used standardised tests to compare performance between different educational systems (eg, the OECD’s PISA tests) for some years. Health economists have long used Quality-Adjusted Life Years, or QALYs, to benchmark the effectiveness of health treatments, and the closely related disability-adjusted life year (DALY) is used to compare the burden of disease across populations. Why don’t social impact investors just look to these metrics when appropriate?
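To make that comparison concrete, here is a minimal sketch in Python of the kind of arithmetic a QALY-style metric enables; the interventions, costs and quality-of-life weights are entirely invented for illustration and not drawn from any real evaluation.

```python
# Illustrative only: a QALY-style comparison of two hypothetical interventions.
# A QALY weights each year of life affected by a quality-of-life score
# (0 = worst imaginable health, 1 = full health).

def qalys_gained(years, quality_with, quality_without):
    """QALYs gained = years affected x improvement in the quality weight."""
    return years * (quality_with - quality_without)

# name: (cost per person in GBP, years affected, quality with, quality without)
interventions = {
    "hypothetical treatment A": (8_000, 10, 0.85, 0.60),
    "hypothetical treatment B": (2_500, 5, 0.75, 0.65),
}

for name, (cost, years, q_with, q_without) in interventions.items():
    gained = qalys_gained(years, q_with, q_without)
    print(f"{name}: {gained:.2f} QALYs gained, £{cost / gained:,.0f} per QALY")
```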

The reluctance of impact investors to require at least some standard metrics is understandable. But without any solid basis for comparing impact across investments, it will be difficult to accurately channel capital towards the most impactful interventions. And if we really are an industry driven primarily by achieving impact, this matters.

  2. We don’t have good tools for determining how ‘vulnerable’ or ‘deprived’ a person or community is.

Helping such groups is of course a primary objective of most social impact investors. But what is meant by this? One measure of vulnerability or deprivation is of course just an absolute or relative absence of a specific outcome—like the absence of good health. In which case if you solve my first point, you solve my second point. But with the term ‘deprivation’ most impact investors imply a contextual socio-economic type of disadvantage—as well as poor or missing specific outcomes. So not just the absence of good health, but the absence of good health for someone with low income and/or poor educational qualifications.

But how do social impact investors define deprivation? Do we mean people with low income, and if so what measure of income do we use (the Institute for Fiscal Studies uses net household income, ‘equivalised’ to take account of differences in household size and composition)? Do we mean those living in relative poverty (in the UK, that is living in a household with a net income of less than 60% of median household income)? But relative poverty can be ‘fixed’ by reducing the income of those not in poverty—which doesn’t seem very desirable either—so perhaps we mean absolute poverty, or in other words households that earn less than a fixed level? But then what about inequality? What about material deprivation such as fuel poverty? What about non-consumption measures of deprivation, including a myriad of ways of accounting for missed life chances and unlucky life events? I’ve never heard impact investors be very precise on any of this when they talk about deprivation.
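To show why the precise definition matters, here is a minimal sketch of the relative-poverty arithmetic described above, assuming the modified OECD equivalence scale (1.0 for the first adult, 0.5 for each additional adult, 0.3 for each child under 14); the household figures are invented, and official statistics use rescaled weights and whole-population survey data rather than a handful of example households.

```python
# Illustrative only: equivalised income and a 60%-of-median relative poverty line.
from statistics import median

def equivalise(net_income, adults, children):
    """Adjust household income using the modified OECD scale:
    1.0 for the first adult, 0.5 per additional adult, 0.3 per child under 14."""
    factor = 1.0 + 0.5 * (adults - 1) + 0.3 * children
    return net_income / factor

# Hypothetical households: (annual net income in GBP, adults, children under 14)
households = [(18_000, 1, 0), (26_000, 2, 2), (35_000, 2, 0), (52_000, 2, 1)]

equivalised = [equivalise(*h) for h in households]
poverty_line = 0.6 * median(equivalised)

for h, income in zip(households, equivalised):
    status = "below" if income < poverty_line else "above"
    print(f"{h}: equivalised £{income:,.0f} ({status} the £{poverty_line:,.0f} line)")
```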

Maybe it is easier to understand a deprived community, rather than a deprived person or household. Here there is a handy index: the Index of Multiple Deprivation, or IMD. This does crop up in many impact investment policies and impact reports, particularly around community finance. But the IMD is quite flawed too: it is a relative ranking rather than an absolute measure, it is based on very small neighbourhood areas, and because it measures concentration it is better at picking up urban deprivation than the more dispersed rural kind.

In the face of such complexity, perhaps it is not surprising that impact investors shy away from articulating what precisely they mean by deprivation, or reach for simple indices. But again, one day, someone will want to know exactly what we mean when claiming impact for deprived individuals or places, and we don’t have many answers yet.

  3. There is insufficient attention given to how ‘robust’ a claimed impact is.

Even if we are clear on what we are aiming to achieve (point 1) and for whom (point 2), the robustness of any claim about impact performance matters a lot. With what degree of confidence do we claim an impact, and do we know what would have happened without the intervention? It is a bit of a dirty little secret, for example, that several of the Social Impact Bonds currently running in the UK do not have a real counterfactual to compare claimed results against. I am not arguing for one moment that all impact investments should look to establish ‘Rolls-Royce’ evaluations, generating statistically valid counterfactuals via, for example, randomised controlled trials. This would clearly be inappropriate for most delivery bodies and beneficiaries. I am simply suggesting we get clearer on the level of robustness with which we claim any positive impact.
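As a rough illustration of why the counterfactual matters, the sketch below compares a naive before-and-after claim with a simple comparison-group (difference-in-differences) estimate; every number is invented, and this is only one of many possible evaluation designs.

```python
# Illustrative only: the same programme data, two levels of robustness in the claim.
from statistics import mean

# Hypothetical outcome scores (eg, a reoffending risk score, lower is better)
treated_before = [62, 58, 70, 65, 61]
treated_after = [50, 47, 59, 55, 52]
comparison_before = [63, 60, 68, 66, 59]
comparison_after = [57, 54, 63, 60, 53]   # the untreated group also improved

# Claim 1: naive before/after change in the treated group only
naive_effect = mean(treated_after) - mean(treated_before)

# Claim 2: difference-in-differences against the comparison group
comparison_change = mean(comparison_after) - mean(comparison_before)
did_effect = naive_effect - comparison_change

print(f"Naive before/after effect: {naive_effect:+.1f}")
print(f"Change in the comparison group: {comparison_change:+.1f}")
print(f"Difference-in-differences effect: {did_effect:+.1f}")
```

Without the comparison group, the claimed effect in this toy example looks roughly twice as large as it does once the general improvement is netted off.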

So how do we get from where we are today to a better place, before someone shouts ‘you’re not wearing any clothes’? It is important to state at this point that I don’t think social impact investors as a whole are particularly behind most public agencies or grant-makers in measuring impact. And public agencies and grant-makers have far greater pro-social resources at their disposal, and have been doing this for much longer. But social impact investors often make a great deal of their rigour and robustness, and of their need to be driven by data, so arguably we need to walk the talk more convincingly than anyone else.

I think we need a rapid evolution of standards around impact measurement, and then some prescription around their use.

First, we need an established standard menu of the major outcome metrics that are relevant to social impact investing in the UK. The Global Impact Investing Network’s IRIS taxonomy is something of a standard and its use is gaining momentum, but it is mainly focused on developing-country markets, and on inputs and outputs rather than outcomes. In the UK, there has been plenty of work on outcomes (eg, Big Society Capital’s Outcomes Matrix or the Inspiring Impact shared measurement collaboration, which includes some standard outcomes in employment), but so far there is less commonality of use.

Second, a debate needs to emerge about what vulnerability or deprivation means for impact investors. There are no easy answers here, but we should at least recognise that some public agencies are probably ahead of impact investors in knowing the distributional impact of their interventions across different socio-economic groups (even if they care less about the results).

Third, we need to be more transparent about the robustness of claimed impact. Not actually more robust (yet), just more transparent. Here, more impact investors and the interventions they finance could start using NESTA’s standards of evidence, which are simple and intuitive.

This covers the ‘standards’ bit. What about the ‘prescription’ bit? Once fit-for-purpose standards of impact measurement emerge, major investors—including wholesalers like Big Society Capital—could gently nudge users of capital to start using such standards. This could be through a more coercive comply-or-explain approach. Or it could be, yes really, positively incentivised by offering a small discount on the cost of capital to those who adopt recognised impact measurement standards. This may seem radical, but the difference will be between making a financial return plus hoping you’ve had impact, and making a teeny bit less of a return plus knowing you’ve had impact. And it could help social impact investment get better dressed for the future.
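To give a sense of scale, here is a minimal sketch of that trade-off; the loan size, term and interest rates are invented purely for illustration and imply nothing about any real pricing policy.

```python
# Illustrative only: the cost to an investor of a small discount on the cost of
# capital, offered to borrowers who adopt recognised impact measurement standards.

def simple_interest(principal, rate, years):
    """Total interest over the term, using simple annual interest for clarity."""
    return principal * rate * years

principal = 1_000_000      # hypothetical loan to a social venture, in GBP
term_years = 5
standard_rate = 0.060      # hypothetical standard rate
discounted_rate = 0.055    # hypothetical rate with the measurement discount

forgone = (simple_interest(principal, standard_rate, term_years)
           - simple_interest(principal, discounted_rate, term_years))

print(f"Interest forgone by the investor over {term_years} years: £{forgone:,.0f}")
```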
