Allegedly there’s more than one way to skin a cat.
I agree with the general point—that problems usually have more than one solution—but surely some solutions are easier to carry out, or more humane, than others? In any context, does it not make sense to check which model works well and copy that, before experimenting to find a new solution?
It feels like almost everyone is on board with the idea that impact measurement is A GOOD THING, but I’m wondering if it’s time to think again about why this is so.
Charities often start with their existing model and consider how to prove they’re making a difference. Sure, it’s about understanding, learning, and improving, but ultimately it’s about reaching a point where the organisation can say that what it does makes a difference. That’s great, as far as it goes, and infinitely preferable to not bothering at all.
But we rarely see organisations doing the much scarier and more difficult thing of starting again with a blank sheet of paper and looking outside their own organisation at how other people are working to solve a similar problem.
This is an issue close to NPC’s heart—because what is the point of impact measurement if not to add to the collective understanding of what works in achieving social change? It’s only by building on this accumulated body of knowledge that we can make better decisions about what to do and what to fund.
I’ve spent time over the past year talking to commissioners for our report on public sector commissioning for the arts and cultural sector. I was struck by their frustration with organisations focused on selling the merits of their unique approach, rather than building on those which have worked elsewhere. Interventions need to be carefully adapted to different contexts, geographies, and beneficiary types, but the default should be beginning with what’s proven to be effective.
So why are we so hesitant to “lift and shift”?
- Is it simple human nature? There are well-known human reactions that work against sharing good practice; for example, Not Invented Here syndrome—an institutional culture that avoids adopting approaches that originated externally. Another is the IKEA effect, whereby people place a higher value on things they have created themselves. A colleague was recently staggered to hear that charities worry another charity might resent them for copying its successful model.
- Is it about funder dynamics? While commissioners may prefer a proven approach, many funders still want to support new, innovative ideas, and charities respond to this. That’s fine where we really don’t know what works, but we also need funders who are willing to fund established, effective interventions in the long term.
- Or is it simply too difficult to find information about what works? There is a growing emphasis on collating information on effective approaches, for example, through the What Works centres, shared measurement approaches, and projects to make charity information more accessible and usable. But the evidence base is far from comprehensive, and it’s not always obvious how to apply existing insights to your situation.
Like most things, the answer is a mixture of all three. So while NPC and others work hard on the third, we can’t ignore the other factors that inhibit learning from one another—because if we don’t address them, the impact measurement movement is in danger of becoming increasingly self-serving, and of missing the opportunity to drive social change.
Skinning a cat is a skill I hope I never have to learn—but if circumstances required it, you can bet I’d seek out someone with experience to show me how it’s done. Because if the idea is a good one, I’m OK with being a copycat.