
Evaluation is tough at the best of times, but the practical barriers of social distancing and the financial pressures of economic slowdown may have significant implications for what we prioritise and how we go about it.

We have been thinking about this under three broad headings: purpose, methods, and ethics.


Purpose: Why do we evaluate?

If you’re an evaluator, you have three big tasks right now:

  1. To understand changing needs: Most of our secondary data on people’s needs predates Covid-19, although it remains unclear how much this matters. Frontline charities are telling us that coronavirus has exacerbated existing social problems and inequalities, with some groups particularly affected: for example, people who live in overcrowded homes, people who don’t speak English, and people who have lost their jobs. We need to consider whether the balance of need has changed and to treat our existing data sources with caution.

A partial solution is for everyone to share what they know about who seems worse affected, particularly organisations who routinely collect data on needs. To this end, NPC’s Covid-19 dashboard now includes data on needs from two charities (Turn2us and Buttle UK) and we are hoping more will follow.

  2. To help services adapt: Coronavirus has forced most charitable services and campaigns to change how they work, sometimes dramatically. When decisions are taken urgently, things can go either way for evaluators. We might be ignored, to save time or because it’s thought we don’t offer anything useful. On the other hand, we could play a key role in ensuring decisions are as evidence-based as possible. We are hearing examples of both. Most positively, some colleagues have described how the situation has galvanised decision makers to engage with research and evaluation data for whatever it can tell them.

To be influential, evaluators need to work creatively at bringing different sources of data together and look for clues and insights rather than proof. Also, as Michael Quinn Patton argues in his blog, it means evaluators being proactive in sharing what they know.

  3. To learn whether changes are working: Charities should be testing whether the changes they’ve made are working. For example, look at take-up, how new services are being experienced by people, and which elements appear to be working or not.

With the situation changing continuously, you will need to follow a more developmental evaluation approach. In terms of our 5 types of data, you should focus on collecting data on reach, engagement, feedback and possibly some shorter-term outcomes. To support you in this we have produced a worksheet to help evaluators decide on what data to collect.

Longer term, while there’s a worry that financial pressures will force charities to cut evaluation in favour of ‘core’ activities, there is also an opportunity to build on how evaluation is being used to support decisions and to reimagine its purpose. Instead of trying to prove things, we could focus on evidence that is useful day-to-day. I was drawn to a Twitter thread by Ian David Moss on the work of Douglas W. Hubbard, who describes measurement “as an optimization strategy for reducing uncertainty about decisions”. With this mindset, the decision a charity needs to make comes first. The evaluator’s role is to provide whatever information they can to improve the probability of the best decision being made. For me, this is how we evaluators keep ourselves useful.

I have written before that we should only try to measure ‘impact’ selectively and when it’s needed. This is partly because it’s so hard to do, but more pointedly because it always says much more about what’s going on in the outside world than it does about the quality or value of an individual charity’s work. Hopefully the crisis will have brought this point home, especially to funders and commissioners who set unrealistic expectations. Encouragingly, we are hearing about good conversations between charities and funders driven by a more pragmatic focus on what data is most helpful right now and what purpose it serves.

If the current shift to remote working is sustained, then a big ‘impact measurement’ question will be the relative value of virtual vs face-to-face service delivery. Some answers will be evident quickly through data on reach, but we may still need to research how well online delivery works for the most vulnerable populations. As ever, we will learn more if we collaborate on a few high-quality studies rather than everyone trying to answer the same question by themselves.


Methods: What are the implications for how we evaluate?

Coronavirus has had major implications for the methods we use to evaluate. Evaluators are asking themselves what ‘appropriate’ and ‘proportionate’ measurement and evaluation look like in the context of a global pandemic. Inevitably, the need for quick results will mean trade-offs around methods and quality, and decisions about which methods are acceptable and which are not. For me, the most important thing to maintain is representativeness, to minimise bias in results. One thing that hasn’t changed is the huge risk of only hearing from people who volunteer themselves to be consulted.
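The self-selection risk above has a standard partial remedy: post-stratification weighting, where responses are re-weighted so each group counts in proportion to its known share of the population served. A minimal sketch, using entirely hypothetical group shares and satisfaction scores:

```python
# Post-stratification weighting: a minimal sketch with hypothetical figures.
# Assumes we know each respondent's group and the true population shares.
from collections import Counter

# Hypothetical: share of each age group among the service's whole user base
population_shares = {"18-34": 0.5, "35-54": 0.3, "55+": 0.2}

# Hypothetical survey respondents (self-selected, so groups are skewed)
respondents = ["18-34"] * 10 + ["35-54"] * 30 + ["55+"] * 60

sample_counts = Counter(respondents)
n = len(respondents)

# Weight = population share / sample share: over-represented groups
# count for less, under-represented groups count for more.
weights = {
    group: population_shares[group] / (sample_counts[group] / n)
    for group in population_shares
}

# A weighted mean of any response variable then uses these weights.
satisfaction = {"18-34": 4.0, "35-54": 3.5, "55+": 3.0}  # hypothetical group means
weighted_mean = sum(
    weights[g] * sample_counts[g] * satisfaction[g] for g in population_shares
) / sum(weights[g] * sample_counts[g] for g in population_shares)
```

The appeal is that the weights come from data a charity may already hold (monitoring records of who uses the service), so nothing extra needs to be collected; the limitation is that weighting cannot correct for unobserved differences between those who respond and those who don’t.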

Lots of data collection is moving online (alongside services themselves). Quantitative online approaches were already widely used before, so there’s nothing new here (although we suspect people are learning and trying new things and generally improving their skills). More significantly, coronavirus could result in a step change in how we use online qualitative research and novel ways to consult users. The tools already exist thanks to the market research industry and more innovative researchers. Now even the traditionalists among us will be thinking about these opportunities.

As we move through the crisis, a problem will be how to interpret time-series data and longitudinal research. So much of what the evaluation sector does is about measuring change; it’s painful to think about all the baseline surveys and longitudinal studies that are now out of date and so much harder to interpret. If baseline data from before the crisis is going to be any use, then the evaluation community will need to grapple with this, perhaps by pooling data and with some clever thinking about how to recalibrate it.
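One modest way to keep a time series interpretable is to re-index it against a new, post-crisis reference point, so change is read relative to the new situation rather than a stale baseline. A minimal sketch with hypothetical monthly take-up figures (the months and numbers are invented for illustration):

```python
# Re-basing a monthly indicator to a new baseline month: a minimal sketch
# with hypothetical figures (e.g. monthly take-up of a service).
series = {
    "2020-01": 200,  # pre-crisis baseline
    "2020-02": 210,
    "2020-03": 150,  # crisis hits
    "2020-04": 120,
    "2020-05": 140,
}

def rebase(series, baseline_key):
    """Express every value as an index, with the baseline month = 100."""
    base = series[baseline_key]
    return {month: round(100 * value / base, 1) for month, value in series.items()}

pre_crisis_view = rebase(series, "2020-01")   # change relative to January
post_crisis_view = rebase(series, "2020-04")  # change relative to the new April baseline
```

The same data then answers two different questions: how far activity has fallen from the old normal, and whether it is now recovering from the new one.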


Ethics: What must evaluators consider?

Finally, the virus and the switch to online methods raises some ethical issues to think through. For example:

  • Anyone still trying to do face-to-face fieldwork faces new ethical questions about interviewer and respondent safety, and must eliminate any risk of spreading the virus. This blog from IAPS highlights some of the things to consider.
  • A more widespread problem is how the virus is affecting respondents and what that means for the questions we ask and how much we can expect them to engage. We might need to think even more carefully about what people are going through.
  • If we are interviewing people online, then there’s a difference in the kind of rapport we can establish with them and in what we need to tell them upfront. A quick look at our standard ethical guides, like the MRS code of conduct, the SRA ethical guidance and the Academy of Social Sciences, shows that there is nothing specific to online approaches, although there is a useful document from the British Psychological Society.
  • After GDPR, we should all be switched on to data protection, consent and anonymity, but we still need to think about whether moving to online methods raises new issues. For example, I recently noticed that many online survey packages automatically record people’s IP addresses (which are personal data) unless you tell them not to.
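As a concrete illustration of that last point, here is a minimal sketch of stripping an IP-address column from a survey export before analysis. The column name and file layout are hypothetical; real survey packages name and structure their exports differently:

```python
# Dropping a personal-data column from a survey export: a minimal sketch.
# The "ip_address" column name and CSV layout are hypothetical; the IPs
# below are from the documentation-only ranges, not real respondents.
import csv
import io

raw_export = io.StringIO(
    "respondent_id,ip_address,answer\n"
    "1,203.0.113.5,agree\n"
    "2,198.51.100.7,disagree\n"
)

reader = csv.DictReader(raw_export)
cleaned_rows = [
    {k: v for k, v in row.items() if k != "ip_address"}  # drop personal data
    for row in reader
]
```

Better still is to switch off IP collection in the survey tool itself, so the personal data is never stored in the first place.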

So, these are some of the issues that are going through our minds. We don’t have a crystal ball, but this is our best guess at what’s important. Have we missed anything? Let us know via Twitter or email.

