
Financial incentives and charity survey participation

By Ben Fowler · 4 June 2021 · 5 minute read

Financial incentives are often overlooked, or treated as an afterthought, in charity sector survey design. By contrast, paying participants is standard practice in commercial market research. This blog explores some of the data quality, ethical, and resource considerations relating to financial incentives for charity surveys, and looks at what the charity sector could learn from commercial market research’s use of incentives.

There are, of course, many non-financial issues which affect participation (including the survey method, survey length, and how the research is introduced to participants) but in this blog we are dealing solely with the questions: How does giving a ‘thank you’ voucher or cash reward affect response rates and survey representativeness? And what other factors should be considered when deciding whether to offer a survey incentive?

It is worth noting that, while researching the efficacy of survey remuneration for this blog, I found that there is a lot of research about research out there, with mixed results from all sorts of different contexts, which makes it challenging to draw firm conclusions.

The case for incentives

On the face of it, the balance of evidence appears to lean in favour of financial incentives. Multiple systematic reviews (for example this review on increasing responses to postal and electronic questionnaires and this review on the association between response rates and monetary incentives) have analysed the data from hundreds of controlled trials and have consistently found that monetary incentives do increase survey response rates in many circumstances. An example from the youth sector helps to illustrate this point. A randomised trial published last year by the Danish National Board of Social Services concluded that sending unconditional incentives can be an effective strategy. Three quarters (75%) of young people who were mailed a €15 supermarket voucher responded to the survey, compared to less than half (43%) of the control group who did not receive a voucher.

In addition, a 2017 report from The World Trade Center Health Registry, a longitudinal health study involving a cohort of over 70,000 people exposed to the 9/11 terrorist attacks in New York City, found that incentives can be particularly useful near the end of a data collection period. In our experience at NPC, the tail end of a project is often when a response rate boost is most needed.

The type of incentive has been found to be important too. For example, a prize draw is often used for surveys (and is relatively easy to administer) but there is some evidence to suggest that a guaranteed incentive for each participant is more effective.

In our experience of running surveys on behalf of charities, we have found that incentives can be particularly useful when trying to reach ‘seldom heard’ populations, such as people with experience of homelessness or the criminal justice system. There are parallels with commercial market research in this regard. Commercial market research companies like YouGov and Ipsos MORI routinely give each participant ‘points’ worth about 50p per survey completed. However, they offer more generous incentives to their ‘hard to reach’ groups (such as people with high incomes, CEOs, and doctors). It may feel problematic to offer certain charity target groups higher incentives than other people within the same survey, but if the practice helps to reduce sample bias, then you could argue that unequal incentives are an imperfect but pragmatic ethical choice.

On the subject of ethical considerations, there is also the argument that people should always be paid for their time. In the charity sector, we like to talk about ‘experts by experience’ but that sentiment is typically not demonstrated financially. Sharing social research budgets with the people being researched is rare. In many evaluations—including projects NPC has worked on—there are financial transactions throughout the system (for example between funders, grant-holders, research partners, and sub-contractors) but not where the money is most needed. When ‘seldom heard’ groups, such as young people in deprived areas or people with disabilities, are tasked with filling out surveys without the prospect of a financial reward, should we really be surprised if the sample size turns out to be too small to enable meaningful analysis?

A key reason why many organisations do not share research budgets with the people being researched is the direct, upfront cost of survey incentives. Charity and funder research budgets are often small and thinly stretched, so setting aside hundreds or thousands of pounds to try to boost response rates with cash or voucher rewards is a significant commitment. To help make that decision easier, it is worth considering the consequences of different survey response rates. Larger and more representative surveys result in more robust research, which might inform service improvements or help demonstrate impact and secure extra funding. A higher response rate also means that the required number of survey responses may be achieved earlier, so you will not have to spend time sending out multiple survey reminders or finding more people to take part, and you can get on with more useful activities, like delivering the service or analysing the data.

The counter arguments

While money may appear to grease the wheels of research, there are various contextual nuances, exceptions, and bias risks for the informed researcher to consider. For example, there are studies which have found that, in some contexts, offering incentives can backfire. A landowner survey in New Zealand found that groups who were offered cash or a donation to a charity of their choice were less likely to respond than a control group who were not offered any incentive. We can speculate that some people understood the benefits of the research and so did not participate as they did not want the experience to feel like a financial transaction. An alternative explanation is that adding monetary incentives makes people think about the financial value of their time and may lead them to reason that a small ‘thank you’ reward is not worth it.

As the New Zealand case suggests, offering an incentive can influence the types of people who respond to a survey, contributing towards what is known as ‘participation bias’ or ‘non-response bias’—a phenomenon in which survey results are unrepresentative because the participants disproportionately possess certain characteristics or demographics. A well-documented example is that women are generally more likely to participate in surveys than men, and there is evidence that this gender difference can increase with the inclusion of prepaid incentives. In large social surveys with stratified sampling (where people are randomly invited in subgroups based on their characteristics or demographics) the risk of participation bias is greatly reduced. However, most charity surveys use simpler, non-random sampling methods (for example ‘opportunity’ sampling, where anyone in the target group willing to take part is invited), which makes them more prone to incentive-related participation bias. An important caveat here is that not offering an incentive could also increase participation bias—how people respond to a survey incentive (or the lack of one) depends on who they are and the context of the research, among other factors.

Above, I provided a seemingly strong moral case for the sharing of research budgets with the people being researched. However, there are some important counter arguments to consider. Firstly, giving financial incentives to survey participants in low-income groups has the potential to add bias to the research findings. Receiving a few £5 or £10 vouchers over the course of a research programme might not seem a lot but it could theoretically skew the final results of an evaluation of a programme aiming to tackle the consequences of economic hardship. One way around this problem is to recruit a randomised comparison or control group and to give them the same financial incentives as the intervention group, but this would be a tall order for many charities. The second counter argument is the risk posed by welfare means-testing rules. Scottish government advice about paying research participants states that ‘payments from research can affect the amount [people receiving welfare] get from the government’ and that ‘the fear of losing social security benefits can be a major factor in people not becoming involved in research.’

Another argument against offering financial incentives is the risk that participants try to game the system. If there are no checks and balances in place, a survey participant could be tempted to try to claim multiple rewards. Several years ago, I worked for a commercial research agency which offered above-market-rate incentives. The surveys were typically completed quickly with high response rates, but the trade-off was that we had to have various processes in place to identify the ‘speeders’ (people rushing through a survey just to claim the reward) and people trying to complete the same survey more than once.

Finally, with our systems thinking hats on, we should consider the potential long-term unintended consequences of financial incentives. After receiving a few survey rewards, it is possible that some people will come to expect remuneration for every survey they complete. This raising of expectations could, in theory, result in even lower response rates for non-incentivised surveys run by organisations that can’t afford to pay participants. On this point, the literature is inconclusive: no decrease in participation in non-incentivised surveys has yet been linked to the use of paid surveys. As ever, more research is required.

Find out what works (and what doesn’t work) in your context

A smart use of financial incentives could—in many contexts—improve charity survey participation. However, the key word is contexts. Monetary rewards do boost response rates in many situations, but not always, and in some situations they can even be counterproductive. Beyond their effect on participation, financial incentives can also help or hinder a survey’s data quality, and they carry ethical consequences and practical resource implications—it all depends on the context.

We would, therefore, recommend that research teams think carefully about how these complex and interconnected considerations apply to the context in question. If, after weighing up the pros and cons, you decide to proceed with financial incentives, then we would encourage you to experiment with different levels of cash, vouchers, and/or prize draws. Find out what works (and what doesn’t work) in your context, and publish the data to further inform the literature discussed here.

If you have any questions or reflections about survey incentives (or non-financial ways to improve survey participation), you can contact me by email or on Twitter: @ben_m_fowler
