
1. Quantitative methods

Collecting data from staff and service users, using surveys, online data and social media.

In our guide to Understanding impact, we explore how to use your theory of change to build a measurement and evaluation framework. In this closer look we explain in more detail how to make the most of quantitative methods.


In Understanding impact we recommend using a mixture of quantitative and qualitative measures. In our experience, there are four quantitative methods you can use effectively. These are:

  1. Routine data collection from staff or through internal systems.
  2. Routine data collection from service users.
  3. Surveys.
  4. Internet data: social media listening, clicks, downloads and so on.

In practice, there is a grey area between methods two and three. The routine data that charities collect from service users may look and feel a lot like a survey: for example, asking questions about people’s attitudes before and after an intervention. These two methods can therefore be read closely together.


1. Routine data collection from staff or through internal systems

This is generally about counting what you deliver, to whom and how. This will mainly be output data. For example, for a health information helpline it may include the number of enquiries resolved, documents downloaded, events run, profiles of service users and customer satisfaction. On the service improvement side, it may include records of contacts made, meetings attended, and changes in policy and practice.

2. Routine data collection from service users

Any charity over a certain size will need a good system for recording data about its service users. Because this data is collected by staff or volunteers, you should be mindful of the burden it can place on them. A good system will enable staff and volunteers to record all types of data about individuals, most commonly user, engagement and feedback data.

Getting the right data system is therefore very important. We have noticed that projects broadly fall into three stages, linked to their size.

Stage 1: Basic, paper-based

The very smallest projects can sometimes get by with limited paper-based systems. For example, referral forms will be kept on file, the register is written in a book and case notes are on paper. If more than one site is operating then local sites keep their own files, and little is centralised or analysed together. This type of system is inefficient, and charities will soon find their measurement needs outgrow it.

Stage 2: Basic, electronic

In this system, standard electronic and online tools are used (like Word and Excel), but data and information are in different files. For example, attendance might be recorded in a spreadsheet, referral forms and case notes are stored in Word documents, and an online survey tool might be used for some data collection. Local sites will still store their own data, but they might send some information by email for analysis at head office.

This is a better approach but is still inefficient because someone will need to put a lot of work into pulling all the information together. Data quality is likely to be poor because the system is hard to supervise. This type of system will quickly prove inadequate for any kind of detailed analysis as different data sets do not ‘speak’ to each other.

Stage 3: Integrated system

In this approach there is one electronic system in which all information is recorded and stored. This means attendance and engagement by individuals is entered alongside any progress they make and feedback received. All local sites have access to the same system, and data entry is monitored to ensure consistency. Automatic reports are set up so staff and managers can be updated as desired.

We would suggest that any project delivering to more than a hundred beneficiaries per year should use an integrated system. An increasing number of off-the-shelf systems are available, some of which are very affordable.

The importance of data linking

An important aim of all quantitative data collection is to give you the raw material to study patterns and relationships between the different types of data. For example:

  • Which types of users have the most engagement with the service?
  • What feedback do we get from people who don’t come as often? Can we do anything to change that? Do the changes we make improve engagement?
  • Do we get better outcomes from those users who seem to have the greatest needs? How much engagement do we need to have from different types of users to achieve the best outcomes?
  • Are outcomes strongest for those users who give us the best feedback? If so, then we are starting to get good indicative evidence that the project is working.

A key feature is that the data is linked at the individual level: the feedback you receive from an individual can be traced back to their needs, their engagement and their outcomes. This can be difficult to do, because every piece of data needs to be reliably associated with an individual, but it provides far more value than holding engagement data at the individual level while feedback or outcomes are gathered through an anonymous survey. It is the main reason why scaling projects need good electronic case management systems.
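As a minimal sketch of what this linking can look like at the analysis stage, the example below assumes your system can export user, engagement and outcomes data as CSV files sharing a user_id column (all file and column names here are hypothetical):

    # A minimal sketch of individual-level data linking, assuming CSV exports
    # that share a user_id column (file and column names are hypothetical).
    import pandas as pd

    users = pd.read_csv("users.csv")            # user_id, age_group, needs_score
    engagement = pd.read_csv("engagement.csv")  # user_id, sessions_attended
    outcomes = pd.read_csv("outcomes.csv")      # user_id, outcome_score

    # Link everything at the individual level on user_id.
    linked = (users
              .merge(engagement, on="user_id", how="left")
              .merge(outcomes, on="user_id", how="left"))

    # Example question: do users with the greatest needs achieve the best outcomes?
    print(linked.groupby("needs_score")["outcome_score"].mean())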


3. Surveys

Surveys are one of the main ways to collect evidence about projects, but they are hard to get right and there are some common mistakes to avoid.

What should you use surveys for?

Surveys are primarily a quantitative research method. The aim is to collect consistent information from a large enough number of people so you can measure the issues you are interested in: for example, the proportion of service users who regard your project positively or who feel their lives are more stable as a result of your work.

Quantitative surveys are less helpful if you are only working with a small number of people or if your project is still at an exploratory stage. In this case, qualitative approaches will be more useful (see our paper on Proportionate evaluation for more advice on tailoring your approach to development stage).

Who should you use surveys with?

Voluntary sector organisations mostly use questionnaires to measure the experiences and views of service users, but there are other stakeholders with whom questionnaires can be used. These include:

  • Staff and volunteers.
  • Partners or other stakeholders (e.g. referral partners).
  • The wider community.
  • Service users’ families, where appropriate.

Once you have decided which groups you want to survey, you should think about sampling. If your project is running on a small scale, you may decide to try to get everyone to complete your surveys. For larger projects, an alternative is to select a representative sample, as the sketch below illustrates.
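The simplest version of this is a random draw. The sketch below selects a simple random sample of service users to invite; the names and numbers are hypothetical, and achieving true representativeness may also require stratifying by key characteristics such as age or location:

    # A minimal sketch of drawing a simple random sample of service users
    # to invite to a survey (names and numbers are hypothetical).
    import random

    service_users = [f"user_{i:03d}" for i in range(1, 501)]  # e.g. 500 users

    random.seed(42)  # fix the seed so the draw is reproducible
    sample = random.sample(service_users, k=100)  # invite a random 20%
    print(len(sample), sample[:5])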

Issues related to using surveys with service users

Using questionnaires with service users raises challenges:

  • People struggling or being unwilling to complete questionnaires.
  • Finding the right time and location to complete a questionnaire.
  • Designing a questionnaire that is meaningful and intelligible. If service users do not see themselves, their issues or interests in a questionnaire, then they are unlikely to engage with it properly or provide useful data.
  • Literacy and numeracy issues.
  • English being a second language.
  • Persuading service users to answer truthfully, rather than providing answers they think the project wants.
  • Questionnaire and form-filling overload.

Despite these pitfalls, questionnaires can still be useful if used sparingly and in line with the good practice outlined here.

How to use surveys

Different methods

Research textbooks say the best way to administer questionnaires is face-to-face. This is because an interviewer can ensure that the respondent completes the questionnaire properly and consistently, and is on hand to answer any questions. The downside is the time and effort needed. Moreover, if staff or volunteers are conducting the interviews, there is an increased possibility of bias, because respondents may want to provide the answers they think staff want to hear (although, on the other hand, they may be more likely to trust someone they know).

Many questionnaires are therefore completed by service users themselves on paper (known as a ‘self-completion’ approach). This is the easiest way to deliver questionnaires, but relies on service users’ understanding, willingness and patience to complete them, or else the quality of data can be poor.

Many questionnaires are also sent online. This is a great approach for reaching respondents quickly, especially if they are spread across different locations. It also has the advantage that data-entry is automatic. If you are using questionnaires with stakeholders or staff, then online is almost certainly the best option. However, online questionnaires are less likely to be effective when surveying some groups because they may not have internet access. You will need to find a way to send the survey link to them and you will need to work hard to get them to complete it.

The final option is that questionnaires can be conducted by phone. The downsides are that it is not appropriate for sensitive subjects, it takes time to call people to do the questionnaires and it is not possible to use visual prompts.

Whichever option you pick, you will need to think about whether and how you can link your survey data to individuals, most commonly by allocating ID numbers. This is important both for linking the different types of data described above and for comparing before and after outcome data at the individual level, as in the sketch below.
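As a minimal sketch of why those ID numbers matter, the example below pairs before and after responses at the individual level (file and column names are hypothetical):

    # A minimal sketch of comparing before and after survey scores at the
    # individual level, assuming each response carries the same respondent ID
    # (file and column names are hypothetical).
    import pandas as pd

    before = pd.read_csv("baseline_survey.csv")  # respondent_id, wellbeing_score
    after = pd.read_csv("followup_survey.csv")   # respondent_id, wellbeing_score

    paired = before.merge(after, on="respondent_id", suffixes=("_before", "_after"))
    paired["change"] = paired["wellbeing_score_after"] - paired["wellbeing_score_before"]

    print(paired["change"].mean())  # average individual-level change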

For conducting your survey, there are three main options:

  • In-house surveys: Off-the-shelf packages like Survey Monkey and Smart Survey enable charities to conduct good quality research at a very low cost. However, postal or telephone approaches are still better for some groups such as older service users. Maximising response rates is the enduring challenge with in-house surveys. The key is persistence. You should give people repeated chances to participate.
  • Commercial online research panels: These are members of the public who have signed up to take part in regular surveys, meaning they can be surveyed relatively cheaply. Larger panels also mean you can target people with particular characteristics (e.g. smokers). A drawback is that online panels underrepresent older people and those with lower socio-economic status. Research companies try hard to address this, but the suspicion that results are unrepresentative remains.
  • Commercial omnibus surveys: These are regular cross-sectional surveys conducted by research companies that you can buy questions on. They interview different people each time (usually by phone) so they are less ‘self-selecting’ than online panels, but also more expensive.

Questionnaire length

The length of your questionnaire, and therefore how much information you can collect, will be determined by how long respondents are willing to speak to you. It is always advisable to be brief.

  • Face-to-face: 20 minutes, or about 40 to 50 questions. In fact, many social research questionnaires will take much longer than this (sometimes up to an hour), but to do this you will need to provide some incentive (such as a small payment). There is also the issue of where the questionnaire is being completed. People will talk for longer at home or in the office, but on the street you will need to keep it shorter.
  • Paper and online: 5 to 10 minutes, or about 20 questions.
  • Telephone: 10 to 15 minutes, or about 30 questions.

It’s important to tell people how long the survey may take. Test and time your survey prior to using it and update your estimate as you go along.

Questionnaire design: borrow where possible

Wherever possible you should use questions that have been used before. This is because:

  • It saves you time.
  • Questionnaires that have been used before have often been designed by experts and may have been tried, tested and improved. They are likely to be better than questions you have designed yourself and also more credible to stakeholders.
  • It gives you the potential to compare your results to those of other services or national statistics.
  • You may even find there are whole questionnaires suitable for measuring what you need.

If you cannot find an off-the-shelf survey tool that works, the next best option is to compile a questionnaire from different sources. There are plenty of online resources that help you do this, which we list in Understanding Impact. It is also worth checking national government surveys, which cut across many policy areas and may include questions you can use or adapt. To assist with this, the UK Data Service Variable and Question Bank is intended to be a one-stop-shop for accessing all social research questionnaires.

Questionnaire design: writing your own questions

As a last resort you may have to design your own questions. You should predominantly use ‘closed questions’ where the possible answers are set out in advance so the respondent just needs to tick a box. These have the advantage of giving you consistent measurable responses.

Closed questions consist of two elements: the questions themselves and the scale (i.e. the possible responses). Here are some tips for each:

  • The best questions are strictly factual: for example, asking respondents what they have done, when and how often. This works because there is less room for ambiguity or interpretation. For this reason, social researchers will often translate the issue they are interested in into proxy indicators. For example, if interested in ‘improved hope or optimism’, they might ask instead about personal care, how much time the person has spent taking part in activities or the number of jobs they have applied for. These ‘facts’ are regarded as less subjective than simply asking how optimistic the person feels.
  • Use neutral language. Avoid adjectives or anything emotive that may affect how the respondent interprets a question.
  • Use simple language. It’s perhaps an obvious point, but worth repeating that the questionnaire should be written in language that respondents will understand, which means being as simple and clear as possible, avoiding jargon, acronyms or any complicated words.
  • Try to avoid words that are open to interpretation. For example, use ‘daily’ or ‘weekly’ rather than ‘often’ or ‘usually’.
  • Ask one thing at a time. Often, if you look at questions closely you find two or more issues have been conflated. For example, ‘how satisfied are you with the support and guidance you have received?’ is potentially asking about two different things, which may confuse the respondent and confuse you when it comes to analysis.
  • Watch out for double negatives. These can creep into questionnaires, for example “Do you agree or disagree that you no longer need support?” is confusing, whereas “Do you still need support?” is not.
  • Avoid leading or pre-judging responses. So, rather than ‘how satisfied are you with the service?’, ask ‘how satisfied or dissatisfied are you?’.
  • Phrase sensitive or potentially incriminating questions in the least objectionable way. It can often help to put some distance between the respondent and the issue in question by framing the question in terms of their opinion on third parties. For example, ‘some people have said this, what do you say?’.

In terms of answers, be aware of the distinction between ‘nominal’ and ‘ordinal’ as most answers will fall into one of these categories.

  • ‘Nominal’ refers to sets of answers with no intrinsic order. For example, asking which part of the city someone lives in can have a list of answers in any order. Nominal questions can be either single choice or multiple choice.
  • ‘Ordinal’ refers to answers where there is an order, so for example ‘how old are you?’, ‘how satisfied are you?’ or ‘what is your highest level of qualification?’.

Tips for thinking about answers to nominal questions:

  • Try to ensure that all the possible answers are available, and always include an ‘other’ option where the respondent can write in something else. If you do use an ‘other’ option, look at this data early in the analysis stage to see whether there are common answers to add to your list, and update your questionnaire if you can.
  • Don’t ask respondents to rank different options. For example, “please rank aspects of this course from best to worst”. These questions are difficult to answer and to analyse. A better option is to ask people to select all the options they want, or possibly the ‘three most important’. This will give you a natural ranking that is easier to interpret.
  • Ensure categories given are mutually exclusive, or allow people to select more than one.
  • Try to vary the order in which categories are presented to different respondents. For example, market research companies will tend to randomise the display of answers in online surveys or produce two versions of a questionnaire with the answer categories reversed (see the sketch after this list). This minimises the risk of ordering bias, in which the first or second answers are selected more often.
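A minimal sketch of randomising answer order for each respondent (the option labels are hypothetical):

    # A minimal sketch of randomising the display order of nominal answer
    # categories per respondent, to reduce ordering bias.
    import random

    options = ["Poster", "Word of mouth", "Social media", "Referral"]

    def options_for_respondent():
        shuffled = options[:]        # copy so the master list stays intact
        random.shuffle(shuffled)     # new random order for each respondent
        return shuffled + ["Other"]  # keep 'Other' fixed at the end

    print(options_for_respondent())
    print(options_for_respondent())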

Tips for thinking about answers to ordinal questions:

  • Question scales are very useful where you want a respondent’s opinion or feelings about an issue. These involve giving respondents three or more options reflecting different levels of opinion. Scales are nearly always more useful than simple yes/no responses. For example, “How would you rate today’s training?” is far better measured with a five-point scale, running from very good to very bad, than a simple good/bad split. The respondent will feel happier with the choice available and you will get much richer data to explore.
  • Ideally, scales should have a mid-point so people who do not have an opinion either way can express their view. However, four-point scales are also common and can be effective.
  • Increasingly questionnaires are moving towards longer 10- or 11-point scales (for example the ONS wellbeing questions and the widely used Net Promoter Score). If using a long scale like this you do not need to label each point, just the end points. For example, 0 = Very dissatisfied and 10 = Very satisfied.
  • You can use pictures or smiley faces on scales rather than numbers or words, which may be more appropriate for your audience.
  • You should also always include a ‘don’t know’ or ‘not applicable’ option for people who can’t or do not wish to answer.
  • Finally, always include at least one question that respondents can answer in their own words. This will improve their experience of the questionnaire and will probably give you some very useful data; it is usually best asked at the end. It is possible to ‘code’ the responses to an open question: read through the answers, develop a set of categories from them, and then count how many times each category appears across all the data (see the sketch after this list). This is a good way to summarise what people are saying. It is also valuable when you cannot predict what people will say: ask the question open-ended initially, see what people say, and then develop a set of pre-coded responses you can use in future versions of the questionnaire.
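A minimal sketch of that coding step, using keyword matching for brevity; the responses, keywords and categories are hypothetical, and in practice coding usually means reading each answer rather than matching keywords:

    # A minimal sketch of 'coding' open-ended responses: map each answer to a
    # category, then count how often each category appears.
    from collections import Counter

    responses = [
        "The staff were really friendly",
        "Hard to find the venue",
        "Friendly and welcoming team",
        "More evening sessions please",
    ]

    keywords = {"friendly": "Staff attitude", "welcoming": "Staff attitude",
                "venue": "Accessibility", "evening": "Session times"}

    counts = Counter()
    for text in responses:
        matched = {cat for word, cat in keywords.items() if word in text.lower()}
        counts.update(matched or {"Uncoded"})

    print(counts.most_common())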

Sequencing and ordering

In sequencing a questionnaire it’s best to move from general to specific issues. Start the questionnaire with the key overall questions you are interested in, so you get people’s top-of-mind responses. In addition, try to keep questions on the same subject grouped together. Watch out for the risk of one question influencing another: for example, if you ask questions about fear of crime and then ask what the most important issues in an area are, people are likely to mention crime in answer to the second question as well.

Other tips are:

  • Try to minimise the amount of change in question and answer formats between questions, which can frustrate respondents and cause them to make mistakes.
  • Watch out for inconsistencies in answer ordering; for example, responses should always run in the same direction (say, positive to negative) throughout a questionnaire.
  • Use transitional sentences between sections to tell the respondent what is coming up.
  • If personal data is required and you think individuals are unlikely to complete the whole survey, ask for that information first.

Look and feel

For both online and paper self-completion questionnaires you should give some attention to how the questionnaire appears: the layout, colour scheme, use of images and so on. Tips are:

  • Have a short introductory section that explains the purpose of the research and how it will help the charity.
  • Make sure questions are spaced out and there is a lot of clear, white space. Dense or complicated questionnaires are very off-putting.
  • Some pictures can be appealing, but don’t go overboard.
  • Use colour if you can.
  • Try to minimise the amount of filtering (i.e. questions that are only applicable to some people, with associated instructions about where to go next). It’s often easier to get everyone to answer all questions and to filter people when you are analysing the data (see the sketch after this list).
  • For open-ended questions, make sure there is enough space to write in (this will also encourage people to write more).
  • For greater clarity, put question wording in bold font and answers in normal font.
  • Include a “thank you” at the end of the questionnaire along with instructions about what to do next.
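A minimal sketch of filtering at the analysis stage rather than routing respondents inside the questionnaire (file and column names are hypothetical):

    # A minimal sketch of filtering respondents during analysis instead of
    # using in-questionnaire routing (file and column names are hypothetical).
    import pandas as pd

    responses = pd.read_csv("survey_responses.csv")

    # Everyone answers every question; this analysis is restricted to
    # respondents who said they attended an event.
    attendees = responses[responses["attended_event"] == "Yes"]
    print(attendees["event_rating"].value_counts())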

Testing a questionnaire

It is important to test draft questionnaires with other people, as you will always find ways to improve them. Start by sharing them with colleagues to iron out any obvious problems, and then move on to testing with one or two service users if you can. Get them to complete the questionnaire as if it were real, which will tell you how long it takes, and then go through the answers with them to ask whether they felt confused at any point. Talk through what they said and check that your interpretation of their responses is correct.

Getting a good response rate

As noted above, if you are surveying only a small number of people, qualitative focus groups and interviews may be more appropriate; surveys tend to be more meaningful with larger samples. Tips for getting a good response rate include:

  • Marketing your survey to your target group. This can be in person, through posters or by email. Marketing can be a major part of improving your response rate, and of ensuring that individuals who do not interact with your service regularly are aware that they are being asked for feedback.
  • Sending reminders to people who have not completed the survey.
  • Offering an incentive or a prize for completion.
  • Keeping your survey jargon-free.
  • Ensuring that the survey is not too long.
  • Putting in a progress bar for online surveys, so that individuals know how much of the survey is left.

4. Media monitoring and social listening

Monitoring digital media channels such as Twitter and Facebook (‘social listening’) is one method of collecting output data to test the uptake of communications messages. A range of free tools is available to help you do this, and many commercial organisations and brands use social listening in sophisticated ways that go beyond counting ‘mentions’ on Twitter or setting up free Google Alerts. Two widely used free tools are Hootsuite and Social Mention.

Media monitoring offers several benefits to any charity. Namely:

  • Listening to and engaging with public opinion. Listening gives you a marker of how your charity is viewed in the wider marketplace, or how your campaigns are being received, and allows you to respond and adapt your activities accordingly.
  • Identifying influencers and ‘champions’ who support your charity, and potentially providing a route to engagement.


Read our full guide to using your theory of change to build a measurement and evaluation framework.
