
Innovative or invasive? Charities, data and digital campaigning

By Andrew Weston 4 December 2017

It is increasingly possible to use data from people’s online activity to learn about and target them. These techniques do seem to work, but they come with some big ethical questions—and have made headlines when used by political and private organisations. So should charities be using them to build support for social causes? Andrew Weston explores the issues.

In the past year or so there have been stories about a wide range of groups using social media to gather data about, and target, specific audiences in order to push their agendas.

Many of these techniques have arisen out of growing competition: we live in a crowded market for ideas, where people are bombarded with information, offers, and appeals. Organisations turn to these techniques to cut through the noise and reach the people they want to reach more effectively.

This type of digital campaigning is fraught with moral issues around privacy and influence. But it could also be a powerful tool to help shape people’s ideas for social good.

So if charities are in danger of falling behind as others adopt these new techniques, is it practical, and ethical, for them to follow suit?

How do these techniques work?

Many of the newer methods of digital campaigning developed in the private and political spheres start by harvesting data from a public source—like Twitter—before ‘mining’ it. That means using specialist software to find different themes and trends within the data. It helps identify what people talk about, what motivates them, and who they connect and communicate with.
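
To make this concrete, here is a minimal sketch of the ‘mining’ step, written in Python. The sample posts, stopword list and simple word counts are all illustrative assumptions; a real project would pull data through a social media API and use more sophisticated topic modelling, but the basic idea of surfacing common terms and hashtags is the same.

    # Minimal sketch: finding common themes in a set of public posts.
    # The sample posts are invented for illustration only.
    import re
    from collections import Counter

    posts = [
        "Volunteering at the food bank this weekend #community",
        "Why is local funding for youth services being cut? #community #youth",
        "Great turnout at the youth club fundraiser last night",
    ]

    stopwords = {"the", "at", "this", "is", "for", "a", "and", "why", "being", "last"}

    words = Counter()
    hashtags = Counter()
    for post in posts:
        for token in re.findall(r"#?\w+", post.lower()):
            if token.startswith("#"):
                hashtags[token] += 1
            elif token not in stopwords:
                words[token] += 1

    # Surface the most common terms and hashtags as rough 'themes'.
    print("Top terms:", words.most_common(5))
    print("Top hashtags:", hashtags.most_common(3))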

These insights give companies and campaigners the information they need to create highly targeted material—material released at just the right time, to the right individuals, to maximise its impact.

Once a campaign has launched, data on the public response can be collected and mined in the same way. This lets the campaign group understand not only how well the campaign is working, but also which groups have been affected by which parts of it, and how these things interrelate. For example, they may find their campaign is particularly well received by people with high levels of education, or by people from the North of England.

Live access to this data offers quick insight into what’s working well, and enables campaigners to tweak their delivery in a continuous feedback loop.
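
As a rough illustration of that feedback loop, the sketch below compares engagement rates across audience segments so that delivery could be adjusted between rounds. The segments and response data are invented for the example; in practice these figures would come from the campaign’s own analytics.

    # Minimal sketch: comparing how different audience segments respond
    # to a campaign. The segments and responses are invented examples.
    from collections import defaultdict

    responses = [
        {"segment": "north_england", "engaged": True},
        {"segment": "north_england", "engaged": True},
        {"segment": "north_england", "engaged": False},
        {"segment": "graduates", "engaged": True},
        {"segment": "graduates", "engaged": False},
        {"segment": "graduates", "engaged": False},
    ]

    totals = defaultdict(int)
    engaged = defaultdict(int)
    for r in responses:
        totals[r["segment"]] += 1
        if r["engaged"]:
            engaged[r["segment"]] += 1

    for segment, total in totals.items():
        rate = engaged[segment] / total
        print(f"{segment}: {rate:.0%} engagement ({engaged[segment]}/{total})")
        # A campaigner might shift budget or rewrite messaging for
        # low-engagement segments, then re-measure on the next cycle.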

As you would expect, the approach requires a relatively high degree of digital skill within a communications team (or an appropriate partner organisation). But medium-to-large charities do have resources comparable to those of the private companies and politicians that have used these methods. And charities that already have significant digital skills will find many of these methods fairly cheap and simple to perform. So perhaps more charities should be using methods like this.

What about the ethics?

The information used for these techniques is usually publicly available. Yet there is a significant question over whether people really know how their data is going to be used when they post it.

And of course, many of us only know about these techniques because of headlines about their shadier uses: elections influenced and privacy breached. This is a powerful tool, which means it can be abused, unwittingly or otherwise. There is a danger that targeted messages not only inform but unfairly manipulate, taking advantage of people’s vulnerabilities to encourage them to make choices they would not otherwise make.

And even where intentions aren’t ostensibly manipulative, issues can arise.

Take Radar, a Twitter-based app launched by the Samaritans in 2014, for example. Subscribers signed up and the app then scanned the tweets of the people they followed for keywords that suggested someone might be struggling to cope. When it spotted such a tweet, it emailed the subscriber, pointing them to the tweet and suggesting ways to offer support.

While this was clearly done with good intentions, many people, not least those whose tweets were being monitored, found it overly invasive, and the app was quickly shut down.
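
For readers curious how little machinery this kind of matching involves, here is a purely illustrative sketch in the same spirit as, but not a reconstruction of, Radar. The keyword list, sample posts and flagging step are all invented; the point is that the technique itself is trivial, which is exactly why the consent questions below matter.

    # Purely illustrative sketch of keyword spotting over public posts.
    # The keywords and sample posts are invented for this example.
    KEYWORDS = {"struggling", "can't cope", "so alone"}

    posts = [
        ("@user_a", "Lovely walk in the park today"),
        ("@user_b", "Honestly struggling at the moment, feel so alone"),
    ]

    for author, text in posts:
        lowered = text.lower()
        matched = [k for k in KEYWORDS if k in lowered]
        if matched:
            # In a real system this is where an alert or email would be
            # triggered; it is also where the consent questions begin.
            print(f"Flagged {author}: matched {matched}")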

How can charities approach this sensitively?

There are no easy answers here, and wiser heads than NPC are grappling with these issues. But here are some starting points:

Consultation and consent

As GDPR looms, charities will be reminded of the importance of understanding and complying with the current regulation. (We’ve got some guidance here and here, and will be bringing out more in the new year.)

Much of the regulation on personal data centres on getting people’s permission. So when using data collected from service users, volunteers or donors, you need to make sure they are meaningfully engaged about how the data is going to be used and have given informed consent.

But when dealing with public data like social media this can be much more complex. Still, it’s important to consult as widely as possible—for example, by talking to a sample group of the sort of people whose social media data you would be using.

Engage in the ethics debate

The pace of change in the world of technology means that law and regulation are always struggling to keep up. People’s perceptions of their data, and of how they want it to be used, are changing too. So it’s important to keep up with the discussions and debates on all of this. There are plenty of people talking about it, and being aware of the various issues at play will help you make informed decisions.

One rule of thumb that was given at a recent NPC event was to ask yourself: would you be happy if a loved one’s data were used in this way? Your child, or an elderly parent? It could help you and your team think it all through.
