
AI for charities: What you need to know

By Hector Edwards | 26 June 2023 | 5 minute read

NPC’s Director of Innovation and Development, Tris Lumley, recently hosted a webinar on AI in the charity sector with Tania Duarte, Co-founder and CEO of We and AI, and Rhodri Davies, Director of Why Philanthropy Matters. You can watch the webinar in full here. In this blog, we build on that discussion to give practical advice on getting started with AI.

Artificial Intelligence (AI) has been with us for decades, but recently it feels like it’s arrived in a much bigger, more significant way. This is thanks to the launch of ChatGPT in November 2022. Reaching 1 million users in just 5 days, it was the fastest-adopted software application in history. Generative AI has arrived, and the hype is huge; there’s a lot happening and it’s happening fast.

If you feel overwhelmed, you’re not alone. In this blog we’ll look briefly at some important background information, possible applications, and opportunities of AI. Most importantly, we’ll highlight some of the risks you need to consider to ensure a safe and ethical discovery and implementation process. We and AI CEO Tania Duarte advises a carefully considered approach to using AI: take the time to make informed decisions that are right for your organisation and the people you serve.

Below we outline three key steps to consider when getting started with generative AI at your organisation.

1. Learn and understand what Generative AI is 

Generative AI tools are complex algorithmic systems capable of producing content. This commonly includes text, audio, images or videos, and commonly used tools include ChatGPT, Bard, DALL-E and TOME. However, before using any of these tools, it’s important that you first understand what they are, how they work (at least at a basic level) and what they can’t or shouldn’t do. Tania Duarte, We and AI CEO, suggests agreeing on a definition of AI that’s pragmatic and useful for you as an organisation. Involving staff in this process, and in all decisions surrounding AI, is vital. There is likely to be a range of attitudes, from those already embracing it to those who are understandably anxious.

2. Explore how you can use AI in your work

New use cases are being discovered daily. The majority of uses for charities involve content creation. Generative AI may help you to produce ideas and save time as a ‘first draft generator’ for anything from blogs and campaign emails to grant reports and presentations (TOME). However, it’s important to recognise that ChatGPT and Bard operate via pattern recognition: they generate text by predicting the most likely next word, drawing on a vast data set. The result is impressive and readable text, but don’t expect ground-breaking insights. Nevertheless, it can provide an excellent starting point. One area we are looking into at NPC is how we could refine a set of prompts to generate useful starting points for Theory of Change processes. The best way to discover how AI can help your organisation is to experiment with the tools. However, it is vital that you do so armed with knowledge of the risks, and of the mitigating steps you can take.
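To make the idea of ‘refining a set of prompts’ concrete, here is a minimal Python sketch of what a reusable Theory of Change prompt might look like. The template wording, field names and helper function are purely illustrative assumptions of ours, not an NPC tool:

```python
# A hypothetical helper that assembles a structured first-draft prompt
# for a Theory of Change discussion. The template text and the fields
# (mission, activities) are illustrative, not an established format.

TEMPLATE = (
    "Act as a charity strategy facilitator. Our charity's mission is: {mission}. "
    "Our main activities are: {activities}. "
    "Draft a first-pass Theory of Change, listing inputs, activities, "
    "outputs, outcomes and long-term impact as bullet points."
)

def build_toc_prompt(mission: str, activities: list) -> str:
    """Fill the template so the same structure can be reused and refined."""
    return TEMPLATE.format(mission=mission, activities=", ".join(activities))

prompt = build_toc_prompt(
    mission="reducing food insecurity in rural communities",
    activities=["food bank network", "cooking skills workshops"],
)
print(prompt)
```

Keeping the prompt in a shared template like this means staff can iterate on the wording together, rather than each person improvising a different prompt; the output should still be treated as a first draft for human discussion.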

3. Research and understand the risks to your organisation 

There are inherent risks to using these tools due to the way they are built, function and continue to learn. Below we highlight some of the key risks (this is by no means an exhaustive list, and you will need to give careful thought to the specific risks for your organisation):

  • Data security – tools like ChatGPT and Bard learn and improve from the data inputted into them. Exactly how this is done and how much user data is being used to train the models is unclear. This means you cannot be sure data or details you use in a prompt won’t be included in a response ChatGPT gives to another user asking about the same topic in the future. It is therefore vital to understand that data inputted is not secure – you must not enter any confidential or proprietary data into one of these tools. There are some steps you can take to reduce this risk, or you could consider using an API, but this requires expert knowledge and resources. 
  • Misinformation – these tools lie. Worse than that, they ‘hallucinate’, which means that they lie very well. As a rule of thumb, don’t believe anything ChatGPT or Bard say without cross-checking. They are great for idea generation, but they are not new search engines. Additionally, the information they provide is only as up to date as the data they were trained on – in the case of ChatGPT, this means that it doesn’t know anything that happened after 2021. 
  • Bias – these models are trained on huge datasets, including a vast proportion of the internet. Unfortunately, the internet reflects the biases and prejudices that exist in our society. As a result, ChatGPT reflects these biases back to us in its responses (although parameters are built in to try to regulate them). You need to be aware of this and be active in looking out for it. 
  • Legal and regulatory risks – this is an evolving area, so it’s important to keep up to date with the regulatory, and particularly GDPR, landscape. Other steps include checking in with your insurance provider to ensure that your policy covers using these tools. 
  • Copyright and plagiarism – all of the content produced by these tools is generated from an aggregate of the internet and the data they were trained on. Does publishing the generated content therefore amount to plagiarism? Who does the IP belong to if ChatGPT produces a new idea? Answers to these questions are not yet clear, and it could be a bumpy road ahead. Trust and integrity are vital in the charity sector; if you decide to use AI-generated content, you need to be transparent about it, and about the decisions you have made, to avoid reputational damage. 
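On the data security point, one mitigation is using a provider’s API rather than the consumer interface: at the time of writing, OpenAI states that data sent via its API is not used to train its models by default, unlike the web version of ChatGPT. The sketch below is illustrative only – the `redact` helper is our own minimal example (it only catches email addresses), and you should verify the current data policy and endpoint details against OpenAI’s own documentation before relying on either:

```python
import json
import os
import re
import urllib.request

def redact(text: str) -> str:
    """Strip obvious personal identifiers (here, just email addresses)
    before sending anything to a third-party service. A real deployment
    would need a far more thorough redaction policy."""
    return re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[REDACTED EMAIL]", text)

def ask(prompt: str) -> str:
    """Call OpenAI's chat completions REST endpoint directly.
    Requires an OPENAI_API_KEY environment variable and network access."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=json.dumps({
            "model": "gpt-3.5-turbo",
            "messages": [{"role": "user", "content": redact(prompt)}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# Redaction runs locally, before anything leaves your organisation:
print(redact("Please summarise this email from jo@example.org"))
```

Routing every prompt through a local redaction step like this gives you a single place to enforce a data policy, but it is a technical safeguard, not a substitute for staff guidelines on what may be entered in the first place.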


After completing the above steps you may conclude that AI is not a useful tool for you at the moment. That is fine, but it’s important to remember that your service users may already be using this technology – technology with the potential to fundamentally alter the fabric of people’s lives and society as a whole. We have a duty to stay informed, if not involved, so that we can continue to respond to the changing needs of service users.

If after taking these steps you do decide to start implementing these tools, here are our key recommendations: 

  1. Develop an AI policy and set guidelines for use within your organisation 
  2. Involve your staff each step of the way 
  3. Keep your service users central to all decisions and consider the broader implications of increasing AI integration into society 

“It’s important to note that while AI can provide valuable insights and automation, it should complement human expertise rather than replace it. The human touch, empathy, and domain knowledge remain crucial in the charitable sector, and AI should be used as a tool to enhance decision-making and maximize the positive impact.” – ChatGPT 
