
Monitoring, evaluation, and learning for complex change

By Michael Moses, 26 March 2018

To tackle complex, systemic problems you need a plan: enter theory of change. But like any plan, a theory of change must be reviewed and adapted as new information comes to light, the situation changes or assumptions are questioned.

Michael Moses of Global Integrity reflects on how he and his colleagues help their partner organisations use monitoring, evaluation and learning approaches to see what’s working, what’s not, and adapt accordingly.

The challenges that the development sector tries to address are complex and systemic. And governance challenges, like the ones we tackle at Global Integrity, are particularly confounding.

The issues faced by the communities we work with are unique, so generalised solutions developed outside those communities, in other contexts, will be of limited use to those trying to solve local problems.

Instead, solutions that effectively address the issues local people care about tend to emerge over time, in particular places, and are led by local stakeholders. Success is contingent on these people engaging with, learning about, and shaping the dynamics of the complex political systems in their specific contexts.

Developing a theory of change (ToC) is an essential first step to this. A ToC helps make assumptions explicit, lays out an evidence-based hypothesis about how change is expected to occur, and provides a frame for reflection and course corrections throughout a project or program.

This latter point is key. As NPC’s recent report on systemic use of theories of change states: ‘The word “theory” in its name is no coincidence. Theories are tested and updated as new knowledge emerges.’

Effective use of a ToC requires collecting, and reflecting on, the data that’s needed for learning and adaptation.

The specific design of any approach to monitoring, evaluation and learning (MEL) will vary according to the theory of change it serves, and the context in which it’s used. But three general principles are worth keeping in mind when pursuing change in complex systems.

Emphasise (local) action

Monitoring data and evidence must be useful to local stakeholders. Whether it helps them interrogate their assumptions, track progress, or consider how to adapt in response to emergent information, the key is understanding opportunities for impact. Reporting to donors is a secondary priority.

For example, in Global Integrity’s recent Learning to Make All Voices Count project, we worked with civil society organisations in five countries to explore whether the progress anticipated in their theories of change was unfolding as expected.

When it wasn’t, we helped our partners use evidence to adapt their strategies.

In Tanzania, for example, our partners initially thought that, if they provided community members with information about a national open government policy, citizens would respond by trying to hold local officials to account for delivering on those policy commitments.

Initial monitoring data, perhaps unsurprisingly, did not bear out this assumption. Instead, local power dynamics, rather than access to information about government policy, were found to be the issue. So our partners adapted their theory of change, and set to work helping young people and women mobilise into ‘people’s committees’ to reshape those dynamics. See this case study for more.

Support participation

Good monitoring, evaluation and learning frameworks are not developed solely by MEL staff, or by people sitting in offices in places that are geographically or culturally far away. Rather, they are co-created and used in partnership with local stakeholders and beneficiaries, with their perspectives, priorities, and interests baked into the design and application.

For example, our colleagues in the Philippines co-created their MEL framework with regional universities and regional civil society organisations. In doing so, they learned that local partners were most interested in improving their capacity to understand and use district-level budget data. So determining whether and how the project was helping them do so became a priority.

As a result, our partners realised that some of the training tools they had initially used weren’t having the intended effects. They made course corrections to the assumptions and activities underpinning that aspect of their theory of change. More details are available in this case study.

Embrace iteration

This means that data is gathered regularly, in as close to real time as is feasible, rather than only at the beginning, middle, and end of a project. Regular data collection becomes the fuel for regular reflection and adaptation.

In Kenya, our partners recognised that their work in two counties was subject to a number of potential risks, from drought to ethnic conflict. So they made sure to regularly revisit, and analyse, those potential risks. And when conflict did break out, our partners were ready. They quickly identified the danger, and worked with local stakeholders to temporarily replace planned in-person community meetings with virtual discussions over WhatsApp.

This key change not only kept project participants safe until the violence died down, but also helped keep the project on track amid challenging circumstances. More information is available here.

Ingraining these principles of action, participation, and iteration into your monitoring, evaluation and learning isn’t easy. It requires time, resources, and buy-in—from donors, implementers, and partners.

But it can pay off! It helps organisations unlock the usefulness of theories of change and, over time, learn and adapt their way to successfully addressing the complex, systemic challenges that matter to citizens in countries and communities across the world.
