By Organisation Development Support staff:
Zdena Middernacht – Senior Research Consultant
Andres Narros Lluch – Senior MERL Consultant
Wouter de Iongh – Founder and OD Specialist
This blog on decolonising monitoring and evaluation is the second in our series on decolonising consultancy. For the introductory blog written by Kate Newman and Paul Knipe, see here. For all the outputs from this theme, including a recording of our webinar on 21 September 2023, visit the relevant project page.
The need for alternative practices
International consultancy work in human rights, sustainable development, or humanitarian responses can either perpetuate coloniality, or advocate for and practice decoloniality. To move to a place where consultants push for decoloniality, it is important that we do more than merely use the correct language. Joining the chorus without action leads to concepts becoming depoliticised and captured by elites. Rather, we need to reflect on how our practice perpetuates coloniality, and what decolonising consultancy means in practice – and then to change that practice accordingly.
In this article we focus on one aspect of organisational development often referred to as M&E (or MEL, MERL, MEAL, etc.): Monitoring and Evaluation. While there are many problematic principles underlying approaches to M&E (and to research and organisational development more generally), here we analyse two fundamental ones and offer thoughts on what alternatives could look like.
Framing evaluation research from the perspective of participants
Going into the research involved in M&E, we need to acknowledge the colonial roots of social research as such. These roots are characterised, among other things, by the distinction between the researched (the ‘subjects’, who share their experiences or are observed, and who are not regarded as having the capacity to interpret their own experiences) and the researcher (the ‘objective’ knower, who analyses, interprets from their own perspective, and produces knowledge).
Even if social science research is trying to move away from this kind of thinking, its traces remain visible in our research logic and methodologies. For instance, mainstream evaluation follows a logic that usually begins with document review, and then frames the research from there. Yet official programme documents tend to tell only a partial story, and framing the research solely on their basis means that narratives that do not fit neatly into that story are suppressed. We call these ‘hidden transcripts’.
A practical way to move beyond this is to reverse methodologies: to recognise the credibility of engaging the voices of those who are often ‘the researched’ at the very beginning, and to frame the research from their perspective. Of course, doing this within the scope of an evaluation has its limitations, since in many cases it is too late by the time the consultant shows up. The research has already been partly framed through the Terms of Reference (ToR), which often outline the research questions, and the programme design restricts what can be evaluated.
However, there is still room for the evaluator to validate the research questions and reflect on the programme design. We typically do this through inception briefings: briefings conducted with research participants at the very beginning of the evaluation (with a strong focus on research participants who are the final users of the programme, also commonly known as rights holders). These are an opportunity to check the relevance of the research questions before finalising the research methodology. This process has sometimes revealed that the proposed ToR is neither relevant nor welcomed by research participants, and that their needs lie elsewhere. With flexible funders, it has been possible in such cases to adapt the ToR to make it relevant.
From traditional M&E frameworks to a learning approach
Monitoring and evaluation is usually required by donors as a condition for funding, and often functions as a means of control and accountability. In this traditional type of M&E, the evaluation is based on rigid, pre-defined frameworks that do not allow useful learning from the programme to emerge. Certainly, they do not focus on useful learning for the end user. They tend to be based on imposed worldviews that are often disconnected from the socio-economic, political, historical and cultural realities of the contexts in which programmes are implemented and evaluated. This is not only colonial – it is also bad research, because what can be learned is deprioritised in favour of what a funder thinks it wants to learn.
From a decolonial perspective it is encouraging to see that several organisations, including donors, are starting to understand the importance of shifting the power in M&E. Practically, this means moving away from rigid M&E frameworks towards a learning practice that is embedded within a community’s ways of being and an organisation’s ways of working. Here, learning becomes part of existing norms and processes, and lessons are captured in those dimensions. This is different from the status quo of M&E as an event that is often externally facilitated. It also means that such a process is driven by grantees and framed from the perspective of their context.
The shift of power in M&E should go much deeper than this. But already, the quality of the reflections, findings and learnings that emerge when the shift of power is made, even at this level, can feed into quality programmes. These programmes have a better chance of achieving the social, economic and climate justice that tends to be their goal. We call this ‘closing the fracture’ between the stated narratives of organisations and their actual practice.
While these are humble methodological shifts, a growing collection of them, implemented consistently at scale, with continuous collective reflection and refinement, can go a long way towards decolonising consultancy. For consultants to achieve this, we need to break free of the boundaries set around us by competition and the symbolic North–South divide. We must find ways to collectively deconstruct and reconstruct. A collective effort to push alternative methods and approaches into the mainstream will see us decolonising consultancy carefully, step by step, and will give back to us the opportunity to contribute to impact with creativity, fun and honesty.
For more material on decolonising monitoring and evaluation, see our project page on shifting the power through monitoring, evaluation, and learning (MEL).