By Anne Garbutt, INTRAC Fellow.
Between 2014 and 2016 INTRAC managed the monitoring and review component of a pilot project on beneficiary feedback mechanisms. The pilot was led by World Vision UK in partnership with INTRAC, Social Impact Lab and CDA Collaborative, and was funded by the UK Department for International Development. The pilot explored what makes a beneficiary feedback system effective, and whether it improves accountability or the delivery of development programmes. The mechanisms were attached to seven maternal and child health projects in marginalised contexts in Africa and South Asia.
Findings from the project have recently been shared at the annual conferences of the UK Evaluation Society and the International Society for Third Sector Research, and will be published shortly. However, throughout the pilot INTRAC’s M&E team constantly found itself wondering whether a beneficiary feedback mechanism is any different to participatory monitoring and evaluation (PM&E).
During the 1990s we were all captivated by Robert Chambers’ work around putting the last first, which advocated community/beneficiary participation in all development processes, including monitoring and evaluation. During that period many of us were reluctant to provide a single definition or methodology of PM&E, but agreed that it encompassed a wide range of tools and approaches sharing common values such as joint planning and decision making, shared learning, mutual respect, empowerment and joint ownership.
Does this differ so much from the principles that underpin a beneficiary feedback mechanism? In our research we frequently observed project staff making comments like “the beneficiary feedback mechanism provided data that standard M&E systems do not.”
The questions this raised for us were: Why not? What has happened to participatory monitoring and evaluation?
In a 2008 paper I wrote with Jerry Adams, we observed that: “In very complex social development programmes there is often no concept of the role and purpose of taking a participatory and empowering approach to monitoring that includes the people who are classified as ‘beneficiaries.’” It was clear that too often M&E approaches didn’t stand up to scrutiny when it came to including ‘beneficiaries’. Beneficiary feedback mechanisms seem to be plugging a gap between the theory and the practice of participation in M&E.
INTRAC has encouraged participatory M&E for many years. But have we done enough, when clearly a gap remains between theory and practice?
Our concern is that often, when programmes claim to use participatory M&E, it is at best a secondary response to a donor requirement, and at worst not there at all. If participatory monitoring and evaluation were employed effectively by development organisations at all six stages of the project cycle, from problem identification to sharing the learning for future projects, then we should not need to be developing beneficiary feedback mechanisms at all.
If we accept that participation requires joint ownership and empowerment, then all stakeholders, including the service users, should have equal ownership of the project and not need to provide feedback to other stakeholders for them to respond to!
Since the early days of participatory M&E, a growing body of innovative quantitative approaches to planning, monitoring and evaluation has emerged, including counting, mapping, valuing and scoring. While these approaches have proved effective in upward accountability to donors (reflecting the ever-increasing demand to see results), this has perhaps come at the expense of downward accountability to beneficiaries. Possibly the reluctance to provide a single definition or methodology for PM&E has also contributed to this methodological drift.
One of the key findings from the project was that most feedback loops were closed at project level (e.g. in Somaliland, women gave feedback about lack of shade at the clinic, which was resolved through decisions made by staff at project level, and the response was communicated back to the community). However, we observed limited use of feedback higher up the aid delivery chain. The content of beneficiary feedback did not, within the timeframe of the pilot, inform upward accountability to the donor, a key aspect of the theory of change underpinning the pilot. One notable exception was a pilot in India that used the feedback mechanisms to generate the data for an indicator within the project logframe.
Should participatory monitoring and evaluation methods such as beneficiary feedback influence policy processes of donors? All such processes are only of value if all stakeholders are prepared to take action. If the more powerful actors in the aid chain do not see any value in listening to what they call beneficiaries and they do not see them as equal partners in development, then we have learned nothing new. We are paying lip service to participation under a different name, but not approaching participation as being about joint ownership, shared responsibility, and decision making.
The real concern we should be raising is that the existing M&E systems of most of the pilot projects didn’t sufficiently include beneficiary participation already; in most cases we were introducing something new with the pilot. This raises the question of how participatory the existing M&E systems of the projects really were. However, the experience of the pilot seemed to be encouraging at least some of the partners to incorporate beneficiary feedback more fully into their M&E systems. If that happens, then we could be seeing a resurgence of real participatory monitoring and evaluation.
For more about the beneficiary feedback mechanisms pilot, visit the project website: www.feedbackmechanisms.org
Garbutt, A. and Adams, J. (2008). Participatory Monitoring and Evaluation in Practice: Lessons learnt from Central Asia. INTRAC Praxis Paper No. 21.