By Alison Napier.

This year’s UK Evaluation Society (UKES) conference was all about the use and usability of evaluations – surely the most important stage in an evaluation process: however robust the methodology and however thorough the report, if the findings are not used by the commissioning organisation, then the process has been (mostly) a waste of time, money and effort.

The first keynote presentation, by Michael Anderson of the Center for Global Development, asked how to give voice to evaluations “in an era of slogans and Snapchat”. While he suggested that this could be considered a ‘golden age of evaluation’, with high demand for evaluations and growing sophistication about multiple methods, there are still many challenges to their use and usability. Among these are publication bias – publishing only positive results and missing the opportunity to learn from null or negative ones – and inappropriate design, where methods are not matched to the purpose of the evaluation. Anderson asked how, in this ‘post-truth’ age of mistrust of experts, where policy is not necessarily based on evidence, evaluators can preserve space for complexity and layered lessons. Aside from packaging and sharing information from evaluations in new, more engaging and visually appealing ways, ensuring ‘authenticity’ and using social media, he suggested that evaluators need to produce easily digestible evidence and lessons, ready at the right time to inform policy decision-makers.

INTRAC was part of a session on utilisation-focussed evaluation (UFE), along with TripleLine, Bond and Islamic Relief. As Hamayoon Sultan (Islamic Relief) asked, ‘Why is this even a “thing”? Shouldn’t all evaluations be utilisation focussed?’ The short answer: yes, most of them should be, but we all know that they are not! André Clarke of Bond referred to Bond’s 2015 Transparency Review, which found that 65% of NGOs do not publish any evaluations or results online. Key barriers include a perception amongst commissioners that evaluations are insufficiently rigorous, and under-resourcing due to pressure to maximise spending on service delivery. Key enablers of sharing include an organisational learning culture, anticipated reputational benefit and a funding requirement. Hamayoon shared his experiences over the past year of promoting an internal organisational culture that values evaluation and actively encourages and promotes learning from evaluations.

Utilisation-focussed evaluation, developed by Michael Quinn Patton[1], puts evaluation use at the heart of the evaluation process. It is based on the principle that an evaluation should be judged on its usefulness to its intended users. Evaluations should therefore be planned and conducted in ways that enhance the likely utilisation of both the findings and the process itself to inform decisions and improve performance.

UFE has two essential elements. Firstly, the primary intended users of the evaluation must be clearly identified and personally engaged at the beginning of the evaluation process so that their primary intended uses can be established. Secondly, evaluators must ensure that these intended uses by the primary intended users guide all other decisions made about the evaluation process.

This may seem obvious, but in practice it has a number of implications, as INTRAC and TripleLine found in attempting to apply it in two PPA-focussed evaluations. The challenges all involved increased time and investment in:

  • the evaluation inception phase, in order to understand organisational culture and politics, identify and engage users, understand users’ needs, and get buy-in from users and decision makers;
  • identifying opportunities and tailoring the timing of evaluation inputs and outputs to internal decision-making processes (e.g. strategic review processes) to maximise engagement and enhance usability;
  • analysis and validation of evaluation findings, and co-developing recommendations with evaluation users.

This obviously reduced the time that could be spent on other aspects of the evaluation process, including data collection. Key questions to ask in deciding whether to use the approach include:

  • Is the commissioning organisation ready to learn?
  • Is there someone in the organisation who has the time and remit to support navigation of internal politics?
  • Does the timing of the evaluation weave into the organisation’s processes for learning, and if so, how?
  • Are evaluators open to close engagement with the commissioning organisation and users, and to communicating findings sensitively?
  • Is there flexibility and budget for evaluators to engage with stakeholders/users at different points?
  • Do organisations/users have time to engage with the process?

Like any other evaluation approach, UFE may not always be appropriate, but I would say that, for commissioners and evaluators alike, it is worth starting any evaluation design process by considering how to integrate the core principles of UFE.

Other methods to produce easily digestible evidence and lessons were explored throughout the sessions. Consultants from OPM Group’s Dialogue by Design delivered a session titled ‘Beyond Burden: Engaging beneficiaries as equal partners in evaluation’. They used the example of the ‘Outcomes Star’ to challenge the assumption that the ‘burden’ of evaluation on beneficiaries should always be minimised. The evaluators were commissioned at the design stage of an innovative Early Intervention Programme with Essex County Council and were able to work with practitioners – and beneficiaries – to tailor the tool and then embed its use in ongoing work with children and parents. After initial scepticism, practitioners are confidently using the tool, and families and children involved in the programme have given positive feedback about how the stars help them to reflect on their journey.

Evaluation governance matters! Joe Dugget of SQW Ltd talked about the ‘tricky triangle’ – the relationship between the evaluation commissioner, the evaluator and the ‘evaluated entity’/‘evaluand’; and the ‘tricky rectangle’ – all of the above plus the ‘decision makers’. While the evaluation ToR is important, evaluation steering groups do not necessarily speak with one voice and there is plenty of potential for conflict. In SQW’s experience, an active chair, a formal scoping or inception report (that confirms the evaluation purpose and approach) and a formal engagement process, particularly during analysis and validation of findings, can help to balance and manage different perspectives and expectations within evaluation steering groups.

One of the most interesting sessions for me was ‘Staying one step ahead: Using and communicating evaluation results from DFID’s Girls’ Education Challenge (GEC)’, presented by Simon Griffiths (Coffey International) and James Bonner (DFID). Some of you may remember that INTRAC led on the baseline for one of the GEC implementing agencies back in 2013. This was our first – and last – foray into using a randomised controlled trial approach, and I was interested to hear whether the commissioner and evaluation manager considered it had been worth it. Both freely admitted that the process has been challenging. Among the (relatively few) positives of the experimental/quasi-experimental evaluation approach were that the implementing agencies have built their capacity in the use of rigorous experimental and quasi-experimental methods, and that this has spilled over into general improvements in M&E capacity.

However, they noted that while the quantitative data was generally strong, the quality of the qualitative data remained relatively weak. This was a problem for understanding how and why the effects observed in the quantitative data happened, for understanding effects on different sub-groups (such as gender differences), and for understanding the influence of context. While there was better reporting of lessons learned, it remains difficult to generalise beyond specific contexts. A key learning point was that deliverables from such large-scale and complex evaluation processes (at baseline, midline and endline) inevitably lag behind project and programme delivery, i.e. they are not appropriate vehicles for driving adaptation and improvements in implementation. At the portfolio level, the commissioners found that “smaller intermediate analytical products”, timed to key future policy and programming decision points, were more useful learning tools.

To conclude, this year’s UKES conference was again full of great discussions, and it was good to see renewed interest in the debate about what evaluators can and should do to promote the use of evaluation findings among stakeholders, bridging the gap between evidence generation and decision-making.


Want to know more? Read André Clarke’s and Hamayoon Sultan’s blogs on the UKES Conference on Bond’s website.


[1] Michael Quinn Patton (2008) Utilization-Focused Evaluation, Fourth Edition. Saint Paul, MN: Sage Publications.


 
