By Dan James.

Ahead of an upcoming training on working with evidence in Laos, a colleague asked me to outline some common pitfalls in working with evidence through the programme cycle, and how to avoid them.

1. Not identifying a need

Imagine an encounter between a freshly minted graduate in development studies and their more seasoned field manager… “Look, I read some great evidence that deworming pupils leads to better performance at school”… “Well that’s great, Kale, but the problem we have right now is getting the teachers to attend school, not the pupils”.

There is a lot of evidence out there, but it is only useful if it is relevant to the problems you face in your work. The first step in working with evidence is understanding what your needs are. Before you start any evidence-gathering exercise (desk-based or field-based), first ask yourself what decisions it will inform, and what type of information, data or evidence might inform them. Second, go and ask the people who will take those decisions.

2. Not looking on the shelf

So you’ve identified a need for evidence, now what?

As Donald Rumsfeld once proclaimed, there are ‘known unknowns’ and ‘unknown unknowns’: things which we know we don’t know, and things which we don’t know that we don’t know. We undertake primary research to get a better understanding of our known unknowns (“did we achieve the outcomes of our project?”) and sometimes the unknown unknowns (“what went wrong? what were the unintended impacts?”).

But when it comes to planning our monitoring and evaluation, it can be easy to ignore what is already known. Resources for primary research in the field are scarce, so it is important that we focus our efforts on the real unknowns rather than reproducing what is already on the shelf.

3. Letting resources dictate methods

If you are sure that you need to collect new evidence, the next question is how?

A common mistake is to let resources dictate the methodology. Of course we are all faced with finite resources, and at INTRAC we frequently see methods being specified that don’t necessarily provide the level of depth or rigour that organisations really need. Sometimes this work is still useful, but at other times the key questions remain unanswered because there were insufficient resources to answer them well enough.

Less frequently we see the opposite: spending more than is necessary on research and evaluations. Sometimes this is due to pressure from donors for particular methodologies, at other times it is because the appropriateness of different methodologies is not well understood. This is exemplified by a question that frequently comes up at INTRAC’s Advanced Monitoring and Evaluation course: “should I do an RCT / use quantitative methods because they are more rigorous?”

INTRAC strongly believes that there is no overall “gold standard”: just different horses for different courses. The trick is understanding which methods will answer your evidence needs most efficiently. One way (and I stress this is one among many) to think about what kind of evidence might be required is to place your project or programme within an ‘innovation’ cycle. Are you at the very beginning, identifying problems and needs and exploring tentative solutions? Or are you looking to scale a programme that you have found to be working so far? The two imply very different requirements for rigour and for the quantity of existing evidence.

4. Mistaking evidence that something is good for ‘good evidence’

If you have gathered existing evidence or collected new data, the next step is to analyse it. A common mistake in the analysis phase relates to a phenomenon known in psychology as “confirmation bias”. When we are presented with evidence that confirms our initial hypothesis (or prejudice), we are more likely to accept it than evidence that goes against what we originally thought. The result is that we are programmed to take some evidence too seriously, and other evidence not seriously enough. It also means we may seek evidence from sources that are more likely to give us the answers we expect.

Scientific researchers often go to extreme lengths to avoid confirmation bias in their methodologies. However, a simpler approach is to pay more attention to the quality of evidence, thereby introducing some objective criteria and reducing the risk of cognitive shortcuts. Published standards of evidence help us be more objective in how seriously we take different pieces of evidence.

5. Putting evidence and research on the shelf

This is really the mirror of point 2. When we have gone to the effort of generating evidence, very often it is left on the shelf (or, even worse, in the heads of departed members of staff).

To reduce the risk of your carefully assembled results going unnoticed or being forgotten, first refer to point 1: ensure the evidence is useful and consult those who will use it during the design process. Next, try involving decision-makers in the evaluation or research along the way (approaches such as developmental or real-time evaluation do this). Finally, have a concrete plan for using and disseminating the results from the start.

In larger organisations, knowledge management systems may be needed to ensure evidence reaches (and is accessible to) those who need it.

Dan James is a Senior Research Consultant at INTRAC.

Contact the training team at training@intrac.org to learn more about our course on working with evidence in Laos.
