As portfolio managers of a developmental finance organisation, we often find ourselves completely exhausted after a round of tug-of-war during funding application assessments. How can we be certain that the projects we would like to see funded are worth our investment? What evidence is there that the model works? How is one project's approach better than another's? Last week, while attending a J-PAL Executive Education Course on Evaluating Social Programmes, we got to think more deeply about how we interrogate, compare, consider and randomise evaluations.
The purpose of the week-long course was to provide a thorough understanding of why, when and how to use randomisation in an impact evaluation. It gave us a taste of both fantasy and reality. As course participants, we stepped into research projects from around the world and assumed the roles of people we had never met, in order to feel and see the dedication and elbow grease required to understand a programme’s impact on a community. Projects ranged from evaluating the effectiveness of school monitoring systems in Madagascar to understanding the impact of a training programme for coffee farmers in Rwanda, and many more which you can find at http://www.povertyactionlab.org/evaluations. We were also thrown headfirst into real-world problem solving, as we worked in groups to map out real evaluations and take the practical steps required to evaluate a programme and to answer the million-dollar question – is this programme ready to go to scale?
As lecturers took us through questions of “why randomise?” and foundational concepts like theories of change, definitions of counterfactuals, standard deviations, omitted variable bias and power calculations, we also interacted with people from all walks of life, which created wonderful opportunities for learning through diversity on an intellectual and a human level. At the end of the week, we felt both encouraged and terrified as we headed back to our offices. “Could we do a randomised controlled trial on NSFAS students or pregnancy grants to inform policy? Mmm… exciting! But wait… will we now be overly sceptical of every project report we see? Eeek…” We know that policy makers and fund managers often need to make decisions without the luxury of all the evidence they might want. Yet, as evaluation becomes an increasingly common practice, we feel that we have a far better understanding of the practical limitations – such as time, cost and human resources – that arise when evaluation is applied to the non-profit sector. Our final take-home from the course was a point made by J-PAL Africa’s Policy Manager: research or evidence should not be seen as a “black box”. Rather, and this is particularly helpful for the work that we do, it should be seen as providing a theoretical overview in which contextual knowledge, administrative data and human interactions allow for more efficient use of evidence.
Renisha Patel and Fefekazi Mavuso, Portfolio Managers, DG Murray Trust