Evaluating Anti-Poverty Programs
The World Bank
"Governments, aid donors and the development community at large are increasingly asking for hard evidence on the impacts of public programs claiming to reduce poverty. Do we know if such interventions really work? How much impact do they have?" Prompted by these questions, Martin Ravallion uses this 74-page paper to critically examine what he calls "the archetypal evaluation problem": the difficulty that impact evaluation (or "counterfactual analysis") faces in convincingly attributing observed outcomes to the specific programme being evaluated.
Ravallion uses mathematical analysis to defend the claim that "To assess impact we need data on one or more outcome indicators. The choice of indicator will depend on the aims of the intervention....We will also need some way of inferring the counterfactual. This is inherently unobserved, since it is physically impossible to observe someone in two states of nature at the same time (participating in a program and not participating). Thus evaluation is essentially a problem of missing data. As we will see, there are many ways of filling in the missing data."
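The missing-data framing can be illustrated with a small simulation (a hypothetical sketch, not from the paper; all numbers are invented): each unit has two potential outcomes, one with the programme and one without, but we only ever observe one of them, so no individual impact is directly measurable.

```python
import random

random.seed(0)

# Hypothetical potential outcomes for 10,000 units:
# y0 = outcome without the programme, y1 = outcome with it.
n = 10_000
y0 = [random.gauss(100, 15) for _ in range(n)]
y1 = [y + 5 for y in y0]  # assumed true impact: +5 for everyone

# Each unit either participates or not; only one outcome is observed
# per unit -- this is the "missing data" in Ravallion's framing.
d = [random.random() < 0.5 for _ in range(n)]
observed = [y1[i] if d[i] else y0[i] for i in range(n)]

# The individual effect y1[i] - y0[i] is never observed for any unit,
# yet with random assignment the mean difference between groups
# recovers the average impact.
treated = [observed[i] for i in range(n) if d[i]]
control = [observed[i] for i in range(n) if not d[i]]
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(round(estimate, 1))  # close to the assumed true impact of 5
```

With non-random participation the same comparison of group means would be biased, which is what the non-experimental methods below try to correct.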
In this context, Ravallion reviews the methods available for the "ex-post counterfactual analysis" of anti-poverty programmes that are assigned exclusively to individuals, households, or locations. The discussion covers both experimental and non-experimental methods (including propensity-score matching, discontinuity designs, double and triple differences, and instrumental variables). These methodologies, Ravallion explains, concern the "'internal validity' of an evaluation: does the evaluation design plausibly allow us to obtain a reliable estimate of impact in the specific context?" He notes, however, that there are other concerns related to what can be learned from an evaluation: namely, how to apply the results from the evaluation in other settings and to draw lessons for development knowledge and future policy making ("external validity" concerns, which relate to both experimental and non-experimental evaluations). Ravallion stresses that "A key factor in program success is often adapting properly to the institutional and socio-economic context in which you have to work."
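As one illustration of the non-experimental toolkit Ravallion reviews, the double-difference (difference-in-differences) estimator compares the before/after change for participants with the change for a comparison group; a minimal arithmetic sketch with made-up numbers (not taken from the paper):

```python
# Hypothetical mean incomes before and after the programme
# (invented numbers, for illustration only).
participants_before, participants_after = 80.0, 95.0
comparison_before, comparison_after = 90.0, 98.0

# A single difference (after - before) for participants confounds the
# programme's impact with the common trend affecting everyone.
change_participants = participants_after - participants_before  # 15.0
change_comparison = comparison_after - comparison_before        # 8.0

# The double difference nets out the common trend, leaving an impact
# estimate -- valid only under the "parallel trends" assumption that
# both groups would have changed alike absent the programme.
double_difference = change_participants - change_comparison
print(double_difference)  # 7.0
```

A triple difference adds a further comparison (for example, across eligible and ineligible subgroups) to relax that assumption.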
Information and communication technologies (ICTs) are key to one of the methodologies Ravallion discusses in light of this context-sensitive approach: project monitoring databases. He argues that these tools "are an important, under-utilized, source of information....For example, the idea of combining spending maps with poverty maps for rapid assessments of the targeting performance of a decentralized anti-poverty program is a promising illustration of how, at modest cost, standard monitoring data can be made more useful for providing information on how the program is working and in a way that provides sufficiently rapid feedback to a project to allow corrections along the way (Ravallion, 2001)."
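The map-overlay idea can be sketched as a quick targeting check on monitoring data (a stylized illustration with invented district figures, not Ravallion's procedure): overlay spending per capita on local poverty rates and ask whether more money flows to poorer places.

```python
# Invented per-district monitoring data (hypothetical, for illustration):
# programme spending per capita alongside the local poverty rate.
districts = {
    "A": {"spending": 12.0, "poverty_rate": 0.40},
    "B": {"spending": 5.0,  "poverty_rate": 0.10},
    "C": {"spending": 9.0,  "poverty_rate": 0.35},
    "D": {"spending": 4.0,  "poverty_rate": 0.05},
}

# A crude targeting check: rank districts by spending and by poverty
# and see whether the orderings agree, i.e. spending tracks need.
by_spending = sorted(districts, key=lambda d: districts[d]["spending"], reverse=True)
by_poverty = sorted(districts, key=lambda d: districts[d]["poverty_rate"], reverse=True)
well_targeted = by_spending == by_poverty
print(by_spending, well_targeted)
```

Because it uses data the project already collects, a check like this can run repeatedly during implementation, which is the "rapid feedback" Ravallion highlights.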
Two main lessons emerge from Ravallion's analysis:
- Despite the claims of advocates, no single method dominates; rigorous, policy-relevant evaluations should be open-minded about methodology.
- Future efforts to draw more useful lessons from evaluations will call for more policy-relevant measures and deeper explanations of measured impacts than are possible from the classic ("black box") assessment of mean impact.
In conclusion, the author notes that, "In drawing useful lessons for anti-poverty policy, we need a richer set of impact parameters than has been traditional in evaluation practice, including distinguishing the impacts on gainers from losers at any given level of living. The choice of parameters to be estimated in an evaluation must ultimately depend on the policy question to be answered..."
The World Bank's PovertyNet Newsletter #80 (July 2005).