Judy Baker, a senior economist in the World Bank's Transport and Urban Development network, discussed a handbook she authored entitled Evaluating the Poverty Impact of Projects: A Handbook for Practitioners. The handbook seeks to provide World Bank managers and policy makers with tools and methodologies for analyzing the poverty impact of project work. The seminar was a forum on concepts and methods for impact evaluation, how these could be applied to urban development projects, key steps and related issues in implementation, and lessons learned from 15 case studies. The handbook also includes examples of the practical inputs needed to plan any impact evaluation: sample terms of reference, a budget, impact indicators, a log frame, and a matrix of analysis.
The review work began when Baker was in the World Bank's Latin America region, where poverty impact evaluations had for the most part not been conducted. She attributed this to a lack of resources, government hesitancy, and a lack of knowledge about how to do them. The first step of the review process is monitoring, to see whether the project is being implemented as planned. This is followed by a process evaluation to determine how the program operates and to identify problems with service delivery. Then a cost-benefit analysis should be done. The final step is an impact evaluation to see whether the program had the desired effects. Baker said it is also important to estimate the counterfactual: what would have happened if the project had never taken place. For this, a control group is needed for comparison with the project group. She added that poverty was defined very broadly for the purposes of the handbook, so questions could cover a range of issues including health, nutrition, and access to water.
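To make the counterfactual idea concrete, here is a minimal sketch (not from the handbook; the data and variable names are invented) of the simplest possible comparison: the difference in mean outcomes between the project group and a control group that stands in for the counterfactual.

```python
import numpy as np

# Hypothetical outcome data (e.g., household consumption) for the
# project group and for a control group that stands in for the
# counterfactual, i.e., what would have happened without the project.
project_group = np.array([112.0, 98.5, 105.2, 121.3, 99.8])
control_group = np.array([101.4, 95.0, 97.6, 108.9, 94.2])

# The simplest impact estimate: outcomes observed with the project
# minus the outcomes the control group suggests would have occurred
# in its absence.
impact = project_group.mean() - control_group.mean()
print(f"Estimated average impact: {impact:.2f}")
```

In practice the two groups must be comparable for this difference to be credible, which is exactly what the evaluation designs described below are meant to ensure.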
There are three designs for evaluation, she said: experimental, quasi-experimental, and qualitative methods. Experimental, or randomized, designs are fast and limit selection bias in the estimated counterfactual. Baker said this is hard to do, however, unless a project is being piloted in a specific area. Quasi-experimental designs, the most common method, try to match a control group to the treatment group from an existing survey. She described several different econometric models within the quasi-experimental design. Qualitative methods, by contrast, do not focus on the counterfactual but instead try to understand processes, behaviors, and conditions as perceived by the survey group. Baker said cost-benefit analysis, while not a formal evaluation design, should be included. Two additional methods are theory-based evaluations and a before-and-after approach. Baker believes the best evaluations mesh qualitative and quantitative methods.
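One econometric model commonly used within quasi-experimental designs is propensity score matching; whether it was among the specific models Baker described is an assumption here. The sketch below, using synthetic data, shows the basic mechanics: model each unit's probability of receiving the treatment, then match treated units to untreated units with similar scores.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic survey data: one covariate (baseline income), a treatment
# flag whose probability depends on income, and an outcome with a
# true treatment effect of 2.0 built in.
n = 200
income = rng.normal(50, 10, n)
treated = (rng.random(n) < 1 / (1 + np.exp(-(income - 50) / 10))).astype(int)
outcome = 2.0 * treated + 0.5 * income + rng.normal(0, 1, n)

# Step 1: estimate the propensity score, the probability of treatment
# given the covariate.
model = LogisticRegression().fit(income.reshape(-1, 1), treated)
scores = model.predict_proba(income.reshape(-1, 1))[:, 1]

# Step 2: match each treated unit to the untreated unit with the
# closest propensity score, then average the outcome differences.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
diffs = []
for i in t_idx:
    j = c_idx[np.argmin(np.abs(scores[c_idx] - scores[i]))]
    diffs.append(outcome[i] - outcome[j])

print(f"Matched estimate of average treatment effect: {np.mean(diffs):.2f}")
```

A randomized design avoids this modeling step entirely, since random assignment makes the two groups comparable by construction, which is why Baker noted that experimental designs limit selection bias.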
The first step in deciding how to do an impact evaluation is deciding whether one is appropriate at all. Objectives and evaluation questions should be clarified. Exploring data availability is another important determinant, followed by the actual design of the evaluation. Timing matters, so that evaluation results are available at critical junctures in the project. Implementation capacity should be assessed early in the process. Team composition is also important, as are who does the evaluation and where it is done. Important choices need to be made about developing and collecting data, and pilot testing should be done to see whether the chosen instruments work. The timing of data collection also influences the process. Finally, a process should be set up to manage and clean the data, which then needs to be analyzed. Reporting and dissemination come next. The last step is incorporating the results into project design, something Baker said is very tricky for political and analytical reasons.
Early planning leading to a well-organized evaluation is an important lesson, Baker said. Baseline data are not always available, so researchers may have to turn to randomization or to constructing a control group. She repeated her assertion that combining quantitative and qualitative methods is among the best practices. The average cost of the evaluations Baker looked at was about 0.6 percent of total project cost, or roughly $400,000. Half of that went to data collection, a quarter to consultants, and the rest to Bank staff costs and travel. Government resources paid for most of the evaluations, with occasional funding from donors. Strong political support is necessary not just to do the evaluations but also to ensure the results will be used.
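As a back-of-the-envelope check on those averages (the breakdown simply restates Baker's percentages; the implied project cost is derived from them, not quoted):

```python
# Average evaluation budget Baker cited: roughly $400,000,
# about 0.6 percent of total project cost.
evaluation_cost = 400_000
share_of_project = 0.006

# Derived, not quoted: the project size these averages imply.
implied_project_cost = evaluation_cost / share_of_project
print(f"Implied average project cost: ${implied_project_cost:,.0f}")  # ~$66.7M

# Half for data collection, a quarter for consultants, and the
# remainder for Bank staff costs and travel.
data_collection = 0.50 * evaluation_cost   # $200,000
consultants = 0.25 * evaluation_cost       # $100,000
staff_and_travel = evaluation_cost - data_collection - consultants  # $100,000
print(data_collection, consultants, staff_and_travel)
```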
Baker then discussed the evaluation of a social infrastructure project in Honduras, and a discussion period ensued.