Today is World Soil Day, so it’s an opportune time to discuss some of the work 3ie has been supporting through our Agricultural Innovation Evidence Programme, which is jointly funded by the Bill & Melinda Gates Foundation and the UK Department for International Development. In particular, an evaluation team studying an integrated soil fertility management intervention in Malawi is helping farmers get to know their soil. The programme under study, implemented by the Clinton Development Initiative, seeks to increase the use of improved planting and soil management techniques to boost agricultural productivity, while a parallel set of activities seeks to increase farmers’ market access. As part of the evaluation, the team has collected a wealth of detailed plot-level data on soil nutrient content in the study areas. As the evaluation comes to a close, members of the team are visiting villages in these areas to present these data to the farmers, along with recommendations for how they can improve soil fertility.
On a recent field visit, 3ie staff observed one of these soil presentations in Chatambalala village, in the Dowa district of Central Malawi. An agronomist on the research team, from Bunda College, assembled a group of farmers in the centre of the village, described the soil’s characteristics and deficiencies, and provided advice on which fertilisers to buy and how and when to apply them. First, the group discussed issues relevant to the village as a whole, with general recommendations on fertiliser application as well as management practices to improve fertility and water retention. Then, each farmer received information about his or her own soils. Importantly, the meetings concluded with discussions among participants about how useful they expected the soil information to be. The research team plans to use insights from these conversations to improve the design of future soil-testing projects with smallholder farmers.
Learning now and learning later
An interesting and unique feature of the soil data component of this evaluation is that the research team is providing these data to villages in both the treatment and control groups of the study. The team decided to do this for the simple reason that all the farmers in the study were very keen to learn about their soil. They specifically asked for the results of the soil tests during the study, so they could use these data both to tailor their production practices and to improve their soil quality. (At 3ie, we salute the farmers’ dedication to evidence-informed decision-making!)
This dialogue between participants and researchers about the participants’ needs illustrates an important dilemma about the role of development research and evaluation. For researchers, it’s easy to get caught up in methodological concerns, like maintaining the separation between treatment and control groups. Researchers also often like to think about long-term research prospects. In this case, if the team had provided soil data only to the treatment group, there may have been greater prospects for research in these areas of Malawi in the future. The difference between treatment and control in their access to soil data would have meant the team could come back some years later and study the effects of soil information in combination with the other intervention components. By providing the same information to the treatment and control groups, the research team has closed off at least some avenues for further research. Yet the researchers decided that it was important to provide the information to all farmers, given their interest in and demand for the data.
This isn’t to say there aren’t cases where it’s appropriate to maintain a research setup for a long time. Doing so may be one of our best ways of identifying the long-term effects of development programmes, which is evidence we desperately need but largely lack. Maintaining such a design is all the more justifiable the less sure we are about whether an intervention will actually have positive effects or may even be harmful; in clinical research, this is known as the principle of equipoise. But the risk is that we get so impressed with a nifty research design that we don’t give our full attention to ways we can use a research setup to have an immediate impact on participants.
Evidence for whom?
The experience of the team in Malawi also raises questions about whom the data belong to and whether it would have been ethically irresponsible to withhold the data from the soil’s owners, even temporarily, in the name of the larger public good of research. A core principle of impact evaluation is that the evidence we generate should be used to improve the lives of participants. The standard model for how this is supposed to happen can be circuitous: the evidence makes its way to someone with decision-making power over the fate of the programme being evaluated, and based on that evidence, the decision-maker allocates resources in a way that makes constituents better off. By all means, we should continue striving to make that model work as often and as effectively as possible. But we should also be on the lookout for more direct routes for research to benefit those who participate.
With inputs from the research team: Hope Michelson, Eric Kaima, Christopher Phiri, Wezi Mhango, and Annemie Maertens