
Randomized controlled trials (RCTs) are a powerful tool for understanding what works in development and anti-poverty programs. They provide insights to guide practitioners and policymakers in improving and scaling interventions. But for RCT findings to inform these decisions, they must be communicated clearly and systematically, which is easier said than done. Good reporting isn't just about sharing final findings; researchers should also share their process to make results actionable. This means going beyond the numbers to explain the context, what the findings mean, and how they apply in the real world. In this blog, we draw on lessons learned from a collaboration between 3ie and ideas42, including a review of RCTs evaluating behavioral designs in cash transfer programs.

We identified good practices that can improve evaluation design and reporting. These recommendations aim to help researchers and practitioners ensure their work leads to more informed, impactful decisions and, ultimately, better outcomes for the communities they serve.

How can researchers improve reporting practices to ensure their RCT findings are clear and actionable?

While RCT evaluations can provide a robust foundation for understanding the effects of interventions, researchers need to ensure consistency and transparency in design and reporting if their findings are to meaningfully inform other researchers and practitioners. When information is missing, it is harder to assess the quality of the work, which limits both the usefulness of program evaluations and their replicability. At the same time, to report critical information, programs must be designed to collect the relevant data during implementation.

Through a collaborative stocktaking project, we identified the following design and reporting aspects that can help increase the value of reports by reflecting well-thought-out and well-conducted evaluations: 

  • Clearly report dropout rates. This is important for RCTs, which are particularly prone to attrition problems (especially in longer-term evaluations). It is just as critical to report tests for differential attrition between treatment and control groups, to show whether the resulting data loss is balanced across groups; a minimal sketch of such a test appears after this list.
  • Consider blinding to prevent possible effects of participants knowing which group they're in (e.g., how this might change their behavior and ultimately affect study results). For example, recognizing that respondents might change their behavior if they knew their treatment assignment, ideas42 used brown envelopes in one of its RCTs to blind participants to group status.
  • Make detailed pre-analysis plans (PAPs) publicly available and reference their location in subsequent reports. PAPs are an established good practice for avoiding p-hacking (running analyses until something statistically significant turns up). During implementation, it is also common for initially registered plans to need amendment. Acknowledging and justifying deviations from the planned protocols supports robustness, transparency, and applicability.
  • Specify the randomization process, share implementation materials, and summarize relevant contextual considerations; together, these help demonstrate that participant characteristics are balanced between intervention and control groups.
  • Report the ethical review conducted for the evaluation to demonstrate that international ethical standards for human subjects research were followed. See Evans (2023) for an approach to improved and more transparent ethics in RCTs; the author provides guidelines for a three-stage assessment of RCTs in the planning, implementation, and write-up phases.
  • Provide more detail on how the study sample relates to the populations being considered for intervention scale-up. Circling back to this information and making a qualified recommendation in the final write-up helps when discussing the implications of the studies.
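
To make the attrition point concrete, here is a minimal sketch of a differential-attrition check run on simulated data. The dataset, column names, and sample sizes are all hypothetical, and the two-proportion z-test from statsmodels is just one common way to run this check; it is not drawn from any study in our review.

```python
# Minimal sketch of a differential-attrition check (hypothetical data and column names).
import numpy as np
import pandas as pd
from statsmodels.stats.proportion import proportions_ztest

# Simulated endline data: 1 = completed the endline survey, 0 = dropped out.
rng = np.random.default_rng(42)
df = pd.DataFrame({"arm": rng.choice(["treatment", "control"], size=2000)})
df["completed_endline"] = rng.binomial(
    1, np.where(df["arm"] == "treatment", 0.88, 0.85)
)

# Overall attrition rate, to report alongside the main results.
overall_attrition = 1 - df["completed_endline"].mean()

# Differential attrition: compare dropout counts between arms.
dropouts = df.groupby("arm")["completed_endline"].apply(lambda s: (s == 0).sum())
n_per_arm = df["arm"].value_counts()
z_stat, p_value = proportions_ztest(
    count=[dropouts["treatment"], dropouts["control"]],
    nobs=[n_per_arm["treatment"], n_per_arm["control"]],
)

print(f"Overall attrition: {overall_attrition:.1%}")
print(f"Differential attrition: z = {z_stat:.2f}, p = {p_value:.3f}")
```

Reporting both numbers, the overall rate and the between-arm test, lets readers judge whether data loss threatens the comparability of the groups.
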
Cost analyses in scaling interventions

To ensure the most efficient use of resources, the cost and cost-effectiveness of interventions are as important as their impact. A systematic cost analysis should be an integral part of any evaluation, especially when decisions about scaling interventions are on the line. By including transparent and thorough costing data, researchers and practitioners can better assess the feasibility and value of scaling an intervention. Key tips for successfully including costing analyses include: 

  • Plan for costing early: Before implementation begins, identify a clear and systematic method for conducting the associated cost analysis. Whether using cost-effectiveness analysis or another approach, align on a framework that will provide consistent, actionable insights. See 3ie’s approach to costing for more details.
  • Account for scale differences: The costs and effects observed during a trial may differ significantly when interventions are scaled. Be sure to include adjustments that reflect how costs, such as staff training, infrastructure, or delivery logistics, might change at scale (a stylized sketch follows this list). This foresight ensures that calculations used for scaling decisions are realistic.
  • Compare alternatives: Where possible, compare the intervention’s cost-effectiveness to other available options. Understanding whether the intervention offers better value than existing alternatives strengthens the case for scaling—or highlights where improvements might be needed.
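
As a stylized illustration of the scale-adjustment point above, the sketch below compares cost per unit of effect at trial scale and at a hypothetical larger scale. Every figure is an invented placeholder, not a number from the studies we reviewed; the point is simply that one-off design costs spread over more participants while per-person delivery costs and effect sizes may shift.

```python
# Stylized cost-effectiveness comparison; all figures are invented placeholders.

def cost_per_unit_effect(total_cost: float, participants: int, effect_per_person: float) -> float:
    """Cost per unit of outcome gained across all participants."""
    return total_cost / (participants * effect_per_person)

# Trial-stage figures (hypothetical).
trial_ce = cost_per_unit_effect(total_cost=50_000, participants=1_000, effect_per_person=0.12)

# At scale: one-off design costs are spread over many more participants, but
# per-person delivery may cost more (e.g., harder-to-reach areas) and effects
# may attenuate outside the trial setting.
fixed_cost = 30_000                    # design and training materials, incurred once
delivery_cost_per_person = 20 * 1.25   # trial unit cost plus an assumed 25% premium
scale_participants = 100_000
scaled_cost = fixed_cost + delivery_cost_per_person * scale_participants
scale_ce = cost_per_unit_effect(scaled_cost, scale_participants, effect_per_person=0.10)

print(f"Trial cost per unit of effect:    {trial_ce:,.0f}")
print(f"At-scale cost per unit of effect: {scale_ce:,.0f}")
```

Putting these figures side by side with the same calculation for alternative interventions is what the "compare alternatives" point calls for.
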
Call to action: Using RCTs and cost analyses to inform better policy and practice

RCTs and associated cost analyses are powerful tools for informing program design and ensuring anti-poverty initiatives optimize their limited resources for maximum impact. However, their true potential can only be realized when findings are reported clearly, systematically, and transparently.

The best practices outlined in this blog—such as addressing participant attrition, documenting ethical considerations, sharing pre-analysis plans, and providing detailed cost analyses—aim to help researchers and practitioners produce reports that are more credible, actionable, and reliable, and findings that are easier to scale. Embracing these suggestions will increase the potential of evaluations to inform policy and practice effectively.

The authors would like to acknowledge and thank the co-authors of the evidence stocktaking project for their support on this blog: Shannon Shisler, Constanza Gonzalez Parrao, and Daniel Handel from 3ie, and Ariadna Vargas and Wen Wen Teh from ideas42.
