Five lessons learned from our Transparent, Reproducible, and Ethical Evidence (TREE) reviews

We have been consolidating our efforts to develop stronger systems for producing transparent, reproducible, and ethical evidence (TREE). We have emphasized asking ourselves the right questions at the right times, even when there are no easy answers. We’ve examined very specific questions such as: Does a state of scarcity or equipoise make it ethical to withhold an intervention from a control group? When and how should we pay study participants? How should we assess the standard of care and best proven attainable (BPA) policy as we examine the ethics of interventions we study? There are many more such issues that we continue to explore.

To support our staff and partners, 3ie is refining a process for considering the ethical questions raised throughout the evaluation life cycle and for documenting risk assessment and management strategies. We believe our TREE Review Framework complements the necessary work of Institutional Review Boards (IRBs) while ensuring we do not outsource our ethical judgment to them. Its objectives within 3ie are to: (i) establish ethical standards; (ii) better integrate TREE best practices into the workflow; and (iii) establish timely, independent oversight of the risks study teams face in producing credible, meaningful, ethical evidence.

Our work to pilot test and implement this process has been underway since December 2021 with four 3ie-managed evaluations and one external client (as discussed in this 3ie Evidence Dialogues webinar held during gLOCAL Evaluation Week 2022). Across these five TREE Reviews, we have engaged with more than 15 study team members working on evaluations in four countries across three continents. Two studies conducted the TREE Review just as baseline data collection began, one before midline data collection, and two during the initial evaluation design planning stage. Here are five key initial lessons from this diverse experience:

  1. Study teams consider many ethical issues throughout the evaluation life cycle, but there is not always clear and consistent documentation of their assessments and mitigation strategies. For example, several teams in this sample decided to conduct face-to-face interviews during the ongoing COVID-19 pandemic. The TREE Review raised questions about the risk-benefit ratio of conducting data collection and about the mitigation strategies in place to protect the health, safety, and welfare of staff, participants, and bystanders. Although teams had put significant time and effort into thinking through these issues, there was no strong paper trail documenting their decisions and strategies. The TREE Review highlighted this need and created space to improve documentation closer to the point of decision-making.
     
  2. The sooner TREE practices are considered in evaluation planning, the better. At first glance, it can seem too early in the process to ask some of these questions. But for one project where the TREE Review was conducted at the evaluation design planning stage, several issues came up that informed adjustments to the study design. For example, the study initially included several secondary questions about how government officials perceived the intervention's implementation and its partners, as well as collection of sensitive financial data. Through the TREE Review process, it was determined early on that it would be difficult to protect respondents' confidentiality given the strong risk of re-identification from the small sample and specific indirect identifiers. Because these questions were not a priority for the evaluation and could introduce unnecessary risk to participants, the team removed them from the study design and dropped the corresponding items from the questionnaire.
     
  3. Reflecting on the composition, roles, and responsibilities of the evaluation and implementation teams is an important part of examining power dynamics and of using transparency to maximize stakeholders' constructive influence, and minimize their destructive influence, on evaluation design and reporting. For the five studies in this sample, the intervention and the evaluation have the same funder. For at least two studies, the TREE Review flagged a need for more clarity on the roles, responsibilities, and institutional arrangements that govern evaluation design and implementation. Given the potential for actual or perceived influence over the evaluation, which can lead to highlighting positive results and downplaying null or negative ones, the TREE Review placed additional emphasis on the need to pre-specify analysis plans and to rely on agreed, standardized reporting requirements that align with that pre-specification. The TREE Review offered an opportunity to identify these potential influences and ensure transparency is used as a tool for mitigating risks of bias.
     
  4. Understanding roles and responsibilities supports defining who is accountable for what should risks arise. Any given evaluation involves a complex set of actors: junior and senior staff on the evaluation team, data collection firms, implementation partners, policymakers, funders, and other stakeholders. In this complex environment, it is sometimes unclear who is accountable for what if risks arise for staff, participants, and/or bystanders, whether those risks stem from the intervention or from the study itself. The TREE Review helped flag this issue to allow for proactive, rather than reactive and defensive, risk management. While ethical concerns about data collection are often deflected as the data collection firm's responsibility, and ethical concerns about the intervention as the implementation partner's responsibility, this is not always fully appropriate or responsible. Recognizing the role the evaluator plays in assessing and understanding these risks, and in defining who is accountable for what, became an important part of the TREE Review.
     
  5. Conducting a TREE Review may support more meaningful engagement with IRB review. Several studies had already completed IRB review of their protocols and informed consent. Issues raised after IRB review but prior to data collection included the clarity and completeness of the informed consent, the meaningfulness of the consent and assent process (particularly when data collection required engagement with minors), and the significant burden of the survey (more than 90 minutes per household) without compensation. Ideally, these issues would have been raised with the IRB, particularly a local IRB that understands and represents the needs and interests of the participant populations. As we work not to ‘outsource’ our ethical judgments to IRBs, the TREE Review process may support and empower study team members to identify the critical questions and issues they need the IRB to weigh in on, creating more proactive engagement with the IRB on these issues. While some research and evaluation teams may already have this relationship with IRBs, not all do, and we see this as an opportunity to strengthen how we engage with IRBs.

We welcome the community’s input and feedback on this process and tool. Please reach out to tree@3ieimpact.org with any suggestions, comments, questions, or ideas!


Authors

Chandan Jain, Senior Evaluation Specialist
Charlotte Lane, Evaluation and Learning Consultant, Food Security Evidence Brokerage
Andrew Tangang, Research Fellow and Senior Policy Analyst, eBASE Africa
Jennifer Sturdy, Former Senior Evaluation Specialist, 3ie
Bidisha Barooah, Lead Economist, IFAD

About

Evidence Matters is 3ie’s blog. It primarily features contributions from staff and board members. Guest blogs are by invitation.

3ie publishes blogs in the form received from the authors. Any errors or omissions are the sole responsibility of the authors. Views expressed are their own and do not represent the opinions of 3ie, its board of commissioners or supporters.
