RT Journal Article
SR Electronic
T1 Getting Rigor Right: A Framework for Methodological Choice in Adaptive Monitoring and Evaluation
JF Global Health: Science and Practice
JO GLOB HEALTH SCI PRACT
FD Johns Hopkins University - Global Health. Bloomberg School of Public Health, Center for Communication Programs
SP e2200243
DO 10.9745/GHSP-D-22-00243
VO 11
IS Supplement 2
A1 Synowiec, Christina
A1 Fletcher, Erin
A1 Heinkel, Luke
A1 Salisbury, Taylor
YR 2023
UL http://www.ghspjournal.org/content/11/Supplement_2/e2200243.abstract
AB Key Findings:
- The level of certainty about an entire program, or about smaller elements of a program, should correlate with the level of rigor used for adaptive learning.
- Cocreation is critical to ensuring right-fit rigor and that the results and data generated are useful to decision-makers.
- Certainty is dynamic over the course of an engagement, and the assessment of it should be revisited on an ongoing basis following learning activities.

Key Implications:
- Our framework can guide stakeholders through the process of assessing their programs to design relevant, timely, and iterative adaptive learning or responsive feedback activities.
- Future case studies should inform continuous adaptation and iteration of the framework to reflect the experiences of research and development practitioners.

The field of global development has embraced the idea that programs require agile, adaptive approaches to monitoring, evaluation, and learning. But considerable debate still exists around which methods are most appropriate for adaptive learning. Researchers have a range of proven and novel tools to promote a culture of adaptation and learning. These tools include lean testing, rapid prototyping, formative research, and structured experimentation, all of which can be used to generate responsive feedback (RF) to improve social change programs. With such an extensive toolkit, how should one decide which methods to employ? In our experience, the level of rigor used should be responsive to the team's level of certainty about the program design being investigated: how certain, or confident, are we that a program design will produce its intended results? With less certainty, less rigor is needed; with more certainty, more rigor is needed. In this article, we present a framework for getting rigor right and illustrate its use in 3 case studies. For each example, we describe the feedback methods used and why, how the approach was implemented (including how we conducted cocreation and ensured buy-in), and the results of each engagement. We conclude with lessons learned from these examples and guidance on using the right kind of RF mechanism to improve social change programs.