Factor 8: Purpose. What are the goals of the evaluation? Are the evaluation questions about implementing and testing the efficacy of a particular best practice or program model in a specific context, or making a judgment of the program’s value? Or, are the questions addressing how best to move forward in a complex initiative?
Factor 9: Reporting and use of findings. When, how, and to whom are results reported? Is reporting linked to, or kept separate from, sessions with decision makers and stakeholders to understand and interpret the findings and take action in response?
Factor 10: Rapid evaluation methods. Which evaluation methods are the best match for the circumstances?
Because evaluation designs depend on the kinds of evaluation questions asked, as well as on the system conditions and dynamics, there is no single best design option. The right design is one that addresses the evaluation’s purpose(s) and captures the complexities of the intervention and its context or environment. Rapid evaluation methods can be used in developmental, formative, or summative evaluations to address questions about how a process improvement, program, or larger initiative is being implemented, how it can be improved, and whether it is cost-effective. Quality improvement, rapid cycle, and systems change evaluation approaches can draw on similar qualitative and quantitative research methods, including feedback surveys, focus groups, key informant interviews, and tracking of performance indicators.
One element that distinguishes the different rapid evaluation designs is their feedback mechanism: how findings are reported, to whom, and for what purposes. Although the results of quality improvement projects can be disseminated broadly, the primary audience for the findings is the internal program unit whose processes are being changed. External funders are a key audience for the findings of rapid cycle evaluations, although the findings are also reported back to the organizations implementing the grant or program model. In rapid cycle evaluations, the evaluation’s cross-site findings are reviewed and interpreted at the funder level, not at the grantee level, to maintain the evaluation’s objectivity. In contrast, in complex initiatives the lines between internal and external evaluation audiences are blurred; collaborative learning processes might be used to convene initiative leaders and stakeholders to learn about, understand, and interpret the results, and to make collective decisions about how to improve and adapt the initiative based on the evaluation findings.
In the next three sections (Sections IV, V, and VI), rapid evaluation examples for three types of change initiatives (process change, organizational change, and systems change) illustrate the match between evaluation design, contextual complexity, and content of the intervention. For each example, the ten evaluation factors highlighted in Table 1 are described in more detail in section-specific tables.