The U.S. Department of Health and Human Services (HHS) embarked on an effort to build internal evaluation capacity throughout the agency and its many components. HHS is among several federal agencies working to strengthen internal evaluation capacity, driven both by internal HHS initiatives and by the federal Office of Management and Budget’s (OMB) 2012 memorandum encouraging the use of evidence and evaluation in budget, management, and policy decisions to make government work more effectively. An internal HHS evaluation working group has outlined the practical value of evaluating HHS programs, as well as common challenges in matching an evaluation approach to different types of programs. Across HHS, multiple evaluation approaches are used, often in combination, to address complex questions about program implementation: whether a program, policy, or initiative is operating as planned and achieving its intended goals, and why or why not.
Across many HHS agencies, the first step in an evaluation is determining what type of evaluation can reasonably be conducted. Through its work, the HHS evaluation working group identified a need for new methods that permit more rapid assessment and evaluation of programs as they are being implemented, rather than waiting years after programs end for evaluation results. The federal government is also seeking more efficient and effective ways to evaluate programs within an evidence-based framework. At the same time, policymakers, health care providers, and public health practitioners are deploying multifaceted interventions targeting large-scale organizational and systems change at multiple levels in health care, behavioral health, public health, and human services. The motivation for this paper is to articulate rapid evaluation methods appropriate for such complex, multilayered initiatives.
To prepare this paper, Mathematica Policy Research gathered information from colleagues about current efforts to conduct rapid cycle evaluations of large-scale initiatives, collected and reviewed relevant literature on rapid evaluation methods and related approaches, and attended rapid evaluation methods sessions at the June 2013 AcademyHealth Meeting to learn more about the methods and findings of current rapid evaluation projects. To analyze this information, Mathematica compared the rapid evaluation methods used in projects of differing complexity, examined their key attributes, and identified evaluation projects exemplifying different approaches.
This paper addresses the challenges of conducting rapid evaluations in widely varying circumstances, from small-scale process improvement projects to complex system transformation initiatives. Rapid approaches designed for projects at lower levels of complexity do not account for the inter-organizational aspects of more complex initiatives, especially those designed to build capacity and integrate activities across organizations, sectors, and levels. A framework that recognizes key differences in the scope and complexity of interventions helps advance implementation science beyond a program-centric focus on process and organizational improvement toward a whole-systems approach (Perla 2013).
The paper is organized as follows. Section II reviews rapid evaluation approaches developed for different kinds of initiatives. Section III presents a comparative framework of rapid evaluation methods for projects at three levels of complexity: quality improvement methods for simple process improvement projects, rapid cycle evaluations for complicated organizational change programs, and systems-based rapid feedback methods for large-scale systemic change or population health initiatives. Sections IV, V, and VI provide examples of each type of rapid evaluation. Section VII concludes with a discussion of rapid evaluation principles that apply at any level of complexity.