Factor 8: Purpose. What are the goals of the evaluation? Are the evaluation questions about implementing and testing the efficacy of a particular best practice or program model in a specific context, or making a judgment of the program’s value? Or, are the questions addressing how best to move forward in a complex initiative?
Factor 2: Intervention complexity. Is the intervention a simple, direct process change, a test of a program model, or a larger initiative addressing multisector, multilevel population or systems change?

Factor 3: Governance structure. Who is funding and overseeing the initiative—a single organization or a federal funder of a cohort of grantees?
Factor 1: Situational dynamics. How complex are the dynamics of the context or environment in which the intervention is operating?
When is it appropriate to use each type of rapid evaluation method? The right design is one that best fits the evaluation’s purpose(s) and captures the complexities of the intervention and its environment (Funnell and Rogers 2011). When system dynamics are not considered in an evaluation’s design, the evaluation will inevitably miss crucial aspects of the intervention and its environment.
All situations or systems share three basic attributes or conditions: boundaries, relationships, and perspectives. Together, these conditions generate patterns of system-wide behavior called situational or system dynamics.
It can be a challenge to identify which rapid evaluation methods are best suited for particular HHS studies. To allow for easier comparison of different rapid evaluation approaches, this framework identifies ten points of comparison: the intervention’s (1) situational dynamics; (2) complexity; (3) governance structure; (4) scale of outcomes; (5)
Developed by John Kania and Mark Kramer of FSG, Collective Impact is a social change process for “highly-structured cross sector coalitions.” In Collective Impact projects, there is “heightened vigilance of multiple organizations looking for resources and innovations through the same lens, learning is continuous, and adoption happens simultaneously.”
Created by Michael Q. Patton, Developmental Evaluation (DE) applies complexity concepts to evaluation to support innovation development. Information collected through the evaluation is used to provide quick and credible feedback for adaptive and responsive development of an innovation.
Attributed to work done in the 1940s by Kurt Lewin, action research is an iterative research process involving researchers and community stakeholders. The approach creates a cycle of inquiry through three elements: “an ongoing analysis of contextual conditions, discrete actions taken to improve those conditions, and an assessment of the efficacy of those actions.”
Innovators within and outside of government are also seeking to conduct systems change evaluations of large-scale, multisector, multilevel, and community-based initiatives, including those aimed at transforming juvenile justice systems, building healthy communities, and developing and implementing health reforms. Mathematica is currently evaluating several such initiatives.
In 2010, the Patient Protection and Affordable Care Act (ACA) established the Center for Medicare and Medicaid Innovation (CMMI) to test innovative payment and service delivery models that aimed to improve the coordination, quality, and efficiency of care. The legislation provided $10 billion in funding from 2011 to 2019 and enhanced authority to expand models shown to reduce spending or improve the quality of care.
Performance measures can be used for internal quality improvement processes within institutions and for external quality improvement processes across institutions. Continuous quality improvement (CQI) is an ongoing process to improve products and processes either through incremental improvement or sudden “breakthrough” improvement. In CQI, services and processes are assessed and refined on an ongoing basis.
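As an illustrative sketch (not drawn from the paper), a CQI team monitoring a performance measure over repeated improvement cycles might apply a simple Shewhart-style control check to distinguish routine month-to-month variation from a genuine shift; the measure, data values, and thresholds below are hypothetical:

```python
# Hypothetical CQI example: flag months whose performance measure falls
# outside control limits computed from an initial baseline period.
from statistics import mean, stdev

def flag_special_cause(rates, baseline_n=6, sigmas=3):
    """Return indices of observations outside baseline mean +/- sigmas * SD
    (a simple Shewhart-style special-cause check)."""
    baseline = rates[:baseline_n]
    center, spread = mean(baseline), stdev(baseline)
    upper, lower = center + sigmas * spread, center - sigmas * spread
    return [i for i, r in enumerate(rates) if r > upper or r < lower]

# Hypothetical monthly 30-day readmission rates (%), before and after a
# process change introduced at month 6
monthly_rates = [18.2, 17.9, 18.5, 18.1, 18.4, 18.0,   # baseline
                 17.8, 16.9, 15.2, 14.8, 14.5, 14.1]   # post-change
print(flag_special_cause(monthly_rates))  # → [7, 8, 9, 10, 11]
```

In this sketch, the sustained drop after the process change falls below the lower control limit, signaling a real improvement rather than noise; the same logic supports the “breakthrough” versus incremental distinction described above.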
Performance measurement is the process of collecting, analyzing, and reporting information regarding the performance of an individual, group, organization, system, or component. Started in the 1950s, this approach gained popularity with the Management by Objectives and Performance Management movements. An early leader was W. Edwards Deming, whose work on statistical quality control shaped modern quality improvement practice.
There are numerous rapid methods for evaluating program, system, and organizational change. Each approach was developed to address particular problems or improve certain kinds of practices. However, they have often been adopted for universal use without consideration of the circumstances for which they were originally created. This section provides an overview of each approach and the circumstances for which it was developed.
The U.S. Department of Health and Human Services (HHS) embarked on an effort to build internal evaluation capacity throughout the agency and its multiple components. HHS is among many federal agencies aiming to build stronger internal capacity, in large part due to internal HHS efforts and as a result of federal Office of Management and Budget (OMB) directives.
This paper addresses the challenges of conducting rapid evaluations in widely varying circumstances, from small-scale process improvement projects to complex, system transformation initiatives. Rapid approaches designed to evaluate projects at lower levels of complexity do not take into account the inter-organizational aspects of more complex initiatives.
Figures 1 and 2 describe the flow that we are comparing between Michigan and Illinois for the period 1990 through 1994. All first contacts refer to the number of children who came in contact with the child welfare system for the first time during this period: nearly half a million children in Illinois, and slightly over a quarter million in Michigan.