In 2010, the Patient Protection and Affordable Care Act (ACA) established the Center for Medicare and Medicaid Innovation (CMMI) to test innovative payment and service delivery models aimed at improving the coordination, quality, and efficiency of care. The legislation provided $10 billion in funding from 2011 to 2019 and enhanced authority to waive budget neutrality for testing new initiatives, allowing quicker and more effective identification and spread of desirable innovations (Gold et al. 2011). CMMI created its Rapid Cycle Evaluation Group to evaluate the effectiveness of the new delivery and payment models. This group has developed a rapid cycle evaluation approach that uses summative and formative evaluation methods to rigorously assess the models’ effects on quality of care and patient-level outcomes, while also delivering rapid cycle feedback to participating providers to help them continuously improve the models. The goal is to “evaluate each model regularly and frequently after implementation, allowing for the rapid identification of opportunities for course correction and improvement and timely action on that information” (Shrank 2013).
To maintain the rigor of its evaluations, the Rapid Cycle Evaluation Group is employing quasi-experimental designs that use repeated measures—time series analyses—to understand the relationship between implementation of new models and both immediate changes in outcomes and the rate of change of those outcomes. The group is also using other statistical methods, such as propensity score approaches and instrumental variables, along with comparison groups where appropriate, to help clarify the models’ causal mechanisms. This approach assesses both the results and the context of those results to gain a better understanding of how favorable outcomes are obtained. Evaluators collect qualitative information about (1) providers’ practices and organizational characteristics, (2) the culture of the health care systems in which they operate, (3) how providers implement the intervention, and (4) the factors that hinder or support the change. This will allow evaluators to assess which features of the interventions are associated with successful outcomes (Shrank 2013).
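The repeated-measures logic described above is often implemented as a segmented (interrupted time series) regression, which separates the immediate level change at implementation from the change in the outcome's trend afterward. The sketch below is a hypothetical illustration of that technique, not CMMI's actual evaluation code; the cost figures, time window, and effect sizes are simulated solely to show how the two effects are estimated.

```python
import numpy as np

rng = np.random.default_rng(0)

months = np.arange(48)                  # 4 years of monthly outcome data
launch = 24                             # model implemented at month 24
post = (months >= launch).astype(float)

# Simulated per-beneficiary cost: a rising baseline trend, then an
# immediate drop of 5 units and a flatter slope after implementation.
true_level_change = -5.0
true_slope_change = -0.3
cost = (100.0 + 0.5 * months
        + true_level_change * post
        + true_slope_change * (months - launch) * post
        + rng.normal(0.0, 1.0, size=months.size))

# Design matrix: intercept, baseline trend, level shift at launch,
# and change in trend after launch.
X = np.column_stack([
    np.ones_like(months, dtype=float),
    months.astype(float),
    post,
    (months - launch) * post,
])
coef, *_ = np.linalg.lstsq(X, cost, rcond=None)
intercept, trend, level_change, slope_change = coef

print(f"immediate level change: {level_change:+.2f}")
print(f"change in monthly trend: {slope_change:+.2f}")
```

Separating the two coefficients matters for rapid cycle feedback: a model might show no immediate savings (small level change) yet still bend the cost curve over time (negative trend change), or vice versa. In practice, evaluators would also add a comparison group and adjust for serial correlation, which this minimal sketch omits.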
Evaluators will submit these data to a CMMI Learning and Diffusion Team that has been organized to provide quarterly feedback to participating providers on dozens of performance metrics, including process, outcome, and cost measures. The team will also organize learning collaboratives among participating providers to “spread effective approaches and disseminate best practices, . . . ensuring that best practices are harvested and disseminated rapidly.” CMMI evaluation and dissemination activities are separated to “preserve the objectivity of the evaluation team” (Shrank 2013).