This comparative framework is a heuristic tool rather than a prescriptive how-to manual for assigning rapid evaluation methods to different projects. There is no single best rapid evaluation method that works in all circumstances. The right rapid evaluation design addresses the goals of the evaluation and captures the complexities of the intervention and its environment. Moreover, these methods are not mutually exclusive. They may be more effective when nested, just as the basic building blocks of genetic code are used in different combinations to develop biological functions that work together to form organisms that evolve over time (Holland 1995). Or, as illustrated in this paper, simple checklists are used to reduce preventable surgical errors within larger hospital safety campaigns that are funded through national payment reforms that reward system-wide shifts in health care costs and quality. Evaluating an intervention from process, organizational, and system perspectives allows us to implement change more effectively from multiple vantage points.
Several universal rapid evaluation principles are important, regardless of the level of complexity. First, rapid evaluation methods should maintain a balance between short-term results and long-term outcomes, so that there is “an alignment of task control, management control and strategic control” (Kaplan 2010). In other words, a short-sighted emphasis on immediate results (the optimization of subsystems) should not jeopardize the achievement of long-term goals (the optimization of the whole system). Second, rapid evaluation should be not just a measurement or diagnostic tool but also part of an interactive and adaptive management process, in which internal operational results and external environmental feedback are used together in an iterative process to test and revise an initiative’s overall strategy. Third, the information collected, analyzed, and interpreted should be used “as a catalyst for continual change,” in which data and action plans are reconsidered and original assumptions are questioned through a reflective, double-loop learning process that supports rethinking of project goals (doing the right thing) as well as project strategies (doing things right) (Argyris 1982).
As federal initiatives become more complex and the pressure to learn from them quickly increases, there will be many more opportunities to use these methods, alone or in combination, to improve the effectiveness of a wide range of initiatives. For example, this framework can be used within HHS for complex public health and human service programs as well as for broader systemic reforms. The most valuable rapid evaluation projects may combine several rapid evaluation approaches.