Core Intervention Components: Identifying and Operationalizing What Makes Programs Work

Why Is It Important to Identify Core Components?


The lack of description and specification of the core components of programs presents challenges when it comes to assessing whether a given program has been, or can be, successfully implemented, effectively evaluated, improved over time, and subsequently scaled up if results are promising. As a result, when agencies and funders promote or require the use of evidence-based programs that are not well operationalized, and agencies and practitioners are recruited to engage in new ways of working, there can be a great deal of discussion and confusion about just what the "it" is that must be implemented to produce the hoped-for outcomes.

Benefits of increased attention to the definition and measurement of core components and their associated active ingredients include:

· Increased ability to focus often scarce implementation resources and supports (e.g., resources for staff recruitment and selection, training, coaching, fidelity monitoring) on the right variables (e.g., the core components) to make a difference.

· Increased likelihood of accurately interpreting outcomes and then engaging in effective program improvement strategies that address the "right" challenges.

· Increased ability to make adaptations that improve fit and community acceptance without moving into the "zone of drastic mutation" (Bauman, Stein, & Ireys, 1991).

· Increased ability to engage in replication and scale-up while avoiding the program "drift" that can lead to poor outcomes.

· Increased ability to build coherent theory and practice as common core components emerge that are associated with positive outcomes across diverse settings and/or programs.

These benefits are elaborated below.

Application of implementation supports to ensure and improve the use of core components. When core components are more clearly defined, implementation supports can be targeted to ensure that the core components and their active ingredients come to life as they are used in everyday service settings. As noted in the in-home services example above, the usability testing approach not only allows for repeated assessments and improvements in the intervention, but also creates opportunities for improving the implementation supports (the "execution" part of usability testing). That is, each round of improvement allows for adjustments in implementation supports such as training, coaching, and the performance assessment process itself (e.g., did we execute these activities as intended?), while also serving as fodder for further defining and refining the core components and active ingredients themselves.

As noted above, usability testing is a variant of the Plan, Do, Study, Act (PDSA) process. PDSA cycles and the associated implementation supports are typically rapid-cycle processes designed to ensure that proximal outcomes are being achieved. When applying a usability testing process to an incompletely operationalized evidence-based program or to an evidence-informed program, the "plan" can be to test a segment of the program, or one or more core components, as intended to be used in practice. To carry out the "do" part of the PDSA cycle, the "plan" needs to be operationalized and grounded in best evidence. That is, who will say or do what activities, with whom, and under what conditions to enact the plan? And to what degree are these core components and/or active ingredients supported by evaluation and research findings? This attention to the "plan" compels attention to the core components and active ingredients.

The "do" part of the PDSA cycle provides an opportunity to specify the implementation supports required to enact the plan. How will the confidence and competence of practitioners to "do" the plan be ensured? This requires attention to implementation supports such as the recruitment and selection criteria for staff, as well as training and coaching processes (e.g., who is most likely to be able to engage in these activities? what skill-based training is needed to "do" the "plan"? how will coaching be provided to improve practitioners' skills and judgment as they execute the "plan"?). And the "study" portion of the PDSA cycle requires creating an assessment of performance (e.g., did practitioners "do" the plan? were our implementation supports sufficient to change practitioner behavior?), as well as the collection of proximal or near-term outcomes (e.g., were parents at home? were parents willing to engage in practice sessions with the therapist?).

As three or four newly trained staff begin providing the new services, the budding performance assessment measures can be used to interpret the immediate outcomes in the "study" part of the PDSA cycle (e.g., did we do what we intended? if so, did doing what we intended result in the desired proximal outcomes?). Without proximal outcomes, distal outcomes are much less likely. Once the results from the "study" segment of the cycle are known (e.g., from performance assessment data and outcomes for participants), work can commence to "act" on this information by making adjustments to segments of the program, the core components, and/or particular active ingredients. Further action can be taken as implementation supports are adjusted for the next group of staff as the usability testing cycle begins again (e.g., Fixsen, Blase, Timbers, & Wolf, 2001; Wolf, Kirigin, Fixsen, Blase, & Braukmann, 1995). The "act" portion of the cycle defines the improvements to be made and initiates a new PDSA cycle in which training, coaching, and feedback systems are adjusted to improve practitioner competence, confidence, and adherence to the revised processes.
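To make the cycle more concrete, the sketch below models a single usability-testing pass through the "study" and "act" steps in Python. It is a minimal illustration: all names, data structures, and thresholds are hypothetical and are not drawn from the report or any particular program, and real cycles depend on locally defined performance assessments and proximal outcome measures.

```python
from dataclasses import dataclass

# Hypothetical data model for one usability-testing PDSA cycle (illustration only).

@dataclass
class PlanStep:
    """One element of the 'plan': who does what, with whom, under what conditions."""
    core_component: str
    active_ingredient: str
    practitioner_activity: str

@dataclass
class CycleResult:
    executed_as_intended: bool   # performance assessment: did practitioners "do" the plan?
    proximal_outcome_met: bool   # e.g., were parents home and willing to practice?

def study(results: list[CycleResult]) -> dict:
    """The 'study' step: summarize execution fidelity and proximal outcomes."""
    n = len(results)
    return {
        "execution_rate": sum(r.executed_as_intended for r in results) / n,
        "proximal_outcome_rate": sum(r.proximal_outcome_met for r in results) / n,
    }

def act(summary: dict, execution_cutoff: float = 0.8, outcome_cutoff: float = 0.8) -> list[str]:
    """The 'act' step: decide what to adjust before the next usability-testing cycle."""
    adjustments = []
    if summary["execution_rate"] < execution_cutoff:
        # Low execution points toward implementation supports (selection, training, coaching).
        adjustments.append("strengthen training and coaching; revisit selection criteria")
    if summary["proximal_outcome_rate"] < outcome_cutoff:
        # The plan was carried out but proximal outcomes lag: revisit core components.
        adjustments.append("re-specify the core components or active ingredients for this segment")
    return adjustments or ["proceed to the next program segment with the next group of staff"]

# Example 'study'/'act' pass for a small cohort of newly trained staff.
results = [CycleResult(True, True), CycleResult(True, False), CycleResult(False, False)]
print(act(study(results)))
```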

These brief descriptions of usability testing and implementation supports have focused on identifying and developing the core components of an initial effective working example of an evidence-informed innovation, or on developing an improved definition of an evidence-based program that has not been well operationalized. But even a well-operationalized intervention will continue to evolve as new situations are encountered and more lessons are learned about how to better operationalize core components, improve implementation supports, improve fidelity, and improve outcomes. The goal is not to "do the same thing" no matter what, just for the sake of "doing the program." The goal is to reliably produce significant benefits, with better outcomes over time, and to clearly identify, understand, and skillfully employ the core components that are associated with better outcomes.

Interpreting Outcomes and Improving Programs. Identifying the core components that help to create positive outcomes, and knowing whether or not they were implemented with fidelity, greatly improves the ability to interpret outcomes (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005). In addition, it reduces the likelihood of "throwing the baby (the new program) out with the bath water (poor implementation)." Without understanding and monitoring the use of the core components, it is difficult to tell the difference between an implementation problem and an effectiveness problem. This is particularly problematic when positive outcomes are not achieved or when outcomes are not as beneficial as expected. Because strategies for improving effectiveness differ from strategies for improving fidelity and the implementation of the core components, it is important to be able to assess whether the program does not work or whether the implementation of the program was flawed. The following table illustrates the types of improvement strategies or subsequent actions that may be useful depending on where one "lands" with respect to fidelity and outcomes.


Table 1. Analyzing data related to both fidelity assessments and outcomes helps to identify the actions needed to improve outcomes.


Satisfactory Outcomes, High Fidelity: Continue to monitor fidelity and outcomes; consider scale-up.

Satisfactory Outcomes, Low Fidelity: Re-examine the intervention; modify the fidelity assessment.

Unsatisfactory Outcomes, High Fidelity: Select a different intervention; modify the current intervention.

Unsatisfactory Outcomes, Low Fidelity: Improve implementation supports to boost fidelity.

Obviously, we want our efforts to "land" in the quadrants that involve achieving satisfactory outcomes. When outcomes are satisfactory and fidelity is high, continued monitoring of fidelity and outcomes helps to ensure that the core components continue to be used with good effect. It also may indicate that the program should be reviewed for scalability or increased reach, since the core components appear to be well operationalized and the implementation supports seem to be effective in producing high fidelity.

When satisfactory proximal and/or distal outcomes are being achieved but fidelity is low or lower than expected, it may be necessary to re-examine the intervention to determine whether there are additional core components or active ingredients that have not been specified. This requires qualitative and quantitative data collection and analysis of the strategies used by practitioners who are positive outliers (e.g., achieving good results despite low fidelity). Or it may be that the context for the program has changed; for example, a change in the population (e.g., different presenting problems, a different age range) may require very different program strategies to meet the population's needs. Or there may be core components and active ingredients that are well operationalized and are being trained and coached, but that are not currently included in the fidelity assessment process. In such cases, revising and re-testing the fidelity assessment may be called for. In any event, discovering the source of this discrepancy will be important if the program is to be sustained over time or scaled up.

The combination of high fidelity but unsatisfactory outcomes may indicate that the selected intervention or prevention program is not appropriate for the population or does not address critical needs of the population. Since the purpose of using programs is to achieve positive results, the achievement of high fidelity with poor outcomes produces no value to the population in need. Such findings may help build theory and set future research agendas and hopefully would result in communities choosing to invest resources differently. Once unmet needs are identified through data gathering and analysis, it may be possible to modify the intervention and add in core components that have theory and evidence to support their impact on the unmet needs. Or it may be that the selection process for the intervention was flawed or that the population being served is different from the population identified during the original needs assessment. In any event, the search for programs with core components that address the needs of the population may need to be re-initiated.

If outcomes are unsatisfactory and fidelity is low, then the first approach is to improve or modify implementation supports (e.g., more targeted staff recruitment and selection, increased skill-based training, and changes in the frequency and type of coaching) in order to improve fidelity (Webster-Stratton, Reinke, Herman, & Newcomer, in press). It also may be necessary to review the organizational factors (e.g., time allocated for coaching, access to equipment needed for the intervention) and systemic constraints (e.g., licensure requirements that limit recruitment, billing constraints, inappropriate referrals) that may be making it difficult to achieve high fidelity. Making changes to address organizational and/or systems issues (e.g., funding, licensure, billing) often requires considerable time and effort. Therefore, the implementation supports of selection, training, and coaching may need to "compensate" for organizational and systems barriers (Fixsen et al., 2005). For example, it may take time to address the funding constraints that make it difficult to fund coaching of staff. While attempts to secure that funding are being pursued, it may be necessary to use more rigorous selection criteria to recruit more experienced staff, or to provide increased training, to "compensate" for the impact of funding constraints on the provision of coaching.

In summary, this table brings home the point that both knowledge and measurement of the presence and strength of the core components (e.g., through fidelity and other measures) are required to interpret and respond to outcome data.
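As a companion to Table 1, the sketch below expresses the same decision logic as a small Python lookup. The cutoff values, scores, and function names are hypothetical placeholders; in practice, "high fidelity" and "satisfactory outcomes" are defined by validated fidelity measures and program-specific outcome criteria, not a single numeric threshold.

```python
# Illustrative sketch of the fidelity-by-outcomes decision logic in Table 1.
# Thresholds and labels are hypothetical and for demonstration only.

ACTIONS = {
    ("high", "satisfactory"): [
        "continue to monitor fidelity and outcomes",
        "consider scale-up",
    ],
    ("low", "satisfactory"): [
        "re-examine the intervention for unspecified core components",
        "modify the fidelity assessment",
    ],
    ("high", "unsatisfactory"): [
        "select a different intervention",
        "modify the current intervention",
    ],
    ("low", "unsatisfactory"): [
        "improve implementation supports to boost fidelity",
    ],
}

def recommend(fidelity_score: float, outcome_score: float,
              fidelity_cutoff: float = 0.8, outcome_cutoff: float = 0.7) -> list[str]:
    """Map fidelity and outcome data onto the improvement strategies in Table 1."""
    fidelity = "high" if fidelity_score >= fidelity_cutoff else "low"
    outcomes = "satisfactory" if outcome_score >= outcome_cutoff else "unsatisfactory"
    return ACTIONS[(fidelity, outcomes)]

# Example: high fidelity but disappointing outcomes suggests an effectiveness problem.
print(recommend(fidelity_score=0.9, outcome_score=0.4))
```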

Making adaptations to improve "fit" and community acceptance. Communities and agencies may consider adaptations for a variety of reasons as they implement programs and innovations. There may be a perceived or documented need to attend to cultural or linguistic appropriateness or community values (Backer, 2001; Castro, Barrera, & Martinez, 2004). Or there may be resource or contextual constraints that result in decisions to adapt the program or practices. Or the workforce available to implement the program may prompt programmatic adaptations that are perceived to be better aligned with practitioners' backgrounds, experience, and competencies.

Adapting evidence-based programs and evidence-informed innovations may make it more likely that communities will decide to adopt them (Rogers, 1995). However, improving the likelihood of adoption through adaptation does not necessarily mean that those adaptations will help to produce positive outcomes in service settings. While some initial adaptations are logical (e.g., translation into the language of the population, use of culturally appropriate metaphors), some program developers recommend first delivering the program as intended and assessing both fidelity and outcomes, and then, based on data, working with program developers and researchers to make functional adaptations. Functional adaptations are those that (a) reduce "burden" and "cost" without decreasing benefits, or (b) improve cultural fit, community acceptability, or practitioner acceptance while maintaining or improving outcomes. Adaptations are much more likely to be functional when the core components and their associated active ingredients are known. In addition, those engaged in adapting programs and practices must understand the underlying theory base, principles, and functions of the core components so that adaptations do not undermine the effectiveness of the program. Finally, process and outcome data must be collected to validate that the adaptations meet the criteria for being "functional." It then stands to reason that adaptations are most likely to be functional when the core components are well operationalized; when implementation supports reliably create competent and confident use of the intervention; when adaptations are made in partnership with the original program developer and researcher, to avoid moving into the "zone of drastic mutation" (Bauman, Stein, & Ireys, 1991) and destroying the effectiveness of the intervention; and when data verify that the changes have not undermined the effectiveness of the program or practice (Lau, 2006).
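For illustration only, the brief sketch below restates the two "functional adaptation" criteria as a simple check with hypothetical inputs. In practice, these judgments rest on process and outcome data and on partnership with the program developer, not on a boolean test.

```python
# Hypothetical restatement of the "functional adaptation" criteria (illustration only):
# an adaptation is functional if outcomes are maintained or improved AND it either
# reduces burden/cost or improves fit/acceptability.

def is_functional_adaptation(reduces_burden_or_cost: bool,
                             improves_fit_or_acceptability: bool,
                             outcomes_maintained_or_improved: bool) -> bool:
    if not outcomes_maintained_or_improved:
        return False  # any loss of benefit disqualifies the adaptation
    return reduces_burden_or_cost or improves_fit_or_acceptability

# Example: improves cultural fit and outcomes are maintained -> functional.
print(is_functional_adaptation(False, True, True))
```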

As program developers and researchers work with diverse communities, cultures, and populations to make adaptations, they may look for ways to change "form" (e.g., time, place, language, metaphors used) to improve appropriateness and acceptability while preserving the "function" (e.g., the processes that relate to effectiveness) of the core components. Collecting data to analyze the impact of cultural adaptations is key to determining when such adaptations are functional, since reducing the dosage of the core components or altering them can result in adaptations that reduce positive outcomes, as noted by Castro et al. (2004). For example, Kumpfer, Alvarado, Smith, and Bellamy (2002) describe a cultural adaptation of the Strengthening Families Program for Asian Pacific Islander and Hispanic families that added material on cultural and family values but displaced the content related to acquiring behavioral skills, a core component. This resulted in less improvement in parental depression and parenting skills, as well as less improvement in child behavior problems, than the original version, which focused only on behavioral skills.

Cultural adaptations can also be made that enhance acceptability without undermining the core components and active ingredients of the evidence-based program. A cultural adaptation of Parent-Child Interaction Therapy (PCIT) was developed by McCabe and her colleagues (McCabe, Yeh, Garland, Lau, & Chavez, 2005). They modified the core component of engagement by including engagement protocols for immediate and extended family members, to reduce the likelihood of a lack of support undermining treatment. They also "tailored" the manner in which certain active ingredients were framed when the results of a parent self-report questionnaire indicated that program elements were at odds with parenting beliefs. For example, for parents who expressed a commitment to strict discipline, the active ingredient of "time out" was re-framed as a punitive practice by using terms such as "punishment chair" for the time out location. Or, if Mexican American parents of young children expressed concerns about the practice being too punishing for young children, the term "thinking chair" was adopted. This left the function of the time out process intact (e.g., brief removal from positive reinforcement) while tailoring or adapting the form to fit cultural and familial norms.

Lau (2006) makes the case for selective and directed cultural adaptations that prioritize the use of data to identify communities or populations who would benefit from adaptations and that are based on evidence of a poor fit. Lau also argues for focusing on high-priority adaptations and avoiding fidelity drift in the name of cultural competence. In short, the process of adaptation needs to be based on empirical data and demonstrate benefits to the community or population.

In summary, modifications to core components must be made thoughtfully and in partnership with program developers and researchers, so that the underlying theory base of the program is not inadvertently undermined. Data-based decision-making should guide modifications to core components. Linguistic adaptations aside, an implementation process that first delivers the core components as intended and then analyzes the results is better positioned to make functional adaptations, that is, adaptations that improve fit or acceptability and/or reduce burden or cost, while improving outcomes or at least maintaining positive outcomes and avoiding negative impact.

Improving the success of replication and scale-up efforts. As David Olds (2002) noted, "Even when communities choose to develop programs based on models with good scientific evidence, such programs run the risk of being watered down in the process of being scaled up" (p. 168). Of course, understanding whether or not a program has been "watered down" requires an understanding of the core components and their relationship to achieving desired outcomes. Michie et al. (2009) noted that clear definitions of the required core components increase the likelihood that programs and practices can be successfully introduced in communities and scaled up over time. However, it takes time and a number of closely controlled and monitored replication efforts by the developers to stabilize the intervention before deciding to attempt broader scale-up of the program. From the business arena, Winter and Szulanski (2001) note that "The formula or business model, far from being a quantum of information that is revealed in a flash, is typically a complex set of interdependent routines that is discovered, adjusted, and fine-tuned by 'doing'" (p. 371). Such fine-tuning can be done through usability testing, evaluation, and research. Scaling up too soon can mean a lost opportunity to adequately develop, specify, and reliably produce the core components that lead to effectiveness.

Successful replication and scale-up are significantly enhanced when the core components are well specified, when effective implementation supports are in place to promote the competency and confidence of practitioners, and when organizational and systems change occurs to support the new way of work. The effectiveness and efficiency of replication and scale-up also may be improved when there is greater clarity about the non-core components that can be adapted to fit local circumstances, including culture, language, workforce, and economic or political realities. And, as noted above, efficiency is enhanced when resources for implementation supports (e.g., training, coaching, data systems, fidelity measurement and reporting) are targeted to the core components.
