Core Intervention Components: Identifying and Operationalizing What Makes Programs Work

Challenges in Identifying and Validating Core Components


The core components may be developed over time by experimentally testing a theory of change (i.e., the mechanisms by which change is expected to occur) and by developing and validating fidelity measures (i.e., assessments of whether the intervention was delivered as intended) that reflect the core components. Core components can be identified through causal research designs (e.g., randomized controlled trials, quasi-experimental designs, single-subject designs) that test the degree to which the core components produce positive outcomes, compared with the results that occur in their absence. Research demonstrating a positive correlation between higher fidelity and better outcomes also increases our confidence in and understanding of the core components. However, causality cannot be inferred from such correlational research.

Core components are often equated with measures of fidelity, but such measures do not necessarily tell the whole story about what is required for effective use of an intervention in typical service settings. Moreover, identifying and validating core components through the creation of valid, reliable, and practical fidelity measures is not a simple task; it requires research over time and across replications. Efforts to create, test, and refine fidelity measures have been conducted for programs for children and families (Schoenwald, Chapman, Sheidow, & Carter, 2009; Henggeler, Pickrel, & Brondino, 1999; Bruns, Burchard, Suter, Force, & Leverentz-Brady, 2004; Forgatch, Patterson, & DeGarmo, 2005) and for programs serving adults (Bond, Salyers, Rollins, Rapp, & Zipple, 2004; Propst, 1992; Lucca, 2000; Mowbray, Holter, Teague, & Bybee, 2003; McGrew & Griss, 2005). These studies chronicle the challenges of creating fidelity measures that not only reflect the core components but also are practical to use in typical service settings and are good predictors of socially important outcomes. Concerted effort over time by teams of researchers appears to be required to produce valid and serviceable assessments of fidelity.

While teams of researchers have successfully taken on the task of better articulating and validating the core components of some programs (Henggeler et al., 2002; Chamberlain, 2003; Forgatch et al., 2005; Webster-Stratton & Herman, 2009), on the whole, few programs in the research literature are defined well enough to clearly detail the core components, with recommendations on the dosage, strength, and adherence required to produce positive outcomes. The source of this problem has been documented by Dane and Schneider (1998), who summarized reviews of over 1,200 outcome studies and found that investigators assessed the presence or strength of the independent variables (the core intervention components) in only about 20 percent of the studies, and that only about 5 percent of the studies used those assessments in their analyses of the outcome data. A review by Durlak and DuPre (2008) drew similar conclusions. The challenge is further exacerbated by the lack of commonly accepted definitions or criteria for verifying the presence or validity of the independent variables (the core components that define the program) in gold-standard randomized controlled studies. This means that the published research literature is likely a poor source of information about the functional core components of interventions, evidence-based or otherwise.

One reason that very few program evaluations are able to examine which components of a program are most strongly related to positive outcomes is that, in demonstration projects, extra efforts are made to ensure that the program is implemented with fidelity, thus eliminating variation. An exception occurred in the early research on the Teen Outreach Program (TOP), where some variation in program implementation did occur because the program did not yet have "minimum standards" and site facilitators took liberties with the curriculum and volunteer service components of the program (Allen, Kuperminc, Philliber, & Herre, 1994; Allen et al., 1990). Variations in facilitator "style" also occurred, and data were collected on how students perceived their interactions with facilitators and others.

The Teen Outreach research found that the presence of volunteer community service was related to positive outcomes, including lower rates of course failure, teen pregnancy, and school suspension. By contrast, variations in the amount of classroom time and exact fidelity to the curriculum were not related to these outcomes. This research also found that students had more positive outcomes when they reported having a great deal of input in selecting their volunteer work and viewing that work as truly important (Allen, Philliber, Herrling, & Kuperminc, 1997). After this research was completed, TOP adopted minimum standards for replication, including 20 hours of community service and teen choice of volunteer work that the teen views as important. TOP also requires 25 curriculum sessions, but program facilitators can use any of the curriculum sessions they choose. In communities where teaching about sex is prohibited or restricted, this leaves facilitators free to omit those lessons, since their inclusion has not been shown to affect outcomes. In addition, training for TOP facilitators stresses that the curriculum is to be truly facilitated rather than taught, and that, at the end of the program, young people should report that they did most of the talking. Fidelity data for TOP currently include measures of each of these important core components, derived from examining how variations in program practices and protocols related to outcomes.

Such exceptions aside, there is, as noted by Dane and Schneider (1998) and Michie, Fixsen, Grimshaw, and Eccles (2009), little empirical evidence to support assertions that the components named by an evidence-based program developer are, in fact, the functional, or only functional, core components necessary for producing the outcomes. In their examination of intervention research studies, Jensen and his colleagues (Jensen, Weersing, Hoagwood, & Goldman, 2005) concluded, "when positive effects were found, few studies systematically explored whether the presumed active therapeutic ingredients actually accounted for the degree of change, nor did they often address plausible alternative explanations, such as nonspecific therapeutic factors of positive expectancies, therapeutic alliance, or attention" (p. 53). This may mean that a program developer's or researcher's mention, or failure to mention, of certain components should not be confused with those components' function, or lack of function, in producing hoped-for outcomes in intervention settings.

Thus, the current literature on evidence-based programs focuses heavily on the quality and quantity of the "evidence" of impacts. The vetting of research design, rigor, number of studies, and outcomes has produced rosters of evidence-based programs, such as SAMHSA's National Registry of Evidence-Based Programs and Practices and Blueprints for Violence Prevention, with various criteria and rankings (e.g., evidence-based, evidence-informed, and promising) based on reviews of the research literature. A resource from "What Works Wisconsin" (Huser, Cooney, Small, O'Connor, & Mather, 2009) provides brief descriptions of 14 such registries covering a range of areas, including substance abuse and violence prevention as well as the promotion of positive outcomes such as school success and emotional and social competence. Identifying programs and practices that "work" and assessing the quality and quantity of the evidence are important for building confidence about outcomes. We need to understand the rigor and the outcomes of the research because we need to invest in "what works." But we also need to define and understand the core components that make the "what" work.
