Defining a program and its core components matters because practitioners do not use "experimental rigor" in their interactions with those they serve; they use programs. Thus, the lack of adequately defined programs with well-operationalized core components is an impediment to implementation with good outcomes (Hall & Hord, 2006). Since the research literature, with its predominant focus on rigor and outcomes, is not yet a good source for defining programs and their attendant core components, what processes can help? And what defines a well-operationalized program?
To be useful in a real-world service setting, any new program, intervention, or innovation, whether evidence-based or evidence-informed, should meet the criteria below. When the researcher and/or program developer has not specified these elements, then funders, policy makers, and implementing agencies, with the guidance of researchers and program developers, will need to work together to do so. This means allowing the time and allocating the resources for this important work to occur before and during initial implementation of the innovation as it moves from research trials into typical service settings.
With the use of evidence-based and evidence-informed innovations in mind, we propose that the following elements comprise a well-operationalized program, including its core components:
· Clear description of the context for the program.
o This means that the philosophical principles and values that undergird the program are clearly articulated. Such principles and values (e.g., families are the experts about their children, children with disabilities have a right to participate in community and school life, culture matters, all families have strengths) provide guidance for intervention decisions, for program development decisions, and for evaluation plans. If they are a "lived" set of principles and values, they promote consistency and integrity, and they serve as a decision-making guide when the 'next right steps' with a child or family are complex or unclear, even when the core components are well-operationalized.
o The context of the program also includes a clear definition of the population for whom the program is intended. Without clear inclusion and exclusion criteria and the willingness to apply these criteria, the core components will be applied inappropriately or will not even be applicable.
· Clear description of the core components. These are the essential functions and principles that define the program and are judged as being necessary to produce outcomes in a typical service setting (e.g., use of modeling, practice, and feedback to acquire parenting skills, acquisition of social skills, and participation in positive recreation and community activities with non-deviant peers).
· Description of the active ingredients that further operationally define the core components.
o One format and process for specifying the active ingredients associated with each core component involves the development of practice profiles. Practice profiles are referred to as innovation configurations in the field of education (Hall & Hord, 2011). In the context of a practice profile, the active ingredients are specified well enough to allow them to be teachable, learnable, and doable in typical service settings. Well-written practice profiles help promote consistent expectations across staff.
· A practical assessment of the performance of the practitioners who are delivering the program and its associated core components.
o The performance assessment identifies behaviors and practices that reflect the program philosophy, values, and principles embodied in the core components, as well as the active ingredients and activities associated with each core component and specified in the practice profiles. Such assessments are practical and can be done routinely in the context of typical service systems as a measure of how robustly the core components are being used.
o A useful performance assessment may comprise some or all of the fidelity assessment process and, across practitioners, should be highly correlated with intended outcomes. Over time, researchers, evaluators, and program developers can correlate these performance assessment measures with outcomes to determine how reliable and valid they are. When higher fidelity is associated with better outcomes, there is growing evidence that the program is more effective when used as intended and that the assessment may indeed be tapping into the core components.
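As a minimal illustration of the correlation step described above: the source does not prescribe any particular statistic or data format, so the sketch below assumes hypothetical paired, per-practitioner fidelity scores and outcome scores (all names and scales are invented) and uses a simple Pearson correlation as one way a program evaluator might check whether higher fidelity tracks better outcomes.

```python
# Illustrative sketch only; the fidelity/outcome data and scales are
# hypothetical, and Pearson correlation is just one defensible choice.
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    if len(xs) != len(ys) or len(xs) < 2:
        raise ValueError("need two equal-length lists with at least 2 pairs")
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance and variances computed around the sample means.
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Hypothetical data: one fidelity score and one child/family outcome score
# per practitioner.
fidelity = [62, 71, 80, 85, 90, 94]
outcomes = [48, 55, 63, 60, 72, 78]

r = pearson_r(fidelity, outcomes)
print(f"fidelity-outcome correlation: r = {r:.2f}")
```

In practice, an evaluation team would accumulate such pairs over repeated assessment cycles; a consistently strong positive correlation is the "growing evidence" the passage describes that the assessment is tapping into core components.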