Core Intervention Components: Identifying and Operationalizing What Makes Programs Work

Usability Testing to Operationalize and Validate Core Components


Researchers and program developers should provide the information agencies need to support practitioners in implementing a program with fidelity. As noted earlier, the vast majority of programs, whether evidence-based or evidence-informed, do not meet this standard. For evidence-based and evidence-informed interventions that have not been well operationalized (Hall & Hord, 2011), usability testing is needed to verify or elaborate the program's core components and the active ingredients associated with each core component before proceeding with broader-scale implementation.

What is usability testing? Usability testing is an efficient and effective method for gaining the experience and information needed to better operationalize a program and its core components. Usability testing methods were developed by computer scientists as a way to debug and improve complex software programs and websites. Usability testing (e.g., Nielsen, 2005) employs a small number of participants in a first trial, assesses results immediately, makes corrections based on those results, and then plans and executes the next, hopefully improved, version of the core component and its associated active ingredients. This cycle is repeated (say, five iterations with four participants each, for a total N of 20) until the program produces credible proximal or short-term outcomes related to the tested core components and their associated active ingredients.

Usability testing is an application of the Plan, Do, Study, Act (PDSA) cycle (e.g., Shewhart, 1931; Deming, 1986). The benefits of the PDSA cycle in highly interactive environments have been verified through evaluations across many domains, including manufacturing, health care, and substance abuse treatment. This "trial and learning" approach allows developers of complex programs, and those charged with implementing them, to identify the core components and active ingredients of a program and to further evaluate, improve, or discard non-essential components. Usability testing is often done in partnership among program developers, researchers, and early implementers of the program.
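To make the cycle concrete, the following is a minimal sketch, in Python, of the iterative loop described above: small cohorts, an a priori success criterion, and revision after any cohort that misses it. Everything in it is a hypothetical illustration; the cohort size, five-iteration budget, success threshold, and the simulated outcome model are assumptions for demonstration, not features of any particular program.

```python
import random

def run_cohort(success_prob, n_participants=4):
    """Stand-in for fielding the protocol with one small cohort; returns one
    binary proximal outcome per participant (1 = desired outcome observed).
    In practice these data would come from supervisors or therapists."""
    return [int(random.random() < success_prob) for _ in range(n_participants)]

def meets_criteria(outcomes, threshold=0.75):
    """A priori decision rule: the proportion of participants showing the
    desired proximal outcome must reach the threshold."""
    return sum(outcomes) / len(outcomes) >= threshold

def usability_test(max_iterations=5):
    """Repeat the PDSA cycle (e.g., 5 cohorts of 4 participants, total N = 20),
    revising the protocol after each cohort that misses the criterion."""
    success_prob = 0.5                       # placeholder for the current protocol's effectiveness
    for i in range(1, max_iterations + 1):
        outcomes = run_cohort(success_prob)  # Do: deliver the protocol, record outcomes
        if meets_criteria(outcomes):         # Study: compare to a priori criteria
            print(f"Iteration {i}: criteria met; component ready for broader use")
            return
        success_prob += 0.1                  # Act: stand-in for revising active ingredients
        print(f"Iteration {i}: criteria missed; protocol revised")  # Plan next cohort
    print("Criteria not met within the planned iterations; continue refining")

usability_test()
```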


Figure 2. Plan, Do, Study, Act


An example of usability testing may provide more clarity about the utility of such an approach. A home-based intervention for parents whose children have just been removed due to child welfare concerns might include a small test of the degree to which the core component of "engagement" and its associated active ingredients during the initial visit (e.g., the therapist expresses empathy, asks parents to identify family and child strengths, allows the parents to tell their story) are associated with parents' willingness to engage with the therapist.

Measures of engagement might include the number of times the family is at home at the scheduled time for visits and the number of sessions in which the parents agree to participate in parent training activities guided by the therapist. Such information can be collected very efficiently from supervisors and/or therapists. Results for the first cohort of trained therapists might then be assessed after three visits have been scheduled and therapeutic interventions attempted during each visit. The data, both process and outcome, are then reviewed. If the a priori criteria are met (e.g., 75 percent of families allow the therapists into their homes for all three visits; 80 percent of families participate in the parent training activities), the same engagement processes are continued by new therapists with new families and by current therapists with subsequent families. If results are not favorable, improvements to the engagement strategies are made and operationally defined, the protocol is revised, and the process begins again: new and current therapists receive additional material, training, and coaching on the revised engagement strategies to use during initial interactions. The revised engagement process is then tried with a second cohort, again with proximal measures of engagement.

Such usability testing may occur throughout the implementation of a new program, since some program components do not come into play until later in the course of the intervention (e.g., procedures related to reintegrating children into their families).
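As a minimal sketch of how the a priori decision rule in this example might be applied, the code below checks one cohort's data against the two thresholds. The field names and per-family data structure are hypothetical; only the 75 percent and 80 percent criteria come from the example above.

```python
def cohort_decision(families):
    """families: list of dicts, each recording one family's engagement data, e.g.
    {"home_all_three_visits": bool, "participated_in_training": bool}.
    Returns 'continue' if both a priori criteria are met, else 'revise'."""
    n = len(families)
    home_rate = sum(f["home_all_three_visits"] for f in families) / n
    training_rate = sum(f["participated_in_training"] for f in families) / n
    if home_rate >= 0.75 and training_rate >= 0.80:
        return "continue"   # keep current engagement strategies with new cohorts
    return "revise"         # refine active ingredients, retrain therapists, retest

# Example with a small illustrative cohort:
cohort = [
    {"home_all_three_visits": True,  "participated_in_training": True},
    {"home_all_three_visits": True,  "participated_in_training": True},
    {"home_all_three_visits": False, "participated_in_training": True},
    {"home_all_three_visits": True,  "participated_in_training": False},
]
print(cohort_decision(cohort))  # home rate 75% meets, training rate 75% misses -> 'revise'
```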

Because this is a new way of working, there can be concerns about the cost and feasibility of usability testing. While effort and organization are required, this is not a research project but a "testing" activity that can be managed efficiently and effectively. The costs of maintaining program elements that are not feasible or are not yielding reasonable proximal results can be far greater than the costs of making time to target key elements for usability testing. And while not all core components are amenable to such testing (e.g., the use of a specified number of sessions), this "trial and learning" process does create the opportunity to efficiently refine and improve important elements of the program and/or its core components and active ingredients with each iteration. Each small group tests a new and, hopefully, improved version. Each iteration yields incremental improvements and greater specificity until the outcomes indicate that the program, or the tested set of core components, is ready to be used more systematically or on a larger scale and to undergo more formal fidelity assessment and validation.
