
The Importance of Contextual Fit when Implementing Evidence-Based Interventions


This brief is one in a series exploring issues related to the implementation of evidence-based interventions. It defines contextual fit, which is based on the premise that the match between an intervention and local context affects both the quality of the intervention implemented and whether the intervention actually produces the outcomes desired for the children and families receiving the intervention. An operational definition, formal measures, and systematic research that guides both policy and practice are needed before assessing the fit of evidence-based interventions for a particular context can become common practice. The brief describes eight elements that combine to establish the fit between an intervention and a setting. It also includes recommendations for developing formal measures of contextual fit. This brief is one of three developed as part of the Investing in What Works project, which builds the knowledge and supports that evidence-based programs can use to improve the quality and outcomes of interventions funded through federal investments.

"

About This Issue Brief

This issue brief was written by Rob Horner, Ph.D., of the University of Oregon; Caryn Blitz, Ph.D., of the Administration on Children, Youth & Families; and Scott W. Ross, Ph.D., of Utah State University.

In 2012, ASPE awarded the Investing in What Works (IWW) project to the American Institutes for Research, continuing ASPE’s efforts to build the knowledge and supports that evidence-based programs and initiatives can use to improve the quality and outcomes of interventions funded through federal investments.

Office of the Assistant Secretary for Planning and Evaluation
 
Office of Human Services Policy
 
U.S. Department of Health and Human Services
 
Washington, DC 20201

Executive Summary

Implementation science informs us that local context is important to the successful adoption of evidence-based interventions. “Contextual fit” is based on the premise that the match between an intervention and local context affects both the quality of intervention implementation and whether the intervention actually produces the desired outcomes for children and families.

Although the importance of contextual variables is often referenced, there is neither consensus on the specific elements that constitute contextual fit nor a strong research base. In an effort to address these gaps, we propose a set of core elements drawn from the existing literature that can be used to define contextual fit and guide practice, policy, and research.

We define contextual fit as the match between the strategies, procedures, or elements of an intervention and the values, needs, skills, and resources available in a setting. Contextual fit is defined by the perceptions of those who implement, receive, and support an intervention. Practitioners should understand the important role of the decision agent in the fit determination process. Although certain interventions might appear to “fit” on paper, practitioners must have a certain level of motivation, interest, and support for intervention fit to be present.1 Eight elements combine to establish the fit between an intervention and a setting:
 
  1. Need: The extent to which an intervention meets an identified need for a particular target population. The outcomes of an intervention must be valuable to those delivering, supporting, and receiving the intervention. In addition, the intervention should confer a relative advantage above and beyond existing services.
  2. Precision: The extent to which the core features of an intervention—what is to be delivered—are well defined. Interventions that are defined globally are difficult to match with a specific setting because the implementers cannot determine exactly what they should be delivering.
  3. An Evidence Base: The intervention has demonstrated effectiveness for the target population and the outcome(s) of interest. This typically means the intervention is supported by rigorous, published research with strong internal and external validity.
  4. Efficiency: The intervention needs to be not only effective but practical. An undervalued feature of evidence-based interventions is the level of efficiency (time, personnel, money, materials) needed to generate valued outcomes within the time frames and budgets necessary.
  5. Skills/competencies: Contextual fit requires clarity regarding how implementers will acquire the skills to use an intervention as intended. The training, coaching, orientation, and support needed for personnel to deliver an intervention should be clearly defined.
  6. Cultural Relevance: An intervention should match the values and preferences of those who will (a) implement the intervention, (b) benefit from the intervention, and (c) manage and support the intervention. Personal, societal, cultural, and professional values matter. The type of intervention, how it is implemented, and the intended outcomes should be acceptable to those in the local setting.
  7. Resources: Contextual fit requires the ability and willingness to allocate the resources needed for both initial adoption and sustained implementation.
  8. Administrative and Organizational Support: Contextual fit includes the values and preferences of those making administrative decisions.

Defining, measuring, and applying the elements of contextual fit to the large-scale adoption of evidence-based interventions will make both initial and sustained implementation more effective and efficient. The elements of contextual fit have relevance for (a) the design and selection of interventions, (b) the process of initial implementation, and (c) the ongoing adaptation of interventions needed for sustainability.

We offer recommendations for developing formal measures of contextual fit and for using these measures to prompt a rigorous program of scholarship on the impact of contextual fit variables and the likely implications for policy, technical assistance, and the organization of large-scale implementation efforts.

Key Take-Away Messages
 
  • Contextual fit is an undervalued factor affecting the quality with which evidence-based interventions are implemented. Core components of fit to consider include need, precision, evidence, efficiency, skills/competencies, cultural relevance, resources, and administrative and organizational support.
  • Research is needed to better understand the role and process of contextual fit, the elements of contextual fit most important for improving effective implementation, and metrics to assess contextual fit.
  • Policymakers should include contextual fit criteria in Funding Opportunity Announcements to improve the selection, adoption, implementation, and sustainability of supported interventions.
  • Technical assistance should focus on building strong contextual fit before investing in direct implementation efforts.

1Dymnicki et al. (2014) provide a broader discussion of the different components that constitute readiness for implementing evidence-based interventions, one of which is motivation.

 

The Role of Contextual Fit When Implementing Evidence-Based Interventions

Purpose

The purpose of this issue brief is to propose an expanded role for “contextual fit” in the implementation of evidence-based interventions across education and human services domains, including mental health, juvenile justice, child welfare, and residential supports. The burgeoning field of “implementation science” speaks to the important role of contextual fit in the implementation process (Blase & Fixsen, 2013; Fixsen et al., 2005). Our basic premise is that contextual fit is important for (a) selecting evidence-based interventions, (b) the initial implementation of evidence-based interventions, and (c) the ongoing adaptation and scaling of evidence-based interventions. Appreciation of the contribution of contextual fit is not new, but for this appreciation to influence common practice, we need an operational definition of contextual fit, formal measures of contextual fit, and systematic research that guides both policy and practice.

What Is Contextual Fit?

Contextual fit is the match between the strategies, procedures, or elements of an intervention and the values, needs, skills, and resources of those who implement and experience the intervention. An intervention is said to possess good contextual fit when implementers, recipients, and other stakeholders (e.g., parents, teachers, community members, administrators, and related service systems) identify the intervention as acceptable, doable, effective, and sustainable. The contextual fit of an intervention for a specific setting is local and personal. Contextual fit is defined by those who will be implementing, supporting, and receiving the intervention (Damschroder et al., 2009).

Defining contextual fit requires that we first define an “intervention” and distinguish between the core elements of an intervention and the procedures used to achieve those core elements. We use the term “intervention” to refer to (a) a procedure, or set of procedures, (b) designed for use in a specific context (or set of contexts) (c) by a specific set of users (d) to achieve defined outcomes (e) for one or more defined populations (cf. Cook, Tankersley, & Landrum, 2009; Dunst, Trivette, & Cutspec, 2002; Flay et al., 2005; Horner, Sugai, & Anderson, 2010). Interventions are what we do to achieve desired outcomes. They include the behaviors, tools, and protocols used for assessment, intervention, data collection, and evaluation. Historically, interventions have been viewed primarily as solutions to specific problems. This approach emphasizes the match between a desired outcome and the intervention, but it ignores the importance of issues such as the skills of users, the extent of need, values related to intervention options, and the capacity for data-based decision making.

More recently, there has been a renewed emphasis on interventions as context dependent—developed with significant assumptions about the specific setting, users, and target populations in and for which they can and should be implemented (Spencer, Detrich, & Slocum, 2012). What might work well in preschool settings may not work well in juvenile justice contexts, mental health clinics, or high schools. When the fit between the setting and the intervention is poor, the likelihood of effective implementation diminishes, and the likelihood that implementation will lead to valued outcomes evaporates (Fixsen et al., 2010; Fixsen et al. 2005).

Contextual fit has gained increased attention as program developers, researchers, and practitioners have recognized the need to define evidence-based interventions by their outcomes, their core features, and the strategies or intervention packaging used to achieve those core features. In the past, “interventions” and “core features” were treated as synonymous. An intervention package for bullying prevention, drop-out prevention, self-regulation, or early literacy included a set of core features (curriculum content, instructional routines, setting variables) and specific strategies or interventions (specific text, training manuals, video exercises, and family supports). Implementers were expected to purchase or adopt the intervention; in so doing, they would use the intervention procedures to achieve the core features and, through the core features, the valued outcomes. Experience with large-scale implementation of evidence-based interventions has forced the recognition that intervention strategies for achieving core features may vary across settings (Horner et al., 2013). For example, a core feature of Positive Behavioral Interventions and Supports is defining and teaching a small number of social expectations to all students in the school. The core feature is the building of a schoolwide social culture with a common set of expectations. Although this core feature is constant across settings, the specific expectations taught and the process for teaching these expectations can vary across elementary, middle, and high schools and across urban, suburban, and rural schools. The social and ethnic culture of a community may affect how these expectations are constructed and taught. The core feature is held constant, but the procedures to achieve it are adapted to the context. This distinction is relevant because contextual fit applies both to the core features that should be present in an implementation setting and to the intervention strategies used to achieve those core features, as the sketch below illustrates.
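The distinction between invariant core features and context-adapted procedures can be made concrete with a short illustration. The Python sketch below is hypothetical: the class and field names are ours, and the example strategies are loosely paraphrased from the schoolwide-expectations example above rather than drawn from any PBIS manual. It shows one way an implementation team might record a core feature once while documenting the different strategies used to achieve it in different contexts.

```python
from dataclasses import dataclass, field

@dataclass
class CoreFeature:
    """A core feature is held constant across settings; the strategies
    used to achieve it are adapted to each local context."""
    name: str
    definition: str
    # Maps a local context (e.g., "elementary school") to the
    # context-specific strategy used to achieve the same core feature.
    strategies_by_context: dict[str, str] = field(default_factory=dict)

# The core feature (teaching schoolwide social expectations) is constant,
# while the process for teaching those expectations varies by setting.
expectations = CoreFeature(
    name="Schoolwide social expectations",
    definition=("Define and teach a small number of social expectations "
                "to all students in the school."),
)
expectations.strategies_by_context["elementary school"] = (
    "Teach expectations through stories and classroom role-play."
)
expectations.strategies_by_context["high school"] = (
    "Have student leaders present expectations during advisory periods."
)

for context, strategy in expectations.strategies_by_context.items():
    print(f"{expectations.name} [{context}]: {strategy}")
```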

How Does Contextual Fit Align With Existing Implementation Frameworks?

One of the reasons the implementation of evidence-based interventions in human service systems is so challenging is that interventions are delivered by providers and organizations to individuals in communities, all within complex, multilayered social ecologies. A variety of models summarize the many implementation factors or “drivers” at various levels of the social ecology that facilitate or impede effective implementation (Aarons, Hurlburt, & Hurwitz, 2011; Damschroder & Hagedorn, 2011; Fixsen, Blase, Naoom, & Wallace, 2009; Glisson & Schoenwald, 2005).

Which Constructs Are the Most Important for Determining Contextual Fit?

To date, contextual fit has been discussed most often as a general concept with overarching implications. There is a need to operationalize the construct in a way that allows for agreement in the field and enables the development of formal measures. Table 1 summarizes themes drawn from the literature that help define the elements of contextual fit (Blase, Kiser, & Van Dyke, 2013; Center for Substance Abuse Prevention, 2009; Horner et al., 2003; Sandler, Albin, Horner, & Yovanoff, 2003); it presents the eight core components of fit and the application questions that can be asked for each element.

Table 1. Summary of Elements of Contextual Fit

1. Need
   1a. Is the outcome of the intervention highly valued?
   1b. Is the level of current success low enough that there is a need for something different according to:
       • Those receiving support (children, youth, families, clients)?
       • Those providing support?
       • Those responsible for effective support (administrators, community members, political leaders)?
2. Precision
   2a. Is the proposed intervention defined with enough clarity and detail to determine what is done, by whom, when, and why? Are the core features defined? Are strategies for achieving the core features defined?
3. An Evidence Base
   3a. Does empirical evidence exist that implementation of the core features results in valued outcomes? Does the evidence document the target population, setting conditions, and usability conditions in which valued outcomes were achieved?
4. Efficiency
   4a. Are the time and effort for initial adoption reasonable?
   4b. Are the time and effort for sustained adoption as efficient as, or more efficient than, current interventions (given the outcomes generated)?
5. Skills/Competencies
   5a. Are the skills needed to implement the intervention defined?
   5b. Are materials and procedures available to establish the needed skills?
   5c. Does the level of skill development fit professional standards and/or the organizational staffing structure?
6. Cultural Relevance
   6a. Are the outcomes of the intervention valued by those who receive them?
   6b. Are the strategies and procedures consistent with the personal values of those who will perform them?
   6c. Are the strategies and procedures consistent with the personal values of those who will receive them?
7. Resources
   7a. What time, funding, and materials are required for initial adoption?
   7b. What training, coaching, and performance feedback are needed for high-fidelity implementation?
   7c. What time, funding, and materials are required for sustained adoption?
   7d. What fidelity measures are needed to monitor implementation?
8. Administrative and Organizational Support
   8a. Is adoption of the intervention supported by key leaders?
   8b. Will adoption of the intervention be monitored by key leaders?
   8c. Will the fidelity and impact of the intervention be monitored by key leaders?
   8d. Is there a documented commitment to make the intervention a standard operating procedure?
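As a rough illustration of how the application questions in Table 1 might be used during intervention selection, the sketch below encodes an abbreviated version of the checklist and flags the elements whose questions were not all answered affirmatively. The element names follow Table 1, but the yes/no review format, the abbreviated question wording, and the function name are hypothetical choices of ours, not part of a published instrument.

```python
# Hypothetical checklist sketch based on Table 1. Each element maps to a
# reviewer's yes/no answers to its application questions (abbreviated here).
review = {
    "Need": {
        "1a. Outcome highly valued?": True,
        "1b. Current success low enough to warrant change?": True,
    },
    "Precision": {
        "2a. Intervention defined with clarity and detail?": False,
    },
    "An evidence base": {
        "3a. Empirical evidence that core features yield valued outcomes?": True,
    },
    "Efficiency": {
        "4a. Initial time and effort reasonable?": True,
        "4b. Sustained adoption at least as efficient as current practice?": False,
    },
    # The remaining elements (skills/competencies, cultural relevance,
    # resources, administrative and organizational support) would follow
    # the same pattern.
}

def unmet_elements(review: dict) -> list[str]:
    """Return the elements with at least one question answered 'no'."""
    return [element for element, answers in review.items()
            if not all(answers.values())]

print("Elements needing attention before adoption:", unmet_elements(review))
# Output: Elements needing attention before adoption: ['Precision', 'Efficiency']
```

A review of this kind does not replace the perceptions of implementers, recipients, and supporters that define contextual fit; it simply makes explicit which elements warrant further work before adoption.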

Contextual fit influences the implementation process at three points. The first is in the exploration and selection of an effective intervention. An intervention should match the skills, values, and resources of those in the implementation context—that is, those who are providing, supporting, and receiving the intervention. Contextual fit plays an important role in selecting the evidence-based intervention that best matches the skills, values, and resources of the local setting.

Second, contextual fit is important to consider when adopting an intervention during the installation and initial implementation stage. The way an intervention is introduced can determine whether it is accepted and adopted by both the community and service providers. The timing, amount, format, and integration of training into an existing service setting can affect the likelihood that the new intervention will be implemented well and yield positive results (assuming that readiness for the intervention has already been established).

The third point of impact where contextual fit affects implementation is in the adaptation of an intervention once it has been implemented. Effective implementation does not end with initial adoption; it is a continuous process of tailoring an intervention to improve efficiency and effectiveness. The sustained use of an intervention may depend on implementers’ ability to continually adapt the intervention as conditions in the setting evolve (McIntosh et al., 2013). Adaptations need to be developed with full consideration of the extent to which they “fit” with the skills, values, and resources of those who use and benefit from the intervention.

Determining Need: The Kansas Intensive Permanency Project

The Administration for Children and Families (ACF) Children’s Bureau (CB) and Office of Planning, Research and Evaluation (OPRE) developed the Permanency Innovations Initiative (PII), a multisite federal demonstration project designed to improve permanency outcomes among children in foster care who face the most serious barriers to permanency. Launched in 2010, this 5-year, $100-million initiative includes six grantees, each with a unique intervention designed to reduce long-term foster care stays and improve child and family outcomes. The project is distinguished by its rigorous evaluation, purposeful application of implementation science, and coordinated dissemination of findings. By assisting grantees to develop or adapt evidence-supported interventions (ESIs), PII aims to build an evidence base and disseminate findings throughout the child welfare field. Intensive technical assistance is available through federal program staff at CB and OPRE and from technical assistance contractors (Permanency Innovations Initiative Training and Technical Assistance Project & Permanency Innovations Initiative Evaluation Team, 2013).

One of the six PII grantees, the Kansas Intensive Permanency Project (KIPP), is a statewide public–private partnership between the University of Kansas School of Social Welfare, the Kansas Department for Children and Families, and Kansas’ private providers of foster care. KIPP is testing the effectiveness of an evidence-based parenting intervention on the safety, permanency, and well-being outcomes of a subpopulation of children at risk of long-term foster care (LTFC): children with serious emotional disturbance. The goal of the first stage of implementation—exploration—was to assess the match between community needs, evidence-based interventions, and community resources (Fixsen et al., 2005). (A more detailed discussion of KIPP can be found in Akin, Bryson, Testa, Blase, & McDonald, 2013). To determine need, KIPP engaged in a series of data mining activities to better identify and understand the target population, critical barriers encountered by parents, and system barriers to permanency. The activities included (1) review and analysis of administrative and program data to identify factors that place certain groups of children at risk for LTFC; (2) case reviews and data extraction to uncover family characteristics associated with LTFC; and (3) electronic informant surveys for child welfare staff, administrators, and advocates across the state to identify system barriers to permanency. Based on this input, the project team identified a list of parenting models and assessed their relevance to the selected target population. (For further information on how KIPP selected and implemented the chosen parenting intervention, see Bryson, Akin, Blase, McDonald, & Walker, 2014).

What Does Contextual Fit Look Like in Practice?

Examples of the impact that contextual fit can have on implementation are available in every discipline. Consider one intervention focused on reducing substance abuse, developed in the Midwest, that emphasized both the development of after-school community activities and family support. The intervention had been used with significant success in two midwestern states and was highly anticipated by community organizers in an urban west coast context. The intervention manual and materials were purchased; midwestern developers were hired to assist with implementation; and a series of community events, training forums for mental health professionals, and orientations for youth and families were funded. Unfortunately, there was no effort to assess whether the roles, responsibilities, and specific strategies of the intervention were valued by, and culturally comfortable for, the families, youth, or local professionals. Insufficient attention was paid to language differences and parents’ expectations. The poor match between the vision that parents in the host city had for themselves and the expectations of the intervention led to both poor-quality implementation and no change in substance abuse levels. The same intervention, implemented two years later in a southwestern urban setting, was launched only after being adapted to fit the local culture. Core features of the intervention were retained, but the process of introduction (presentations by local leaders, not external leaders) was modified to launch the program from within the community rather than as an external “initiative.” Recasting materials to fit local language and cultural norms, combined with a strong emphasis on training by local community members rather than external experts, enhanced the contextual fit of the intervention for that specific location. The result was higher implementation fidelity and valued improvement in reported rates of substance abuse among youth.

Addressing Skills/Competencies: It’s Your Game Project in South Carolina Schools

The U.S. Department of Health and Human Services’ Office of Adolescent Health (OAH) Teen Pregnancy Prevention (TPP) Program works to prevent teen pregnancy by supporting the replication of evidence-based interventions and the implementation of demonstration programs to develop and test new models and innovative strategies. In September 2010, OAH provided funding to 75 grantees to replicate medically accurate, age-appropriate, evidence-based TPP interventions that have been proven through rigorous evaluation to prevent teen pregnancy and/or associated sexual risk behaviors. The South Carolina (SC) Campaign to Prevent Teen Pregnancy (SC Campaign) received funding from OAH to replicate It’s Your Game, Keep It Real (IYG), a 2-year, middle school, evidence-based intervention shown to delay the initiation of sex, increase positive beliefs about abstinence, and decrease unprotected sex at last intercourse. The curriculum consists of 12 lessons in seventh grade and 12 lessons in eighth grade. The SC Campaign partnered with 10 SC school districts representing 24 middle schools to participate in the project, most of which selected physical education teachers or coaches to implement the IYG curriculum at school. As a condition of the agreement between the SC Campaign and participating schools, IYG facilitators were required to be trained by a certified trainer before implementing the intervention. Implementation began in 12 schools during the 2011–2012 school year and has expanded to 25 schools across the state in the current school year (2013–2014).

The SC Campaign has held a 3-day training each year since 2011 for incoming IYG facilitators. The training included a review of the curriculum’s logic model, core components, and theoretical foundation. To ensure fidelity, a lesson-by-lesson review of the curriculum also was conducted, and teach-back sessions were used to build educators’ implementation skills. Moreover, the SC Campaign included instruction on values clarification and opportunities for participants to practice answering sensitive questions to better prepare them for implementing a reproductive health curriculum. The literature has shown that training is necessary but not sufficient for quality implementation; instead, training should be supplemented with site-specific, customized technical assistance (TA). As a result, SC Campaign staff provide IYG facilitators with continuous, customized TA that addresses site-specific needs and is informed by data on fidelity and implementation quality collected from facilitators and from independent observations of the intervention. The ability to identify facilitator needs through fidelity data is important because facilitators with lower capacity are less likely to request TA or assistance. Because of this data-informed TA, SC Campaign staff could provide strategies to address issues in real time, preventing further threats to implementation. Long-term responses to challenges to implementation fidelity, such as trainings and webinars, were also developed based on site needs derived from data collected from facilitators and independent observers. (For further information on how the SC Campaign used real-time fidelity data to inform TA, see Kershner et al., 2014.)

How Should Contextual Fit Be Measured?

One reason contextual fit has received limited attention is that there is no accepted approach for measuring it. Horner and colleagues (2003) provide one possible approach in their 16-item assessment of contextual fit (each item is rated on a 6-point Likert-like scale).2 Although this self-assessment has been used in studies assessing the contextual fit of behavior support plans in school, home, and community settings, and the resulting outcome score has been correlated with fidelity of implementation (e.g., Rodriguez, Loman, & Horner, 2009; Sandler et al., 2003; Smith, 2013), it has not been extended to studies or interventions outside of education. Currently, no contextual fit measure with documented psychometric properties can be used to evaluate the implementation of a broad range of evidence-based interventions across educational, mental health, juvenile justice, and community contexts.
 
To establish a useful measure of contextual fit, there must be agreement on the core elements of contextual fit. Then, these elements need to be included in a standard measurement protocol that can provide a “total contextual fit score” and scores about each element of contextual fit. Demonstrating the content validity of such a measure would then have to be combined with demonstrations of reliability and internal validity (Algozzine, Newton, Horner, Todd, & Algozzine, 2012).
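To make the proposed measurement protocol concrete, the sketch below shows one way per-element subscores and a “total contextual fit score” could be computed from item-level ratings. It assumes, purely for illustration, a survey in which each item is rated on a 6-point scale and mapped to one of the elements of contextual fit; the items, ratings, and item-to-element mapping shown are fabricated and do not reproduce the Horner et al. (2003) instrument.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical data: (element, rating on a 1-6 scale), one tuple per item.
# A real protocol would draw items from an agreed-upon, validated measure.
item_ratings = [
    ("Need", 6), ("Need", 5),
    ("Precision", 3), ("Precision", 4),
    ("Efficiency", 5),
    ("Skills/competencies", 2), ("Skills/competencies", 3),
    ("Cultural relevance", 6),
    ("Resources", 4),
    ("Administrative and organizational support", 5),
]

# Per-element subscores: the mean rating of the items mapped to each element.
by_element = defaultdict(list)
for element, rating in item_ratings:
    by_element[element].append(rating)
subscores = {element: mean(ratings) for element, ratings in by_element.items()}

# Total contextual fit score: here, simply the mean of all item ratings.
total_score = mean(rating for _, rating in item_ratings)

for element, score in sorted(subscores.items()):
    print(f"{element}: {score:.2f}")
print(f"Total contextual fit score: {total_score:.2f}")
```

Subscores of this kind would let a team see not only whether overall fit is weak but also which element (skills/competencies, in this fabricated example) is driving the weakness.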

 

What Are the Policy Implications of Contextual Fit?

Increasingly, federal, state, and local agencies are focused on improving the implementation of evidence-based interventions (Spencer et al., 2012). However, existing implementation science models do not fully consider the realities and constraints imposed by federal grant award and management processes. For example, organizations that seek federal grants must respond to Funding Opportunity Announcements (FOAs) within short time frames and adapt proposals to respond to problems or issues that have already been identified by the federal government. In contrast, many implementation science models assume a community-driven planning and conceptualization process in which local groups of concerned persons or organizations identify a problem, build commitment to address the problem, identify the best evidence-based interventions for solving the problem, and then find the resources to refine and implement the interventions selected. The implementation time frames of discretionary grant projects are further constrained by the limited duration of grants, which typically last 3 to 5 years and sometimes as little as 17 months. These factors all affect the speed and trajectory of the entire implementation process for a discretionary grant program, from initial startup to long-term intervention adaptation and sustainability.

To address these limitations, we offer three policy considerations. First, policymakers could incorporate contextual fit criteria into FOAs to facilitate the preparation and selection of grantees. The elements of contextual fit should be clearly defined, and grantees should be evaluated on their plans for assessing each element. Because the assessment of contextual fit is multifaceted, longer planning periods may be necessary to ensure the successful selection, adoption, implementation, and sustainability of grantee interventions.

Second, because longer planning periods within a 3- or 5-year grant may not be feasible, policymakers might also consider developing a series of FOAs, beginning with planning grants that assess contextual fit and tie it to implementation readiness, so that grantees can build their infrastructure and capacity for implementing evidence-based interventions over time. An example is the series of cooperative agreements and grants offered by the Center for Substance Abuse Prevention (CSAP; see http://www.samhsa.gov/grants/2009/sp_09_001.aspx). The funding opportunities began with developing and implementing comprehensive needs assessments to estimate the prevalence of substance abuse among youth in target communities. Based on these data, the target communities developed plans to build collaboration and capacity for substance abuse prevention efforts across service systems. CSAP then funded an effort focused on building core measures for intermediate and distal substance abuse outcomes across a limited number of states. The next opportunity was the State Incentive Grants (SIGs), which funded the statewide implementation of evidence-based interventions. The next generation of SIG grants was the Strategic Prevention Framework grants (SPF-SIGs), culminating in the current SPF-Partnerships for Success (PFS) grants. Through these funding opportunities, provided over a 14-year period, CSAP has supported states and communities in developing the infrastructure and capacity to implement evidence-based interventions.

Third, policymakers should investigate the kinds of changes needed in the organization of federal, state, and regional TA efforts to help grantees determine contextual fit. Assessing any single element of contextual fit can be intensive and time consuming, as the KIPP and IYG examples above illustrate. Assistance for implementing new interventions should be tied to establishing a contextual foundation in which implementation will be both efficient and effective. The U.S. Department of Education’s current emphasis on TA for “multi-tiered systems of support” is a good example. Not only are interventions defined with multiple tiers of intensity, but the TA available to schools and communities is organized around multiple tiers of TA intensity. Some schools and communities will need more training, more coaching, and more organizational support to adopt new interventions. Intervention intensity should match the needs of the individual, and TA intensity should match the needs of the host organization. A major step toward advancing these multi-tiered efforts will be the incorporation of contextual fit measures at both the intervention and TA levels.

What Are the Research Implications of Contextual Fit?

For contextual fit to assume the role it is touted to fill in implementation science, a concerted effort is needed to build a solid empirical foundation. Three initial steps for future research are needed: (a) developing technically adequate measures of contextual fit, (b) documenting the role of contextual fit in the effectiveness and efficiency of implementation, and (c) determining the extent to which questions of contextual fit can be used to assess readiness for implementation. The first step is to develop technically adequate measures of contextual fit. Contextual fit needs to be defined with operational precision. The field needs to agree on the elements of contextual fit that allow strong content validity measures, which must be demonstrated to be both valid and reliable. The challenges related to assessing perceptions need to be addressed, and the field requires multiple approaches for systematically measuring the degree of fit that an intervention has within a specified setting.

The second line of research involves documenting the role of contextual fit in the effectiveness and efficiency of implementation. We propose that contextual fit improves not only the likelihood that an intervention will be adopted with fidelity, but also the efficiency (time, money, personnel) with which adoption is achieved. These are compelling claims, but to date they are claims based more on theory than on documented evidence. Once we have valid, reliable measures of contextual fit, the field will be open to systematic studies (both correlational and experimental) that assess the role of contextual fit in implementation.

A related line of research will focus on the extent to which questions of contextual fit may be used to assess “readiness for implementation.” If intervention implementation is delayed until the “exploration” process indicates there is a good match between the intervention and the setting, is the intervention more likely to be implemented? And is investment in building fit before investing in active implementation cost-effective? These and related questions need to be assessed in formal studies.

Summary

Contextual fit is a construct that has gained increased attention from those who implement evidence-based interventions across education and human services domains. Contextual fit is based on the premise that the match between an intervention and local context affects both the quality of the intervention implemented and whether the intervention actually produces the outcomes desired for the children and families receiving the intervention. Although contextual fit is not new, an operational definition, formal measures, and systematic research that guides both policy and practice are needed before assessing the fit of evidence-based interventions for a particular context can become common practice. We encourage current implementers to incorporate efforts to assess and adapt contextual fit into the interventions they intend to adopt. More importantly, we encourage the formal development of measurement technology and experimental studies that can further define the role of contextual fit in implementation science.

References

Aarons, G. A., Hurlburt, M., & Hurwitz, S. M. (2011). Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health, 38, 4–23.

Akin, B., Bryson, S., Testa, M., Blase, K., & McDonald, T. (2013). Usability testing, initial implementation, and formative evaluation of an evidence-based intervention: Lessons from a demonstration project to reduce long-term foster care. Evaluation and Program Planning, 41, 19–30.

Algozzine, B., Newton, J. S., Horner, R. H., Todd, A. W., & Algozzine, K. (2012). Development and technical characteristics of a team decision-making assessment tool decision observation, recording, and analysis (DORA). Journal of Psychoeducational Assessment, 30(3), 237–249.

Blase, K., & Fixsen, D. (2013). Core intervention components: Identifying and operationalizing what makes programs work (ASPE Research Brief). Washington, DC: U.S. Department of Health and Human Services.

Blase, K., Kiser, L., & Van Dyke, M. (2013). The Hexagon Tool: Exploring context. Chapel Hill, NC: National Implementation Research Network, FPG Child Development Institute, University of North Carolina at Chapel Hill.

Bryson, S. A., Akin, B. A., Blase, K. A., McDonald, T., & Walker, S. (2014). Selecting an EBP to reduce long-term foster care: Lessons from a university–child welfare agency partnership. Journal of Evidence-Based Social Work, 11(1–2), 208–221.

Center for Substance Abuse Prevention. (2009). Identifying and selecting evidence-based interventions: Revised guidance document for the Strategic Prevention Framework State Incentive Grant Program. HHS Pub. No. (SMA)09-4205. Rockville, MD: Center for Substance Abuse Prevention, Substance Abuse and Mental Health Services Administration.

Cook, B. G., Tankersley, M., & Landrum, T. J. (2009). Determining evidence-based practices in special education. Exceptional Children, 75, 365–383.

Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(1), 50–65.

Damschroder, L. J., & Hagedorn, H. J. (2011). A guiding framework and approach for implementation research in substance use disorders treatment. Psychology of Addictive Behaviors, 25(2), 194–205.

Dunst, C. J., Trivette, C. M., & Cutspec, P. A. (2002). Toward an operational definition of evidence-based practices. Centerscope, 1(1), 1–10.

Dymnicki, A., Wandersman, A., Osher, D., Grigorescu, V., Huang, L., & Meyer, A. (2014). Willing, able → ready: Basics and policy implications of readiness as a key component for implementation of evidence-based practices (ASPE Issue Brief). Washington, DC: U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, Office of Human Services Policy.

Fixsen, D. L., Blase, K. A., Duda, M. A., Naoom, S. F., Van Dyke, M., Weisz, J. R., et al. (2010). Implementation of evidence-based treatments for children and adolescents: Research findings and their implications for the future. In J. Weisz & A. Kazdin (Eds.), Implementation and dissemination: Extending treatments to new populations and new settings (2nd ed.) (pp. 435–450). New York, NY: Guilford.

Fixsen, D. L., Blase, K. A., Naoom, S. F., & Wallace, F. (2009). Core implementation components. Research on Social Work Practice, 19(5), 531–540.

Fixsen, D. L., Naoom, S. F., Blase, K. A., Friedman, R. M., & Wallace, F. (2005). Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, National Implementation Research Network. (FMHI Publication No. 231).

Flay, B. R., Biglan, A., Boruch, R. F., Castro, F. G., Gottfredson, D., Kellam, S., et al. (2005). Standards of evidence: Criteria for efficacy, effectiveness and dissemination. Prevention Science, 6, 151–175.

Glisson, C., & Schoenwald, S. K. (2005). The ARC organizational and community intervention strategy for implementing evidence-based children's mental health treatments. Mental Health Services Research, 7(4), 243–259.

Horner, R. H., Kincaid, D., Sugai, G., Lewis, T., Eber, L., Barrett, S., et al. (2013). Scaling up School-wide Positive Behavioral Interventions and Supports: The experiences of seven states with documented success. Journal of Positive Behavior Interventions.

Horner, R. H., Sugai, G., & Anderson, C. M. (2010). Examining the evidence-base for school-wide positive behavior support. Focus on Exceptional Children, 42(8), 1–14.

 
Kershner, S., Flynn, S., Prince, M., Potter, S. C., Craft, L., & Alton, F. (2014). Using data to improve fidelity when implementing evidence-based programs. Journal of Adolescent Health, 54(3), S29–S36.
 
McIntosh, K., Mercer, S., Hume, A., Frank, J., Turri, M., & Mathews, M. (2013). Factors related to sustained implementation of schoolwide Positive Behavior Support. Exceptional Children, 79(3), 293–311.
 
Permanency Innovations Initiative Training and Technical Assistance Project & Permanency Innovations Initiative Evaluation Team. (2013). The PII approach: Building implementation and evaluation capacity in child welfare (Rev. ed). Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Children’s Bureau, and Office of Planning, Research and Evaluation.
 
Rodriguez, B. J., Loman, S. L., & Horner, R. H. (2009). A preliminary analysis of the effects of coaching feedback on teacher implementation fidelity of First Step to Success. Behavior Analysis in Practice, 2(2), 3–11.
 
Sandler, L., Albin, R. W., Horner, R. H., & Yovanoff, P. (2003). Contextual fit and the viability of behavior support plans. Unpublished manuscript, Educational and Community Supports, University of Oregon.
 
Smith, T. (2013). What is evidence-based behavior analysis? The Behavior Analyst, 36(1), 7–33.
 
Spencer, T. D., Detrich, R., & Slocum, T. A. (2012). Evidence-based practice: A framework for making effective decisions. Education and Treatment of Children, 35(2), 127–152.