About This Research Brief
As part of this contract, three research briefs have been developed that focus on critical implementation considerations.
Office of the Assistant Secretary for Planning and Evaluation
Office of Human Services Policy
U.S. Department of Health and Human Services
Washington, DC 20201
Introduction
In April 2011, the U.S. Department of Health and Human Services' (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE) hosted a Forum, Emphasizing Evidence-Based Programs for Children and Youth, to convene the nation's leading practitioners and researchers with experience using and evaluating an array of evidence-based programs. During the Forum, experts discussed challenges encountered when selecting and replicating evidence-based programs (EBPs) and identified approaches for developing evidence-informed programs when EBPs are not available or applicable for a given population. This brief introduces key themes that emerged from the discussion. The remaining briefs in the series document the importance of implementation and provide guidance on ensuring quality program implementation, identify strategies for determining a program's core components, and explore techniques that can be used to inform the development of new social programs.
Research evidence is important for informing decisions by agencies, communities, and funders about investments in education, health care, and social services. Investing in programs that are likely to produce significant social or health outcomes is vital. Several initiatives across the Department of Health and Human Services, the Department of Education, and the Corporation for National and Community Service have made significant investments in identifying and encouraging the replication of evidence-based programs. There have also been substantial investments in expanding the evidence base by supporting the development of evidence-informed approaches that are innovative or have not yet been tested.
The emphasis on evidence represents only an initial step toward ensuring that programs that have demonstrated positive impacts on critical outcomes such as child maltreatment, school readiness, teen pregnancy, and delinquency are adequately funded and brought to scale. Choosing to implement a program supported by evidence is just one piece of the puzzle; in addition to investing in what works, equal consideration needs to be paid to investing in the supportive factors that make the program work. Less attention is often paid to issues such as high-quality program implementation or ensuring that the essential components of evidence-based programs are delivered to participants. Recent research indicates that evidence-based programs are substantially less effective when they are implemented poorly or when their essential components are not delivered.
There are also steps that should be considered as new interventions are developed for populations for which suitable evidence-based programs do not exist. Theory, research, and practice are all useful tools that can inform the development of new programs. However, there is little guidance about how researchers can use this information when developing new approaches.
Despite the increasing availability of information that identifies evidence-based programs, a large gap remains between what research has shown about program outcomes and what is known about the key mechanisms that can facilitate or inhibit program implementation. To address this gap and raise key implementation considerations that are critical to the success or failure of these programs, ASPE issued a contract to Child Trends to examine challenges that stakeholders face when they choose to replicate an evidence-based program or develop an evidence-informed strategy.
Current Project
This project has two broad areas of focus: (1) supporting the replication of evidence-based programs and (2) providing the field with the latest thinking on strategies that can be used to develop evidence-informed approaches. For over two years, ASPE and Child Trends have worked with leading experts to determine the major gaps in knowledge around the replication and scale-up of evidence-based programs for children and youth. The project began by assembling a core group of five experts in the topical areas of program replication, scale-up, and adaptation. The group also represented the perspectives of state and local program implementers, researchers, and program developers.
With guidance from the group about pressing topics in the field, ASPE and Child Trends convened more than 80 researchers and practitioners who have grappled at many levels with the work of evaluating and implementing an array of programs. These experts varied in background and area of expertise, spanning human services, social science research, education, mental health, and research methodology. Participants represented foundations, universities, state and Federal agencies, social programs, and research firms. The Forum focused on this research-practice gap to examine factors that encourage or inhibit EBP implementation, replication, and scale-up.
This brief summarizes themes that emerged from the Forum discussions and introduces the primary topics addressed during the Forum. These topics have since been developed further as issue briefs.
Highlights from the Forum
Each Forum session focused on one of four aspects related to evidence-based programs (EBPs): selecting EBPs for replication; replicating EBPs; scaling up EBPs; and implementing evidence-informed and innovative strategies.
During the first session, panelists and participants considered the selection process and criteria for choosing EBPs for replication. Lessons learned from the Maternal, Infant, and Early Childhood Home Visiting Program (managed by the Health Resources and Services Administration and the Administration for Children and Families), as well as from state intermediaries working to encourage the implementation of EBPs, were shared. The panelists discussed several principal considerations when selecting an EBP. For example, needs assessments were noted as an important tool to use when selecting evidence-based programs. Assessments can help staff determine which problems (such as teen pregnancy or repeat pregnancy rates) are most pressing and ensure that the program selected aligns with the problems that the community seeks to solve. Participants also noted the critically important role that leadership (such as that provided by program directors or advocates) plays in guiding the EBP selection process and building political support for the program selected. The conversation then turned to assessing program readiness to determine whether a program is ready for implementation in the current environment, including adequate staffing, training and materials, community support, and funding. If a program is not ready for replication or the conditions are not suitable for the program, implementation may not succeed.
The next session focused on the need to maintain the fidelity of EBPs during replication while at the same time adapting programs to fit the situation at hand. Panelists and participants discussed particular factors that should be considered when identifying appropriate adaptations for diverse populations. Examples ranged from Federal efforts to replicate evidence-based teen pregnancy prevention programs to the Boys & Girls Clubs. Participants discussed the relationship between fidelity and adaptation in program implementation. In particular, they noted that the two need not be in tension and that each is critically important to successful replication.
The first day concluded with a discussion of how EBPs can be "scaled up" and the specific steps for doing so. Panelists presented Federal and program perspectives on taking EBPs to scale and the importance of ensuring quality and accountability in programs as they expand. The panelists discussed the different effect sizes found for EBPs, and the discussion turned to program quality and the implementation strategy needed to expand EBPs. To expand a program effectively, an initiative needs critical infrastructure such as leadership within and across agencies to implement the program, a target population of potential consumers, adequate resources, and revised policies and practices to support the EBP implementation strategy. Participants also identified numerous challenges faced in replicating programs and/or taking them to scale. They cited staff turnover, which not only requires training for new staff but may also require that new leadership be persuaded to support the program. Resource constraints may also inhibit the fidelity with which a program can be replicated. For example, resources can affect a variety of decisions, from how many staff can be supported to carry out the project to whether some components must be dropped. These decisions can greatly inhibit successful replication.
The second day of the Forum focused on how to use research and evidence in developing innovative interventions for children and youth in the absence of rigorous, gold-standard evaluations. Several presentations from the child welfare community set the stage for the discussion and were followed by a technical presentation on innovation and creating solutions for new settings, and a talk on developing evidence-informed strategies. Evidence-based programs can be very helpful in improving the potential effectiveness of social programs, but the number of EBPs available for replication is small relative to the need for social service programs. In some program areas, such as the child welfare system, there are very few EBPs to replicate. In other cases, programs may not be suitable for the population that needs services. Participants at the Forum suggested several strategies for incorporating evidence into program development when evidence-based programs are not available. For example, meta-analysis can be useful in identifying the individual factors (such as past behaviors, psychological characteristics, and family features) most strongly associated with negative or positive outcomes. This knowledge can be helpful in determining which aspects of previous interventions to incorporate into a new initiative. Participants also noted that researcher-practitioner partnerships may be very beneficial, as researchers can synthesize information from the literature about what types of behaviors should be targeted and how. Practitioners, in turn, can share their experience and provide insights from working with individuals and families in the community, which researchers can then investigate and communicate more broadly.
Emerging Issues
From the Forum discussions, several important implementation-related themes emerged that may guide the EBP field as it continues to evolve. These themes form the basis for the next three papers in this series.
Identifying Core Components - In the brief, Core Intervention Components: Identifying and Operationalizing What Makes Programs Work, Karen Blase and Dean Fixsen focus on determining the "core components" of evidence-based and evidence-informed interventions that are critical to producing positive outcomes. The brief defines "core components," discusses the processes for identifying and validating them, and explains the reasons for operationalizing and testing them. The brief also explores implications for selecting and implementing programs and for grant making, funding, research, and evaluation.
The Importance of Program Implementation - Program implementation is critical for obtaining intended outcomes and thus is relevant for practitioners, researchers, and policy makers. The brief, The Importance of Implementation for Research, Practice and Policy, by Joseph A. Durlak, reviews program implementation approaches and how to evaluate them effectively. Durlak defines program implementation and discusses why it is important, what factors affect implementation, who has responsibility for implementation, and how implementation should be addressed, summarizing the steps involved in the implementation process. He explores ways to adapt evidence-based programs and concludes with practical lessons that have been learned about implementation through systematic research and practice. The terms program and intervention are used interchangeably to refer to a planned set of activities introduced into a new setting to assist youth and their families in various ways.
Using Evidence to Inform Development of New Interventions - In the third brief in the series, Best Intentions are not Enough: A Systematic Approach to Developing New Evidence-Informed Prevention Programs, Dennis Embry, Mark Lipsey, Kristen Moore, and Diana McCallum discuss methods for using evidence to address social or behavioral problems. Specifically, Embry describes how to identify and use evidence to inform program development, adaptation, and innovation. The brief also draws on Mark Lipsey's presentation at the Forum, where he detailed methods for using syntheses of research to develop "evidence-informed" interventions.
Conclusions
The Forum and issue briefs reflect a rapidly increasing interest in developing and replicating evidence-based programs. The discussion also acknowledged that this interest has outpaced the dissemination, to varied audiences including policy makers, funders, and practitioners, of the critical knowledge that can inform program adaptation, fidelity, replication, and the development of evidence-informed programs. Information from the Forum and these briefs is intended to bring the experience and wisdom of researchers and others working on these issues to a wider audience, with the goal of enhancing both the effectiveness of intervention efforts and the wider use of effective programs.
Additional Resources
Slavin, R.E., & Madden, N.A. (2007). Scaling up Success for All: The first sixteen years. In B. Schneider & S. McDonald (Eds.), Scale-up in education (pp. 201-228). Lanham, MD: Rowman & Littlefield. http://www.eric.ed.gov/PDFS/ED483812.pdf
Embry, D.D., & Biglan, A. (2008). Evidence-based kernels: Fundamental units of behavioral influence. Clinical Child and Family Psychology Review, 11, 75-113. http://cle.osu.edu/familycivicengagement/documents/references-and-guides/downloads/evidence-based-kernels-embry.pdf
Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., Blachman, M., Dunville, R., & Saul, J. (2008). Bridging the gap between prevention research and practice: The interactive systems framework for dissemination and implementation. American Journal of Community Psychology, 41, 171-181. http://prevention.psu.edu/documents/ajcpisf2008wandersmanetal.pdf
Fixsen, D.L., Blase, K.A., Horner, R., & Sugai, G. (2009). Readiness for change. Chapel Hill, NC: FPG Child Development Institute. http://www.fpg.unc.edu/~sisep/docs/SISEP_Brief_3_Readiness_2009.pdf
Mihalic, S.F., & Irwin, K. (2003). Blueprints for violence prevention: From research to real-world settings—Factors influencing the successful replication of model programs. Youth Violence and Juvenile Justice, 1, 1-23. http://www.colorado.edu/cspv/blueprints/articles/MihalicIrwin_Article.pdf