Best Intentions are Not Enough: Techniques for Using Research and Data to Develop New Evidence-Informed Prevention Programs

By: Dennis D. Embry, Mark Lipsey, Kristin Anderson Moore, & Diana F. McCallum
 
This brief is part of a series that explores key implementation considerations important to consider when replicating evidence-based programs for children and youth. This brief describes ways research and data may be used to inform the steps involved in developing new or modified prevention programs. At each step, the goal is to draw on data, research, and evaluation findings, as well as theory and the experience of the practice community, to inform the development of strategies that improve outcomes. This is one of four research briefs prepared under the auspices of an ASPE contract entitled Emphasizing Evidence-Based Programs for Children and Youth: An Examination of Policy Issues and Practice Dilemmas Across Federal Initiatives.
"

ABOUT THIS RESEARCH BRIEF

This research brief was written by Dennis D. Embry, Ph.D., Mark Lipsey, Ph.D., Kristin Anderson Moore, Ph.D., & Diana F. McCallum, Ph.D.

In 2010, ASPE awarded Child Trends a contract for the project "Emphasizing Evidence-Based Programs for Children and Youth: An Examination of Policy Issues and Practice Dilemmas Across Federal Initiatives." This contract was designed to assemble the latest thinking and knowledge on implementing evidence-based programs and developing evidence-informed approaches. This project has explored the challenges confronting stakeholders involved in the replication and scale-up of evidence-based programs and the issues around implementing evidence-informed strategies. Staff from ASPE's Division of Children and Youth Policy oversaw the project.

As part of this contract, three research briefs have been developed that focus on critical implementation considerations.

Office of the Assistant Secretary for Planning and Evaluation
Office of Human Services Policy
U.S. Department of Health and Human Services
Washington, DC 20201

Executive Summary

Despite increased attention to the value of implementing evidence-based programs, there are many issues for which no programs have been proven effective. Evidence-informed prevention strategies can be used to develop effective programs to fill these gaps. This brief describes ways research and data may be used to inform the steps involved in developing new or modified prevention programs. At each step, the goal is to draw on data, research, and evaluation findings, as well as theory and the experience of the practice community, to inform the development of strategies that improve outcomes.

Defining the Problem. At this initial stage, data can be used to assess trends and identify the outcome(s) the program will address, as well as the communities or the social, economic or demographic groups where the issue is most prevalent.

Identifying Relevant Risk, Protective and Promotive Factors. Once the target problem has been established, the varied factors that affect this outcome can be identified. These include behaviors, knowledge, values, goals, or attitudes that affect the likelihood that the target outcome will occur. Meta-analysis represents a useful way to identify factors that affect the targeted outcome(s). Developmental theory, the knowledge of experienced practitioners and community members, and data from longitudinal studies can also help identify relevant factors.

Selecting Strategies Most Likely to Influence Targeted Risk, Protective and Promotive Factors. Several types of evidence can also inform the choice of strategies to influence risk, protective, and promotive factors. Meta-analysis is highlighted at this stage as well, but identification of research-based kernels can also identify ways to modify these factors. These kernels (Embry & Biglan, 2008) are proven small units of behavioral influence that can be used to create new solutions to persistent or novel problems of human wellbeing or to construct adaptations of existing proven programs. Additional ways to identify factors for interventions include analysis of data from longitudinal studies and consultation with practitioners, clients, and other stakeholders. Strategies that come to the fore based on varied types of evidence warrant particular attention. It is necessary, though, to identify factors that are malleable, that have large enough effects to bring about the desired change, and that are cost-effective.

Assembling Your Intervention Using a Logic Model. While varied approaches to organizing information are feasible, logic models are an important tool to increase clarity about how the program developer expects targeted outcomes to be achieved. The logic model is developed by first specifying the targeted outcome(s); then the risk, protective and promotive factors that affect the outcome are identified; and then the strategies, approaches and activities that affect those factors can be depicted. Developing a logic model can force program designers and stakeholders to be explicit about their assumptions and confirm that the elements of the model are likely to produce the desired change in the target outcome. It illustrates how the intervention is hypothesized to produce the intended results. However, the elements of the model still need to be tested.

Testing the Elements of Your Evidence-Informed Program. Once a programmatic approach is developed, data from performance management systems, observations, implementation evaluation, and behavior analysis can be examined to assure that the intervention(s) can be implemented as designed and achieve their intended results. Sufficient time needs to be allocated to this step, because iterative efforts will be required to examine the evidence to see whether the strategies lead to the desired outcomes.

It is possible to improve the likelihood of a program's success by building more consistently on several types of existing knowledge bases and combining effective components in thoughtful ways to address new problems or new populations. Triangulating across information from research and evaluation, including meta-analysis and research-based kernels, as well as developmental theory, longitudinal and other research, and the wisdom of experienced practitioners, can inform the development of programs that are more effective at achieving the outcomes desired for children, youth, and families. This is not a quick or easy endeavor; but investing the time and effort necessary to develop evidence-informed interventions should result in more effective programs and thus better outcomes for children and youth.

This Research Brief is one of four presenting material developed under a research project titled Emphasizing Evidence-Based Programs for Children and Youth: An Examination of Policy Issues and Practice Dilemmas Across Federal Initiatives. Others in the series include:

  • Key Implementation Considerations for Executing Evidence-Based Programs
  • Core Intervention Components: Identifying and Operationalizing What Makes Programs Work
  • The Importance of Implementation for Research, Practice and Policy

Purpose

In an era of scarce programmatic resources, funders of social services and savvy program operators are increasingly seeking effective, evidence-based interventions to address their agencies' missions and improve outcomes for children, youth and families. Though many evidence-based programs have been identified, there are still issues for which intervention programs are lacking, and there are subpopulations with unique needs that require new or modified interventions. In addition, over time new issues arise for which interventions are needed. To address these additional needs, federal, local, and state governments, foundations, nonprofit organizations, and researchers invest significant resources to develop new programs. However, little guidance exists for the field about rigorous, systematic approaches to developing innovative or promising programs.

This issue brief describes ways of using research evidence and data to inform the development and testing of new evidence-informed interventions, highlighting strategies that can be useful at each stage of program development. These approaches draw on accumulated research and evaluation knowledge, as well as social science theory and the expertise of practitioners. This brief particularly highlights the use of meta-analysis and "kernels" to identify research-based components and practices to incorporate into new programs. Meta-analysis is a technique to synthesize the results of many studies on a topic, while kernels are program elements or practices that have been shown in research to have behavioral impacts and that can be re-combined in the development of new interventions. The brief also suggests how a logic model can help organize this information and guide program development, testing, and revision. However, we only highlight the important considerations in this brief; the appendix provides a resource list suggesting where to obtain additional detail on how to pursue the strategies described.

A Systematic Approach to Developing New Evidence-Informed Prevention Programs

In the absence of evidence-based interventions, and often even when evidence-based approaches exist, program operators frequently rely primarily on their personal experiences and good intentions without careful consideration of related research evidence. While past experience is valuable, ignoring existing evidence and developmental theory can lead to missed opportunities, unintended results, and inefficient progress. To advance the field of prevention, program developers need to build on the successes and mistakes of past efforts to promote positive outcomes for children and youth.

This brief describes ways of using research evidence and data to inform five critical steps involved in developing new prevention programs. This section will outline these steps briefly and identify techniques through which research and data may be utilized at each stage. Subsequent sections will describe the techniques in more detail. References for further reading appear in an appendix.

Defining the Problem. The critical initial step in developing a new program is to identify the outcome the program is being designed to prevent, such as insufficient school readiness, teen pregnancy, delinquency, or substance use. This decision process, sometimes referred to as a needs assessment, might be based on trend data that depict an increased incidence of a problem, information that a particular problem is acute in a community or population group, or evidence that a new problem has emerged or been recognized.

Identifying Relevant Risk, Protective and Promotive Factors. Few problems are completely unrelated to anything that has ever been seen or researched before. Once the target problem has been defined, then the varied risk, protective, and promotive factors (National Research Council, 2009) that affect this outcome can be identified. These factors are behaviors, knowledge, values, goals, or attitudes that precede the outcome we seek to change and influence the likelihood that it will occur. Risk factors are related to an increased likelihood of a negative outcome and protective factors reduce the likelihood of a negative outcome, while promotive factors raise the likelihood of a positive outcome. To the extent that these relationships are causal, targeting risk, protective, or promotive factors and diminishing or enhancing them appropriately is an established strategy in prevention science. We highlight meta-analysis as a particularly useful source of reliable information about pertinent risk, protective and promotive factors associated with the problem to be addressed. Focusing on key factors sets the stage for strategy selection and the development of specific interventions or combinations of interventions.

Selecting Strategies Most Likely to Influence Targeted Risk, Protective and Promotive Factors. Once key factors are identified, program developers need to determine what research has to say about the strategies available to influence those factors. We discuss several approaches to identifying strategies, including: kernels; analysis of data from longitudinal studies; and consultation with practitioners, clients, and other stakeholders, as well as meta-analysis.

Assembling Your Intervention Using a Logic Model. This process involves selecting program elements based on the previous step and assembling them into a coherent programmatic approach. A logic model is a tool that forces clarity about how the program developer envisions that key outcomes will be achieved through selected program components and how success will be demonstrated.

Testing the Elements of Your Evidence-Informed Program. Once a programmatic approach is developed, rigorous assessment is the key to making sure the intervention(s) achieve their intended results. This step will be iterative as initial evidence leads to programmatic improvements which are then further tested for improved efficacy.

Defining the Problem

Communities and service providers typically are able to articulate their pressing problems. But bringing data to the table allows the program developer to demonstrate the magnitude of the issue, compare their community to others with respect to prevalence and consequences, identify subgroups and geographic areas where the issue is most acute, and make the case to funders as to why intervention is necessary.

A good needs assessment provides the foundation for program development. Trend data may suggest that some issues are getting worse over time, for example, obesity, crime, or substance use. Cross-sectional data can also suggest which problems are elevated and which age groups or population groups are most likely to experience the problem. In addition, since many interventions represent programs that are located in a particular neighborhood, district, or catchment area, it is helpful to have data to identify the geographic areas of highest concentration and unmet need. Relevant data can be obtained from special data collection efforts or from available public records, such as child welfare, crime, education, and health data systems, including birth and death records. It should be recognized, however, that some forms of official data under-represent prevalence. For instance, most federal and state child maltreatment data include only cases reported to child protective services agencies. Data relying on contact with social services agencies especially under-represent the prevalence of social problems in non-poor neighborhoods (Theodore, et al., 2005).
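To make this concrete, below is a minimal sketch of one such needs-assessment calculation: converting record-level counts into rates per 1,000 youth by neighborhood so that areas of different sizes can be compared. The column names, records, and population figures are hypothetical, not drawn from any real data system.

```python
# A minimal sketch, assuming hypothetical incident-level records exported
# from a public data system (e.g., child welfare or school records).
import pandas as pd

incidents = pd.DataFrame({
    "neighborhood": ["A", "A", "B", "B", "B", "C"],
    "age_group":    ["0-5", "6-12", "0-5", "6-12", "6-12", "0-5"],
})
population = pd.DataFrame({
    "neighborhood": ["A", "B", "C"],
    "youth_pop":    [1200, 800, 2500],
})

# Count incidents per neighborhood and convert to a rate per 1,000 youth,
# which makes areas of different sizes comparable for a needs assessment.
counts = incidents.groupby("neighborhood").size().rename("n_incidents")
rates = population.set_index("neighborhood").join(counts).fillna(0)
rates["per_1000"] = 1000 * rates["n_incidents"] / rates["youth_pop"]
print(rates.sort_values("per_1000", ascending=False))
```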

Ideally, data should be examined for the community to be served, to identify those issues that are particular concerns for the community in question, whether or not they represent problems for the country or the state or city as a whole. For example, Communities that Care fields an in-depth Youth Survey of risk and protective factors completed by students in schools (Arthur, Hawkins et al., 2002; Hawkins, Catalano & Kuklinski, 2011). Results, augmented by archival data, are compiled into a community portrait and shared with community representatives and officials, asking them to select the issues that pose problems for children and youth in their community. This process identifies risk and protective factors at the local level and helps assure that intervention efforts target issues that are problematic for a community and about which there is a perception that action is needed.

Identifying Relevant Risk, Protective, and Promotive Factors

Many types of information can be used to identify risk, protective and promotive factors that could be targets for interventions. Here we highlight one research-based approach – meta-analysis – and briefly describe several additional information sources including developmental theory and the knowledge of experienced practitioners.

Meta-analysis is a useful technique both for identifying risk factors associated with a problem and for systematically isolating those that can be influenced through targeted interventions. Work by Mark Lipsey illustrates the use of meta-analysis to inform the process of identifying targets for change. Figure 1 shows results from Lipsey's (2011) meta-analysis examining predictors of adolescent antisocial behavior.

Figure 1: Risk and Promotive Factors at Age 10 Predicting Antisocial Behavior at Age 16

Source: Lipsey, M.W. (2011, April). Using research synthesis to develop "evidence-informed" interventions. Paper presented at the Emphasizing Evidence Based Programs for Children and Youth Forum, Washington, DC.
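To illustrate the arithmetic that underlies such a synthesis, the following is a minimal sketch of fixed-effect, inverse-variance pooling, the core computation in many meta-analyses. The effect sizes and standard errors are hypothetical and are not Lipsey's data.

```python
# A minimal sketch of inverse-variance (fixed-effect) pooling across studies.
import math

# (effect size, standard error) from several hypothetical studies of the
# association between a risk factor and a later outcome
studies = [(0.25, 0.08), (0.31, 0.12), (0.18, 0.06), (0.40, 0.15)]

weights = [1 / se**2 for _, se in studies]  # precision weights: 1 / SE^2
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
```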

Another source for identifying key risk, protective and promotive factors in order to design an evidence-informed intervention is developmental theory. An ecological perspective on human development, for instance, highlights the importance of varied contexts for the development of children and youth (Bronfenbrenner, 1979). Self-efficacy theories highlight elements related to persistence in the face of specific challenges (Bandura, 1982), such as the academic self-efficacy beliefs that are related to academic accomplishment. A recent review of parenting programs found that many interventions aimed toward improving child and adolescent outcomes identify social learning theory (27%) or cognitive behavioral theory (26%) as the foundation for their intervention strategy (Abt Associates, Inc., in progress). The FAST (Families and Schools Together) program has drawn on family stress theory, family systems theory, and social ecological theory to develop program activities, structure, and implementation (Small, Cooney & O'Connor, 2009). Which developmental theories are appropriate sources for ideas will vary depending on the particular problem being targeted and the population toward which the intervention is directed. In addition, not all developmental theories provide insight into ways of influencing development; some simply describe invariant patterns. Even these theories may be useful, however, in considering characteristics, behaviors, or tendencies that cannot be changed and thus should not be targeted for intervention.

The knowledge of experienced practitioners, long-time community members, and tribal or First Nations keepers of cultural wisdom can also suggest relevant risk, protective, and promotive factors. Although some practitioners over-emphasize the value of information-only approaches, many, perhaps most, strong program approaches have arisen from the efforts of local programs. One example is the Children's Aid Society program to prevent teen pregnancy, which was developed in New York City some years before being formally evaluated and found to have positive impacts (Philliber, Kaye, et al., 2002). In another example, interviews and epidemiological data among the Inuit (McGrath-Hanna, Greene, et al., 2003) led to the discovery of the role of omega-3 fatty acids in behavioral and physical health for infants, children, and adults, now established in multiple randomized trials and longitudinal studies (Richardson, 2012; Sublette, Ellis, et al., 2011; Amminger, Schäfer, et al., 2010). Insights from practitioners can be obtained by conducting interviews or focus groups, attending meetings of practitioners, and reading publications of practitioner associations. Insights from children and youth can also be sought through direct observation, interviews, or focus groups.

Longitudinal studies can also be instructive in identifying relevant information about potential risk, protective, and promotive factors to target. With care, correlational data can also be informative. For example, decades ago, the correlation between smoking and lung cancer led cancer researchers to investigate the relevance of tobacco as a carcinogen (Hecht, 1999). Similar correlations can inform social interventions, though it is important to caution that such analyses do not prove causality and should take account of possible confounding factors.
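As a minimal sketch of that caution, the simulated example below estimates the association between a candidate risk factor and an outcome while adjusting for one potential confounder. All variables are invented, and even an adjusted association would not by itself establish causality.

```python
# A minimal sketch of a confounder-adjusted association check on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
poverty = rng.binomial(1, 0.3, n)                    # possible confounder
risk_factor = rng.binomial(1, 0.2 + 0.3 * poverty)   # correlated with poverty
outcome = rng.binomial(1, 0.1 + 0.15 * risk_factor + 0.1 * poverty)

# Logistic regression of the outcome on the risk factor, holding poverty
# constant; a nonzero coefficient suggests the association is not explained
# away by this one confounder (others may remain).
X = sm.add_constant(np.column_stack([risk_factor, poverty]))
model = sm.Logit(outcome, X).fit(disp=False)
print(model.params)  # order: intercept, risk factor, poverty
```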

Of course, demonstrating a causal relationship that justifies targeting a risk, protective, or promotive factor for change, in order to improve the outcome it predicts, requires two other forms of evidence. First, it must be shown that the predictive factor can be changed by intervention—that it is malleable. Second, change in that factor must then result in change in the behavior it is intended to prevent. For instance, in the case of substance abuse, multiple longitudinal studies have identified early disruptive, inattentive behaviors in the primary grades as predictors of serious drug use in adolescence and young adulthood. Experimental studies, in turn, have demonstrated that such early disruptive, inattentive behaviors are changeable using family (Sanders, 2012) or school-based strategies (Embry, 2002), including mass media. Moreover, when applied, those strategies are effective for preventing later substance abuse (Furr-Holden, Ialongo et al., 2004) and promoting other positive outcomes (Kellam, Mackenzie et al., 2011).

A list of predictive factors, such as those illustrated in Figure 1, therefore, neither tells us specifically what to do to alter them in a favorable direction nor which ones, when manipulated, will actually be effective in preventing delinquency. Deeper digging is required to find or invent effective prevention, intervention, treatment, or recovery strategies.

Selecting Strategies Most Likely to Influence Targeted Risk, Protective and Promotive Factors

Having identified risk, promotive, and/or protective factors as potential targets for intervention, it is then necessary to identify ways to modify those factors and determine if doing so in fact reduces the adverse outcome and increases the positive outcome at issue. Here, also, meta-analysis may be helpful by providing a summary of the available research on the effectiveness of strategies that target different risk and need areas for preventing or reducing that adverse outcome. Figure 1, for instance, identified a number of predictive variables that are often addressed by various forms of counseling, e.g., externalizing behavior, general behavioral problems, family functioning, school participation, and the like. Similarly, Lipsey (2009) conducted a meta-analysis of 540 studies of interventions intended to reduce recidivism among juvenile offenders, examining the effectiveness of various forms of counseling programs along with other intervention approaches. Figure 2 summarizes the findings of the counseling studies. As shown, all of the different counseling approaches showed positive effects on delinquent behavior as indicated by reduced recidivism rates. An especially clear example of targeting an identified risk factor with resulting effects on the ultimate outcome at issue can be seen for general family counseling and family crisis counseling. These interventions target family functioning and related aspects of parent-child interactions, factors that Figure 1 above showed to be modestly predictive of subsequent antisocial behavior. Figure 2 below then shows, in turn, that counseling that addresses those issues does, in fact, result in reductions in delinquent behavior.

This kind of meta-analysis can be a useful starting point for developing or adapting programmatic strategies. A meta-analysis, of course, is not a "how-to" guide to the specifics of what makes an evidence-informed strategy tick on the ground. For that, one must look carefully at specific experimental studies and variations to discern what strategy might work best for a particular adaptation or innovation. For example, some types of counseling can be harmful; some can help a bit; some can help a lot. On the negative side, rigorous studies by Dishion and colleagues have shown that aggregating delinquent youth in groups can trigger a cascade of accidental or covert reinforcements from peers for deviant behavior (Dishion & Patterson, 2006; Dishion, Spracklen et al., 1996), which in turn predicts much more serious criminal offenses in the future (Dishion, Ha & Véronneau, 2012; Fosco, Frank & Dishion, 2012).

 

Figure 2. Mean Effects for Counseling Interventions with Juvenile Offenders

Source: Lipsey, M. W. (2011). Using Research Synthesis to Develop "Evidence-Informed" Interventions. ASPE Forum: Emphasizing Evidence-Based Programs for Children and Youth. Washington DC.
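As a minimal sketch of the kind of comparison Figure 2 reports, the snippet below averages study-level effect sizes within intervention categories. The categories and effects are hypothetical; a real synthesis would also weight each study by its precision, as in the pooling sketch above.

```python
# A minimal sketch of summarizing hypothetical study effects by category.
from collections import defaultdict

# (intervention category, effect size) for hypothetical studies
study_effects = [
    ("group counseling", 0.25), ("group counseling", 0.31),
    ("family counseling", 0.40), ("family counseling", 0.36),
    ("mentoring", 0.18), ("mentoring", 0.22), ("mentoring", 0.15),
]

by_category = defaultdict(list)
for category, es in study_effects:
    by_category[category].append(es)

# Unweighted mean effect per category, with the number of studies (k)
for category, effects in sorted(by_category.items()):
    mean_es = sum(effects) / len(effects)
    print(f"{category}: mean effect = {mean_es:.2f} (k = {len(effects)})")
```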

Another consideration is the balance between the level of effort required for implementation and the payoff in terms of expected outcomes. Some counseling might be easy to do, with modest but consistent effects, like brief motivational interviews (Grenard, et al., 2007; Reinke, 2006). Other evidence-based strategies can require equivalent time to learn and implement, yet have very large impacts in both the short and longer term (Bach, Hayes, & Gallop, 2011). Yet another type of counseling might have larger effects, but be very difficult to learn or implement (Durlak, 2013). This in-depth analysis goes beyond consulting various lists of evidence-based programs to examining the underlying studies that make up the evidence base.
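One rough way to structure that effort-versus-payoff comparison is to weigh each strategy's expected effect against its implementation cost, as in the sketch below. The strategy names, effect sizes, and cost figures are invented for illustration.

```python
# A minimal sketch of ranking hypothetical strategies by impact per dollar.
strategies = [
    {"name": "brief motivational interview", "effect": 0.15, "cost": 50},
    {"name": "group CBT",                    "effect": 0.35, "cost": 400},
    {"name": "intensive family therapy",     "effect": 0.50, "cost": 2000},
]

for s in strategies:
    s["effect_per_100_dollars"] = 100 * s["effect"] / s["cost"]

# Rank by impact per dollar; a developer would also weigh absolute effect,
# training burden, and fit with the target population.
for s in sorted(strategies, key=lambda s: -s["effect_per_100_dollars"]):
    print(f'{s["name"]}: {s["effect_per_100_dollars"]:.3f} SD per $100')
```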

To illustrate, a common reason adolescents and young adults are re-arrested or have their probation revoked is drug use. A careful analysis of the components used in counseling programs shows that contingency management protocols (reinforcements) for being drug-free are far superior to psychotherapy alone (Dutra, Stathopoulou et al., 2008). Thus, saying we have a counseling program in place is not sufficient to have an effective program that prevents recidivism. Understanding the active ingredients that comprise the broader categories of effective programs is required (Blase & Fixsen, 2013). These additional considerations bear on the design of an adaptation or innovation.

It is not always feasible to do a meta-analysis, however, if developers lack resources or capacity. Most importantly, there may be too few studies available for this technique to be useful. For example, to develop a program for preventing pregnancy among Latina adolescents, there may be only a few relevant studies. Therefore, additional strategies are needed that are based in research and can aid in program development. Evidence-based kernels provide another approach to identifying effective program practices, components, and active ingredients that are linked to specific behaviors. Evidence-based kernels (Embry & Biglan, 2008) are proven small units of behavioral influence (some of which are based on meta-analyses), that can be used to create new solutions to persistent or novel problems of human wellbeing or to construct adaptations of existing proven programs.

The concept of kernels arose in response to many of the challenges inherent in implementing evidence-based programs. Specifically, efficacy trials of evidence-based programs may demonstrate effectiveness; however, when taken to scale in real-world settings, it may be difficult to effectively replicate or sustain programs. Alternatively, challenges may arise that are outside of the scope of the specific intervention (Embry & Biglan, 2008). For instance, unanticipated events outside the implementing agency's purview may affect the ability of staff to carry out the intervention. Additionally, though many strategies have been used to identify evidence-based programs, evidence supporting effective program diffusion and dissemination is often modest. These challenges indicate that there is value in understanding the specific components of programs that operate as key ingredients and that are essential to program success and can help supplement or strengthen programs.

Kernels are supported by experimental studies that demonstrate their effectiveness and are commonly used strategies in prevention research. Kernels include strategies such as providing praise in the classroom, peer-assisted learning, using self-regulation techniques such as deep breathing or self-monitoring, or sending a note home from school to a child's parents. The essential characteristics of an evidence-based kernel can be summarized as follows (a brief illustrative sketch follows the list).

Kernels:

  • Are the smallest units of scientifically proven behavioral influence.
  • Are indivisible; removing any part makes them inactive.
  • Produce quick, easily measured change that can grow much bigger over time.
  • Can be used either alone or in combination to create new programs, strategies, or policies.
  • Are the active ingredients of most evidence-based programs.
  • Can be spread by word of mouth, by modeling, and by non-professionals.
  • Can address historic disparities without stigma, in part because they are also found in cultural wisdom.
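As a purely illustrative sketch of treating kernels as reusable building blocks, the registry below keys a few kernels mentioned in this brief to the factors they target. The registry structure and matching function are hypothetical, not Embry and Biglan's taxonomy.

```python
# A minimal sketch: kernels as small, recombinable units keyed to the
# risk/protective factors they influence. Names and targets are illustrative.
KERNELS = {
    "teacher praise notes":   {"targets": "on-task classroom behavior"},
    "peer-assisted learning": {"targets": "academic engagement"},
    "deep-breathing routine": {"targets": "self-regulation"},
    "note home to parents":   {"targets": "home-school reinforcement"},
}

def select_kernels(target_factors):
    """Return kernels whose targeted factor appears in the program's logic model."""
    return [name for name, k in KERNELS.items() if k["targets"] in target_factors]

program = select_kernels({"self-regulation", "on-task classroom behavior"})
print(program)  # kernels to combine into a new evidence-informed program
```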

Embry and Biglan (2008) have identified in the research literature 52 discrete kernels, most of which can be used across the lifespan and in many different program settings. While this is not an exhaustive list of such tested behavioral interventions, those identified do provide an array of program elements known to work. Embry and Biglan characterize these kernels by the primary behavioral process through which they work: antecedents, consequences (reinforcement), relational frames (language), or physiological mechanisms.

Assembling Your Intervention Using a Logic Model

Figure 4: A logic model framework for identifying elements in an evidence-informed model

 

To depict and organize the elements of an evidence-informed program, developers might fill in each oval (or use another strategy that works for them). The key is to clearly articulate and illustrate the following: What is the outcome(s) to be achieved? What are the risk, protective, and/or promotive factors that have been found to affect that outcome? What are the activities, approaches, and/or strategies that are going to be targeted to bring about change in the risk, protective, and promotive factors? And what inputs are needed to provide the activities, strategies and/or approaches? The logic model illustrates how the intervention is hypothesized to produce the intended results.

Program designers should also specify the inputs and outputs that will be expected, for example, the quantity of services delivered, classes taught, care provided, or mentoring sessions that will occur, so these can be tracked using data from a performance management system (also called a management information system) (Hatry, 2006; Morino, 2011). It is critical to assess whether each input and activity is actually delivered; this can help determine whether a program is being implemented on the ground as intended by the developer(s) (Moore, Walker & Murphey, 2011; Castillo, 2011).
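A minimal sketch of such a delivery check appears below: planned outputs from the logic model compared against delivered counts from a performance management system. The output names, targets, and counts are invented.

```python
# A minimal sketch of a planned-vs-delivered outputs check.
planned = {"mentoring sessions": 120, "parent classes": 24, "home visits": 60}
delivered = {"mentoring sessions": 98, "parent classes": 24, "home visits": 31}

for output, target in planned.items():
    pct = 100 * delivered.get(output, 0) / target
    # Flag any output delivered at less than 80% of plan for follow-up.
    flag = "  <-- investigate implementation" if pct < 80 else ""
    print(f"{output}: {delivered.get(output, 0)}/{target} ({pct:.0f}%){flag}")
```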

In addition, and critical to the task of developing an evidence-informed program, is whether the inputs, activities, and outputs are yielding the short-term outcomes that are desired. If the desired outcomes are not occurring, it is necessary to revisit data from the performance management system to identify ways to strengthen or revise inputs, activities, and outputs, so that the desired outcomes for children or youth are achieved. If the short-term outcomes are achieved, even though elements in the logic model were not delivered, that may suggest that they are not core components (Blase & Fixsen, 2013). In addition, usability testing (see Blase & Fixsen, 2013 for an overview) provides a "Plan/Do/Study/Act" approach to validating the core components of an intervention.
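As a toy sketch of the "Plan/Do/Study/Act" logic, the function below separates a delivery problem (the component was not implemented as planned) from a design problem (it was delivered but did not move the short-term outcome). The component names, metrics, and thresholds are hypothetical.

```python
# A minimal sketch of one Plan/Do/Study/Act decision step.
def pdsa_act(component, delivered_share, observed_change, target=0.2):
    """Study the short-term data for one component, then decide how to act."""
    if delivered_share < 0.8:
        return f"{component}: fix delivery first ({delivered_share:.0%} of planned dose)"
    if observed_change >= target:
        return f"{component}: keep as-is (change {observed_change:.2f} >= {target})"
    return f"{component}: revise content (delivered well, change {observed_change:.2f} < {target})"

print(pdsa_act("parenting skills module", delivered_share=0.95, observed_change=0.05))
print(pdsa_act("praise-note kernel", delivered_share=0.60, observed_change=0.10))
```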

It is also critical to assess carefully the possibility of harm. The Latin American Youth Center in Washington, D.C., for example, found that lessons on domestic violence added to a parenting program unexpectedly had the effect of increasing domestic violence. Because they were monitoring performance management data in real time, they recognized this and altered the curriculum to avoid this harmful outcome (Castillo, 2011). This is also why it is important to build data feedback loops for ongoing programs, practices, or policies. Changes in time, history, target population or other conditions or contextual factors can cause a previously effective intervention to either lose its effects or become harmful.

Information on the quality with which the elements in the logic model were implemented is also a critical element of such a monitoring process (Durlak, 2013). The process of development, assessment, revision, and testing requires patience and rigor.

It should be noted that the logic model in Figure 4 is a mid-level model intended for individual programs; it is insufficient for describing population-level change, which must involve multiple governmental policies, mass marketing or social marketing, major logistics and delivery systems, multi-agency cooperation, multiple funding streams, and the like. Logic models for these more complex, population-level approaches can be found in other sources (Embry, 2011; Embry, 2004; Glasgow, Vogt, & Boles, 1999; Keller, Schaffer, et al., 2002; Glasgow, Klesges, et al., 2004; Fawcett, Paine, et al., 1993; Fawcett, Boothroyd et al., 2003; Collie-Akers, Watson-Thompson, et al., 2010; Schober, Fawcett, & Bernier, 2012).

Testing the Elements of Your Evidence-Informed Program

When considering a new intervention, policymakers and stakeholders typically want to know whether or not it will achieve the intended results. Specifically, they may question whether the program will work in their community, which may differ in important ways from where the program was originally tested. Answering these important questions requires investing in a process of development, assessment, revision, and testing.

Information for assessing how programs unfold on the ground can come from administrative data, case records, assessments, and program observations. While much of the best evidence for proven programs, policies, and practices comes from very high-quality randomized trials, such trials may not be practical, affordable, or palatable in many efforts and may be premature in the early stages of developing new or adapted interventions. However, other models are often possible. These include applied behavior analysis designs and interrupted time-series designs.

Behavior analysis designs are characterized by the following attributes:

  • Use of repeated measures (not just before and after), which may span days, weeks, or months;
  • Two or more people watching the same event can count the frequency, duration, or intensity of the same behaviors with reasonable reliability; and
  • The change in behavior "reverses" if the intervention strategy is removed or stopped; or
  • The change in behavior can be demonstrated by successive use of the intervention across people, behaviors, or places, if the behavior is not easily "unlearned," like riding a bicycle.

The practicality and applicability of these types of designs to virtually every prevention problem is well articulated, with many practical examples, in a textbook by Mayer, Sulzer-Azaroff, and Wallace (2012). These everyday experiments have great utility in identifying the real active ingredients in any behavior change process. It is important to note that a majority of the most powerful prevention, intervention, or treatment strategies on the various lists of best practices have a history of such applied behavioral design studies well before they were tested in a randomized trial. We argue that this reflects a key design principle in the tactics of scientific research and common sense: if you cannot reliably change human behavior in an applied behavior analysis design, you are unlikely to produce powerful results in a randomized trial (Sidman, 1960). These "everyday scientist" designs are especially useful in the early stages, and for underserved, historically discriminated-against, or small population groups, or for new problems.
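The sketch below illustrates two of the attributes listed above with invented observation data: phase means in an A-B-A-B reversal design, and a simple inter-observer agreement ratio between two observers counting the same sessions.

```python
# A minimal sketch of reversal-design phase means and inter-observer agreement.
baseline_a1 = [9, 11, 10, 12]    # disruptions per session, no intervention
treatment_b1 = [5, 4, 6, 3]      # intervention in place
baseline_a2 = [10, 9, 11, 10]    # intervention withdrawn ("reversal")
treatment_b2 = [4, 3, 4, 2]      # intervention reinstated

def mean(xs):
    return sum(xs) / len(xs)

for label, phase in [("A1", baseline_a1), ("B1", treatment_b1),
                     ("A2", baseline_a2), ("B2", treatment_b2)]:
    print(f"phase {label}: mean = {mean(phase):.1f} disruptions/session")
# Behavior falling in B phases and returning in A2 suggests the intervention,
# not the passage of time, is driving the change.

# Inter-observer agreement: smaller count / larger count, averaged over sessions
obs1, obs2 = [9, 11, 10, 12], [10, 11, 9, 12]
agreement = mean([min(a, b) / max(a, b) for a, b in zip(obs1, obs2)])
print(f"inter-observer agreement = {agreement:.0%}")
```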

Interrupted time-series designs monitor behaviors over time and examine whether the introduction of a program or practice interrupts the previous pattern or trend in the data, hopefully for the better. Relatedly, regression discontinuity designs estimate the causal effect of an intervention by comparing observations that lie just on either side of the threshold determining who receives the intervention, making it possible to estimate a treatment effect when randomization is infeasible (Thistlethwaite & Campbell, 1960).
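A minimal sketch of a segmented-regression interrupted time-series analysis appears below, using simulated monthly counts; the regression allows both the level and the slope of the series to change when the program is introduced.

```python
# A minimal sketch of segmented regression for an interrupted time series.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
t = np.arange(24)                        # 24 months of observation
post = (t >= 12).astype(float)           # program introduced at month 12
time_since = np.where(t >= 12, t - 12, 0)

# Simulated outcome: rising pre-trend, then a level drop and flattening slope
y = 50 + 0.5 * t - 8 * post - 1.0 * time_since + rng.normal(0, 2, t.size)

X = sm.add_constant(np.column_stack([t, post, time_since]))
fit = sm.OLS(y, X).fit()
print(fit.params)  # order: intercept, pre-trend, level change, slope change
```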

Direct observation represents another strategy that can inform iterative refinement of program models. Adults, both professionals and community members, as well as youth, have a basis of experience to draw on as "everyday scientists," and their input is useful at two levels: (a) gaining insight into what might need to be revised, and (b) "hooking" people into wanting and helping the change, as opposed to denying, blocking, or opposing it. When using these processes to effect large change, we first ask diverse stakeholders to imagine that the problem is solved. We then ask them to list what they would see, hear, feel, and do more of if the situation were solved or improved. Third, we ask them to list what they would see, hear, feel, and do less of when the situation was solved or improved. This exercise helps to define measurable short-term outputs and outcomes that have social validity. Furthermore, it helps identify "early wins" that can reinforce, inspire, and maintain the longer-term outcomes that take sustained effort to achieve.

Direct observation of the frequency, duration, and/or intensity of behaviors among even small numbers of people can inform program development. For example, the Triple P (Positive Parenting Program) to prevent child maltreatment and other problems (Nowak & Heinrichs, 2008; Prinz, Sanders, et al., 2009) began with direct observation of parent-child interaction, measuring the frequency, duration, and intensity of those interactions (Sanders & Glynn, 1981; Sanders, 1982a; Sanders, 1982b). Similarly, the Good Behavior Game (a classroom management technique that rewards children for on-task behaviors during instructional time) was found to prevent lifetime psychiatric, addictive, and criminal disorders and to increase high school graduation and college entry in more than 20 studies conducted in individual classrooms before the program was tested in a large random assignment study (Dolan, Kellam, et al., 1993). These simple observational studies assessed whether the frequency, duration, or intensity of behaviors could be switched on or off by the presence or absence of the intervention and also whether a sequential staggering of the implementation affected children's behavior. Results consistently indicated that the approach being assessed was effective and gradually led to development of the well-regarded Good Behavior Game.

Embry and colleagues have directly applied this activity to facilitate the adoption, implementation, and maintenance of the Good Behavior Game (Embry, 2002; Kellam, Reid, et al., 2008) and other evidence-based strategies. Specifically, they arrange for the implementation of strategies that produce immediate results—identified by stakeholders—that can be fostered quickly. This translates into higher commitment to longer-term results.

A great virtue of careful attention to these practices is that they allow "mid-course" corrections to improve results, which is vital in real-world settings. Importantly, these kinds of strategies can be used by diverse individuals, tribes, schools, neighborhoods, businesses or organizations, communities, scientific entities, and elected officials to develop, assess, revise and test strategies to influence human behaviors. When applied with patience, thought, and rigor, this process can develop evidence-informed strategies that change the targeted risk, protective, and promotive factors that, in turn, affect the outcome, both in theory and on the ground.

Conclusions

Prevention and intervention programs often evolve based on the personal experiences, good intentions, and opportunities in a community and/or the convictions of funders and policy makers. Sometimes these homegrown approaches are very effective; sometimes they don't work or are harmful; and other times they are somewhat effective and could be improved. It is possible to improve the likelihood of success by building more consistently on several types of existing knowledge bases and combining effective components in thoughtful ways to address new problems or new populations. Triangulating across information from research and evaluation, including meta-analysis and research-based kernels, as well as developmental theory, longitudinal and other research, and the wisdom of practitioners, can inform the development of programs that are more effective at achieving the outcomes desired for children, youth, and families.

In this paper we have described opportunities to incorporate research evidence into program design at five stages of the program development process:

  1. Defining the problem
  2. Identifying relevant risk, protective and promotive factors
  3. Selecting strategies most likely to influence targeted risk, protective, and promotive factors
  4. Using a logic model to assemble the intervention
  5. Testing the elements of your evidence-informed program

This is not a quick or easy endeavor; but investing the time and effort necessary to develop evidence-informed interventions should result in more effective programs and thus better child and youth outcomes.

References

Amminger GP, Schäfer MR, Papageorgiou K, Klier CM, Cotton SM, Harrigan SM, Mackinnon A, McGorry PD, Berger GE (2010). Long-chain omega-3 fatty acids for indicated prevention of psychotic disorders: a randomized, placebo-controlled trial. Archives of General Psychiatry. 67(2):146-54.

Annie E. Casey Foundation (2010). Early warning! Why reading by the end of third grade matters. KIDS COUNT Special Report. Baltimore, MD: The Annie E. Casey Foundation.

Arthur, M.W., Hawkins, J.D., Pollard, J.A., Catalano, R.F., & Baglioni, A.J. (2002). Measuring Risk and Protective Factors for Substance Use, Delinquency, and Other Adolescent Problem Behaviors: The Communities that Care Youth Survey. Evaluation Review, 26(6): 575-601.

Bach, P., S. C. Hayes and R. Gallop (2011). Long-Term Effects of Brief Acceptance and Commitment Therapy for Psychosis. Behavior Modification 36(March): 165-181.

Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122-147.

Bronfenbrenner, U. (1979). The Ecology of Human Development: Experiments by Nature and Design. Cambridge, MA: Harvard University Press.

Castillo, I. (2011). First, Do No Harm…Then Do More Good. Leap of Reason: Managing to Outcomes in an Era of Scarcity. Washington, DC: Venture Philanthropy Partners, 95-98.

Collie-Akers, V. L., J. Watson-Thompson, J. A. Schultz and S. B. Fawcett (2010). "A case study of use of data for participatory evaluation within a statewide system to prevent substance abuse." Health Promotion Practice 11 (6): 852-858.

Dishion, T. J. and G. R. Patterson (2006). The development and ecology of antisocial behavior in children and adolescents. In D. Cicchetti & D. J. Cohen (Eds.), Developmental psychopathology, Vol. 3: Risk, disorder, and adaptation (2nd ed.). Hoboken, NJ: John Wiley & Sons: 503-541.

Dishion, T. J., K. M. Spracklen, D. W. Andrews and G. R. Patterson (1996). Deviancy training in male adolescent friendships. Behavior Therapy, 27(3): 373-390.

Dishion, T. J., T. Ha and M.-H. Véronneau (2012). An ecological analysis of the effects of deviant peer clustering on sexual promiscuity, problem behavior, and childbearing from early adolescence to adulthood: An enhancement of the life history framework. Developmental Psychology, 48 (3): 703-717.

Dolan, L., Kellam, S., Brown, C., Werthamer-Larsson, L., Rebok, G., Mayer, L., et al. (1993). The short-term impacts of two classroom-based preventive interventions on aggressive and shy behaviors and poor achievement. Journal of Applied Developmental Psychology, 14, 317–345.

Duncan, G., Ludwig, J., and Magnuson, K. (2007). Reducing Poverty through Pre-School Interventions. The Future of Children, 17, 143-160.

Durlak, J.A. (2013) The importance of quality implementation for research, practice, and policy. ASPE Research Brief. Washington, DC: U.S. Department of Health and Human Services, Office of the Secretary.

Dutra, L., G. Stathopoulou, Basden, S.L., Leyro, T.M., Powers, M.B. & Otto, M.W. (2008). A meta-analytic review of psychosocial interventions for substance use disorders. The American Journal of Psychiatry, 165(2): 179-187.

Dzewaltowski, D. A., R. E. Glasgow, L. M. Klesges, P. A. Estabrooks and E. Brock (2004). RE-AIM: Evidence-Based Standards and a Web Resource to Improve Translation of Research Into Practice. Annals of Behavioral Medicine, 28(2): 75-80.

Embry, D. D. (2011). Behavioral Vaccines and Evidence-Based Kernels: Nonpharmaceutical Approaches for the Prevention of Mental, Emotional, and Behavioral Disorders. Psychiatric Clinics of North America, 34 (March): 1-34.

Embry D.D. (2004). Community-based prevention using simple, low-cost, evidence-based kernels and behavior vaccines. Journal of Community Psychology 32 (5): 575-591.

Embry, D. D. (2002). "The Good Behavior Game: A Best Practice Candidate as a Universal Behavioral Vaccine." Clinical Child & Family Psychology Review, 5(4): 273-297.

Embry, D. D. and A. Biglan (2008). Evidence-Based Kernels: Fundamental Units of Behavioral Influence. Clinical Child & Family Psychology Review, 11(3): 75-113.

Fawcett, S. B., A. L. Paine, V. T. Francisco and M. Vliet (1993). Promoting health through community development. In D. S. Glenwick & L. A. Jason (Eds.), Promoting health and mental health in children, youth, and families. New York, NY: Springer Publishing Co: 233-255.

Fawcett, S. B., R. Boothroyd, J. A. Schultz, V. T. Francisco, V. Carson and R. Bremby (2003). Building Capacity for Participatory Evaluation Within Community Initiatives. Journal of Prevention & Intervention in the Community 26 (2): 21-36.

Fosco, G. M., J. L. Frank and T. J. Dishion (2012). Coercion and contagion in family and school environments: Implications for educating and socializing youth. Handbook of school violence and school safety: International research and practice (2nd ed.). S. R. Jimerson, A. B. Nickerson, M. J. Mayer and M. J. Furlong. New York, NY, US, Routledge/Taylor & Francis Group: 69-80.

Furr-Holden, C. D., N. S. Ialongo, et al. (2004). Developmentally inspired drug prevention: middle school outcomes in a school-based randomized prevention trial. Drug & Alcohol Dependence, 73(2): 149-58.

Glasgow, R.E., Klesges, L.M., Dzewaltowski, D.A., Bull, S.S., Estabrooks, P.A. (2004). The future of health behavior change research: What is needed to promote the translation of research into health promotion practice? Annals of Behavioral Medicine, 27, 3-12.

Glasgow, R. E., T. M. Vogt and S. M. Boles (1999). Evaluating the public health impact of health promotion interventions: the RE-AIM framework. American Journal of Public Health, 89 (9): 1322-1327.

Grenard, J. L., S. L. Ames, R. W. Wiers, C. Thush, A. W. Stacy and S. Sussman (2007). Brief intervention for substance use among at-risk adolescents: a pilot study. Journal of Adolescent Health, 40 (2): 188-191.

Hamilton, J. & Bronte-Tinkew, J. (2007). Logic Models in Out-of-School Time Programs: What Are They and Why Are They Important? Washington D.C.: Child Trends. Available at: http://www.childtrends.org/files/child_trends-2007_01_05_rb_logicmodels…

Hatry, H.P. (2006). Performance measurement: Getting results. Washington, D.C.: The Urban Institute.

Hawkins, J. D., Catalano, R. F., Kuklinski, M. R. (2011). Mobilizing communities to implement tested and effective programs to help youth avoid risky behaviors: The Communities That Care approach. Research Brief. Washington, DC: Child Trends.

Hecht, S. S. (1999). Tobacco Smoke, Carcinogens, and Lung Cancer. JNCI Journal of the National Cancer Institute, 91(14): 1194-1210.

Kellam, S. G., A. C. Mackenzie, C. H. Brown, J. M. Poduska, W. Wang, H. Petras and H. C. Wilcox (2011). The good behavior game and the future of prevention and treatment. Addiction Science & Clinical Practice, 6(1): 73-84.

Kellam, S. G., Reid, J., & Balster, R.L., Eds. (2008). Effects of a universal classroom behavior program in first and second grades on young adult problem outcomes. Drug and Alcohol Dependence, 95(Suppl1): S1-S4.

Keller, L. O., M. A. Schaffer, B. Lia-Hoagberg and S. Strohschein (2002). Assessment, program planning, and evaluation in population-based public health practice. Journal of Public Health Management & Practice, 8 (5): 30-43.

Kirby, D. (2001). Emerging Answers: Research Findings on Programs to Reduce Teen Pregnancy (Summary). Washington, DC: National Campaign to Prevent Teen Pregnancy.

Abt Associates, Inc. (in progress). State of the Science and Practice in Parenting Interventions across Childhood. Literature Review and Synthesis. Washington, DC: U.S. Department of Health and Human Services.

Lipsey, M. W. (2009). The primary factors that characterize effective interventions with juvenile offenders: A meta-analytic overview. Victims and Offenders, 4, 124-147.

Lipsey, M. W. (2011). Using Research Synthesis to Develop "Evidence-Informed" Interventions. ASPE Forum: Emphasizing Evidence-Based Programs for Children and Youth. Washington DC.

Magnuson, K. (2007). Maternal education and children's academic achievement during middle childhood. Developmental Psychology, 43, 1497-1512.

Mayer, G. R., Sulzer-Azaroff, B. & Wallace, M. (2012). Behavior analysis for lasting change (2nd ed.). New York: Holt, Rinehart & Winston.

McGrath-Hanna, N.K., Greene, D.M., Tavernier, R.J., & Bult-Ito, A. (2003). Diet and mental health in the Arctic: Is diet an important risk factor for mental health in circumpolar peoples? A review. International Journal of Circumpolar Health, 62(3): 228-241.

Moore, K.A., Walker, K., & Murphey, D. (2011). Performance Management: The Neglected Step in Becoming an Evidence-Based Program. Leap of Reason: Managing to Outcomes in an Era of Scarcity. Washington, DC: Venture Philanthropy Partners, 95-98.

Morino, M. (2011). Leap of reason: Managing to outcomes in an era of scarcity. Washington, D.C.: Venture Philanthropy Partners.

National Research Council (2009). Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities. Washington, DC: National Academies Press.

Nowak, C. and N. Heinrichs (2008). A comprehensive meta-analysis of Triple P-Positive Parenting Program using hierarchical linear modeling: Effectiveness and moderating variables. Clinical Child and Family Psychology Review, 11(3): 114-144.

O'Connell, M.E., Boat, T., & Warner, K.E. (Eds.). (2009). Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities. Committee on the Prevention of Mental Disorders and Substance Abuse Among Children, Youth and Young Adults: Research Advances and Promising Interventions. Washington, DC: Institute of Medicine; National Research Council.

Philliber, S., Kaye, J. W., Herrling, S., & West, E. (2002). Preventing pregnancy and improving health care access among teenagers: An evaluation of the Children's Aid Society-Carrera Program. Perspectives on Sexual and Reproductive Health, 34, 244-251.

Prinz, R. J., Sanders, M.R., Shapiro, C.J., Whitaker, D.J. & Lutzker, J.R. (2009). Population-based prevention of child maltreatment: The U.S. Triple P System Population Trial. Prevention Science, 10(1): 1-12.

Reinke, W. M. (2006). The classroom check-up: A brief intervention to reduce current and future student problem behaviors through classroom teaching practices. Dissertation Abstracts International, 66. ProQuest Information & Learning: US.

Richardson, A.J. (2012). Omega-3 fatty acids produce a small improvement in ADHD symptoms in children compared with placebo. Evidence-Based Mental Health. Published online February 18, 2012. doi:10.1136/ebmental-2011-100523.

Sanders, M. R. (2012). "Development, evaluation, and multinational dissemination of the Triple P-Positive Parenting Program." Annual Review of Clinical Psychology 8: 345-379.

Sanders, M. R. (1982a). "The effects of instructions, feedback, and cueing procedures in behavioural parent training." Australian Journal of Psychology, 34(1): 53-69.

Sanders, M. R. (1982b). "The generalization of parent responding to community settings: The effects of instructions, plus feedback, and self-management training." Behavioural Psychotherapy, 10(3): 273-287.

Sanders, M. R. and T. Glynn (1981). "Training parents in behavioral self-management: An analysis of generalization and maintenance." Journal of Applied Behavior Analysis, 14(3): 223-237.

Schober, D. J., S. B. Fawcett and J. Bernier (2012). "The Enough Abuse Campaign: Building the movement to prevent child sexual abuse in Massachusetts." Journal of Child Sexual Abuse: Research, Treatment, & Program Innovations for Victims, Survivors, & Offenders 21(4): 456-469.

Sidman, M. (1960). Tactics of scientific research: Evaluating experimental data in psychology. New York: Basic Books.

Silva, P.A. (1990). The Dunedin Multidisciplinary Health and Development Study: A 15-year longitudinal study. Paediatric & Perinatal Epidemiology, 4(1): 76-107.

Small, S. A., Cooney, S.M. & O'Connor, C. (2009). Evidence-Informed Program Improvement: Using Principles of Effectiveness to Enhance the Quality and Impact of Family-Based Prevention Programs. Family Relations: Interdisciplinary Journal of Applied Family Studies, 58 (February): 1-13.

Sublette, M.E., Ellis, S.P., Geant, A.L., and Mann, J.J. (2011). Meta-analysis of the effects of eicosapentaenoic acid (EPA) in clinical trials in depression. Journal of Clinical Psychiatry, 72(12): 1577-1584.

Theodore, A. D., Chang, J. J., Runyan, D. K., Hunter, W. M., Bangdiwala, S. I., & Agans, R. (2005). Epidemiologic Features of the Physical and Sexual Maltreatment of Children in the Carolinas. Pediatrics, 115(3), e331-337. doi: 10.1542/peds.2004-1033

Thistlethwaite, D. L. & Campbell, D. T. (1960). Regression-Discontinuity Analysis: An alternative to the ex post facto experiment. Journal of Educational Psychology, 51: 309-317.

United Way of America (1996). Measuring Program Outcomes: A Practical Approach. Alexandria, VA: United Way of America Press.

Appendix

Defining the problem

Assessing community needs and resources. Available at: http://ctb.ku.edu/en/dothework/tools_tk_content_page_78.aspx

Centers for Disease Control and Prevention. CHANGE tool. Available at: http://www.cdc.gov/healthycommunitiesprogram/tools/change.htm

Communities that Care. Communities that Care youth survey. Available at: http://www.sdrg.org/ctcresource/CTC_Youth_Survey_2006.pdf

Youth comprehensive risk assessment. Available at: https://www.youthriskassessment.com/?action=download

The Community Toolbox. Taking action in the community. Available at: http://ctb.ku.edu/en/TakingActionInTheCommunity.aspx#Assess

Identifying Relevant Risk, Protective, and Promotive Factors

Communities that Care. Building protection: The social development strategy. Available at: http://www.sdrg.org/ctcresource/Community%20Building%20and%20Foundational%20Material/Building_Protection_Social_Dev_Strategy_Chart.pdf

Communities that Care. Index by risk factor. Available at: http://www.sdrg.org/ctcresource/Prevention%20Strategies%20Guide/indexbyriskfactor.pdf

Selecting Strategies Most Likely to Influence Targeted Risk, Protective and Promotive Factors

Hawkins, J.D. & Catalano, R. F. (2002). Tools for community leaders: A guidebook for getting started. Published for Communities that Care. Available at: http://www.sdrg.org/ctcresource/Community%20Building%20and%20Foundational%20Material/Tools%20for%20Community%20Leaders.pdf

Metz, A. J. R. (2007). A 10-step guide to adopting and sustaining evidence-based practices in out-of-school time programs: Part 2 in a series on fostering the adoption of evidence-based practices in out-of-school time programs. Washington D.C.: Child Trends. Available at: http://www.childtrends.org/files/child_trends-2007_06_04_rb_ebp2.pdf

Promise Neighborhoods. (2010). What is a kernel? Available at: http://promiseneighborhoods.org/what-works/kernels/

Assembling Your Intervention Using a Logic Model

Burke, M. (n.d.). Tips for developing logic models. Available at: http://www.rti.org/pubs/apha07_burke_poster.pdf

Harrell, A. (n.d.). Evaluation strategies for human services programs. Washington, D.C.: The Urban Institute. Available at: https://www.bja.gov/evaluation/guide/documents/evaluation_strategies_p3_7.html

James Bell Associates. (2007). Evaluation brief: Developing a logic model. Available at: http://www.jbassoc.com/reports/documents/developing%20a%20logic%20model.pdf

Office of Juvenile Justice and Delinquency Prevention. Performance measures: Logic models. Available at: http://www.ojjdp.gov/grantees/pm/logic_models.html

Sundra, D.L., Scherer, J., & Anderson, L.A. (2003). A guidebook on logic model development for CDC's Prevention Research Centers. Published by Prevention Research Centers Program Office. Available at: https://www.bja.gov/evaluation/guide/documents/cdc-logic-model-development.pdf

United States Department of Health and Human Services, Administration for Children and Families. Using logic models. Available at: https://www.childwelfare.gov/management/effectiveness/models.cfm

United Way. Measuring program outcomes: A practical approach. Available at: https://www.unitedwaystore.com/product/measuring_program_outcomes_a_practical_approach/program_film

University of Wisconsin-Extension. Logic models. Available at: http://www.uwex.edu/ces/pdande/evaluation/evallogicmodel.html

W.K. Kellogg Foundation. (2004). Using logic models to bring together planning, evaluation, and action: Logic model development guide. Available at: http://www.wkkf.org/knowledge-center/resources/2006/02/wk-kellogg-foundation-logic-model-development-guide.aspx

Testing the Elements of Your Evidence Informed Program

Grantmakers of Effective Organizations. (2007). Learning for Results. Washington, D.C.: Grantmakers of Effective Organizations. Available at: http://www.deaconess.org/UploadFiles/Documents/Learning%20for%20Results.pdf

Penna, R. (2011). Nonprofit outcomes toolbox. Hoboken, NJ: Wiley Nonprofit Authority.

PerformWell. Available at: http://www.performwell.org

Schwartz, S.L. & Austin, M.J. (2009). Implementing performance management systems in nonprofit human service organizations: An exploratory study. Available at: http://www.mackcenter.org/docs/MIS%20Study%20Final%20Report.pdf

Wolk, A., Dholakia, A, & Kreitz, K. (2009). Building a performance management system: Using data to accelerate social impact. Cambridge, MA: Root Cause. Available at: http://rootcause.org/performance-measurement-book

Yohalem, N., Devaney, E., Smith, C. & Wilson-Ahlstrom, A. (2012). Building citywide systems for quality: A guide and case studies for afterschool leaders. Washington, D.C.: The Forum for Youth Investment. Available at: http://www.forumfyi.org/building_system_quality
