Once strategies are selected, combining them into a coherent program becomes the next challenge. Logic models are often used to describe and assess existing programs (United Way of America, 1996; Hamilton & Bronte-Tinkew, 2007). However, a logic model approach can also provide an organizing framework or "causal model" for thinking about developing evidence-informed interventions. Figure 4 illustrates the basic components of a logic model that might be employed to describe an evidence-informed program.
Figure 4: A logic model framework for identifying elements in an evidence-informed model
To depict and organize the elements of an evidence-informed program, developers might fill in each oval (or use another strategy that works for them). The key is to clearly articulate and illustrate the following: What outcome or outcomes are to be achieved? What risk, protective, and/or promotive factors have been found to affect that outcome? What activities, approaches, and/or strategies will be used to bring about change in those risk, protective, and promotive factors? And what inputs are needed to deliver those activities, strategies, and/or approaches? The logic model thus illustrates how the intervention is hypothesized to produce the intended results.
Program designers should also specify the inputs and outputs that will be expected, for example, the quantity of services delivered, classes taught, care provided, or mentoring sessions that will occur, so these can be tracked using data from a performance management system (also called a management information system) (Hatry, 2006; Morino, 2011). It is critical to assess whether each input and activity is actually delivered; this can help determine whether a program is being implemented on the ground as intended by the developer(s) (Moore, Walker & Murphey, 2011; Castillo, 2011).
Also critical to the task of developing an evidence-informed program is assessing whether the inputs, activities, and outputs are yielding the desired short-term outcomes. If they are not, it is necessary to revisit data from the performance management system to identify ways to strengthen or revise inputs, activities, and outputs so that the desired outcomes for children or youth are achieved. Conversely, if the short-term outcomes are achieved even though some elements in the logic model were not delivered, that may suggest that those elements are not core components (Blase & Fixsen, 2013). In addition, usability testing (see Blase & Fixsen, 2013 for an overview) provides a "Plan/Do/Study/Act" approach to validating the core components of an intervention.
It is also critical to assess carefully the possibility of harm. The Latin American Youth Center in Washington, D.C., for example, found that lessons on domestic violence added to a parenting program unexpectedly had the effect of increasing domestic violence. Because center staff were monitoring performance management data in real time, they recognized this and altered the curriculum to avoid the harmful outcome (Castillo, 2011). This is also why it is important to build data feedback loops into ongoing programs, practices, or policies. Changes in time, history, target population, or other conditions or contextual factors can cause a previously effective intervention to either lose its effects or become harmful.
Information on the quality with which the elements in the logic model were implemented is also critical to such a monitoring process (Durlak, 2013). The process of development, assessment, revision, and testing requires patience and rigor.
It should be noted that the logic model in Figure 4 is a mid-level model intended for individual programs; it is insufficient for describing population-level change, which must involve multiple governmental policies, mass marketing or social marketing, major logistics and delivery systems, multi-agency cooperation, multiple funding streams, and the like. Logic models for these more complex, population-level approaches can be found in other sources (Embry, 2011; Embry, 2004; Glasgow, Vogt, & Boles, 1999; Keller, Schaffer, et al., 2002; Glasgow, Klesges, et al., 2004; Fawcett, Paine, et al., 1993; Fawcett, Boothroyd, et al., 2003; Collie-Akers, Watson-Thompson, et al., 2010; Schober, Fawcett, & Bernier, 2012).