Best Intentions are Not Enough: Techniques for Using Research and Data to Develop New Evidence-Informed Prevention Programs. Testing the Elements of Your Evidence-Informed Program

04/01/2013

When considering a new intervention, policymakers and stakeholders typically want to know whether it will achieve the intended results. Specifically, they may question whether the program will work in their community, which may differ in important ways from the setting where the program was originally tested. Answering these questions requires investing in a process of development, assessment, revision, and testing.

Information for assessing how programs unfold on the ground can come from administrative data, case records, assessments, and program observations. While much of the best evidence for proven programs, policies, and practices comes from very high-quality randomized trials, such trials may not be practical, affordable, or palatable in many efforts, and may be premature in the early stages of developing new or adapted interventions. However, other designs are often possible, including applied behavior analysis designs and interrupted time-series designs.

Behavior analysis designs are characterized by the following attributes:

· Use of repeated measures (not just before and after), which may span days, weeks or months;

· Interobserver agreement: two or more people watching the same events can count the frequency, duration, or intensity of the same behaviors with reasonable reliability; and

· The change in behavior "reverses" if the intervention strategy is removed or stopped; or

· The change in behavior can be demonstrated by successively introducing the intervention across people, behaviors, or settings, when the behavior is not easily "unlearned" (like learning to ride a bicycle).
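As an illustration, the reversal ("ABAB") logic above can be sketched in a few lines of Python. All of the phase data and the change threshold below are invented for illustration; a real behavior analysis would rely on visual inspection of the plotted series alongside simple summary checks like these.

```python
# Hypothetical sketch of an ABAB "reversal" design: daily counts of a target
# behavior are recorded repeatedly across alternating baseline (A) and
# intervention (B) phases. All numbers are illustrative, not real data.
from statistics import mean

# Repeated measures: one count per day, grouped by phase.
phases = {
    "A1 (baseline)":     [9, 8, 10, 9, 11],   # disruptive behaviors per day
    "B1 (intervention)": [5, 4, 4, 3, 4],
    "A2 (withdrawal)":   [8, 9, 8, 10, 9],    # behavior "reverses" when stopped
    "B2 (intervention)": [4, 3, 4, 3, 2],
}

def phase_means(data):
    """Mean level of the behavior in each phase."""
    return {phase: mean(counts) for phase, counts in data.items()}

def shows_reversal(means, drop=2.0):
    """Crude check for the reversal pattern: behavior falls under the
    intervention (A1 -> B1), recovers when it is withdrawn (B1 -> A2),
    and falls again when it is reinstated (A2 -> B2)."""
    a1, b1, a2, b2 = means.values()
    return (a1 - b1 >= drop) and (a2 - b1 >= drop) and (a2 - b2 >= drop)

means = phase_means(phases)
for phase, m in means.items():
    print(f"{phase}: mean = {m:.1f}")
print("Reversal pattern:", shows_reversal(means))
```

If the behavior tracks the presence and absence of the intervention this way, the intervention, rather than some outside event, is the likely cause of the change.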

The practicality and applicability of these types of designs to virtually every prevention problem is well articulated, with many practical examples, in a textbook by Mayer, Sulzer-Azaroff, and Wallace (2012). These types of everyday experiments have great utility in helping identify the real active ingredients in any behavior change process. It is important to note that a majority of the most powerful prevention, intervention, or treatment strategies on the various lists of best practices have a history of these applied behavioral design studies, well before they were tested in a randomized trial. We argue that this is a key design principle in the tactics of scientific research and common sense: if you cannot reliably change human behavior in an applied behavior analysis design, you are unlikely to produce powerful results in a randomized trial (Sidman, 1960). These "everyday scientist" designs are especially useful for underserved, historically discriminated-against, or small population groups, and for new problems at the early stages of study.

Interrupted time-series designs monitor behaviors over time and examine whether the introduction of a program or practice interrupts the previous pattern or trend in the data, hopefully for the better. Regression discontinuity procedures estimate the causal effects of interventions by comparing observations lying close to either side of the threshold that determines who receives the intervention, making it possible to estimate a treatment effect when randomization is infeasible (Thistlethwaite & Campbell, 1960).

Direct observation represents another strategy that can inform iterative refinement of program models. Adults, both professionals and community members, as well as youth, may have a base of experience to draw on as "everyday scientists" that can be useful at two levels: a) gaining insight into what might need to be revised, and b) "hooking" people into wanting and helping the change, as opposed to denial, blocking, and opposition. When we use these processes to effect large change, we first ask diverse stakeholders to imagine that the problem is solved. We then ask them to list what they would see, hear, feel, and do more of if the situation were solved or improved. Third, we ask them to list what they would see, hear, feel, and do less of. This exercise helps define measurable short-term outputs and outcomes that have social validity. It also helps identify "early wins" that can reinforce, inspire, and maintain motivation for the longer-term outcomes that take sustained effort to achieve.

Direct observation of the frequency, duration, and/or intensity of behaviors among even small numbers of people can inform program development. For example, the Triple P (Positive Parenting Program) to prevent child maltreatment and other problems (Nowak & Heinrichs, 2008; Prinz, Sanders, et al., 2009) began with direct observation of parent-child interaction, measuring the frequency, duration, and intensity of those interactions (Sanders & Glynn, 1981; Sanders, 1982a; Sanders, 1982b). Similarly, the Good Behavior Game (a classroom management technique that rewards children for on-task behavior during instructional time), later found to prevent lifetime psychiatric, addictive, and criminal disorders and to increase high school graduation and college entry, was refined in more than 20 studies conducted in individual classrooms before it was tested in a large random-assignment study (Dolan, Kellam, et al., 1993). These simple observational studies assessed whether the frequency, duration, or intensity of behaviors could be switched on or off by the presence or absence of the intervention, and also whether a sequential staggering of the implementation affected children's behavior. Results consistently indicated that the approach being assessed was effective and gradually led to development of the well-regarded Good Behavior Game.
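The sequential staggering described above, a multiple-baseline design, can be sketched in the same spirit as the early classroom studies. The classrooms, start weeks, and on-task percentages below are all invented; the point is that the change in behavior should track each classroom's own start date, which rules out a schoolwide event as the explanation.

```python
# Hypothetical multiple-baseline sketch: the intervention starts in each
# classroom at a different week, and behavior should change only after each
# classroom's own start. All numbers are invented for illustration.
from statistics import mean

# Weekly on-task percentage per classroom, and the week the program began there.
classrooms = {
    "Room A": {"start": 3, "ontask": [52, 55, 50, 78, 80, 82, 81, 83]},
    "Room B": {"start": 5, "ontask": [50, 53, 51, 52, 54, 79, 81, 80]},
    "Room C": {"start": 7, "ontask": [55, 54, 52, 53, 55, 54, 53, 82]},
}

for room, d in classrooms.items():
    before = mean(d["ontask"][:d["start"]])   # baseline weeks
    after  = mean(d["ontask"][d["start"]:])   # intervention weeks
    print(f"{room}: {before:.0f}% on-task before week {d['start']}, "
          f"{after:.0f}% after")
```

Note that Room C's baseline stays flat through week 6 even while Rooms A and B have already improved, which is the staggered pattern that makes the design convincing.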

Embry and colleagues have directly applied this activity to facilitate the adoption, implementation, and maintenance of the Good Behavior Game (Embry, 2002; Kellam, Reid, et al., 2008) and other evidence-based strategies. Specifically, they arrange for the implementation of strategies that produce immediate, stakeholder-identified results that can be fostered quickly, which translates into higher commitment to longer-term results.

A great virtue of careful attention to these practices is that they allow "mid-course" corrections to improve results, which is vital in real-world settings. Importantly, these kinds of strategies can be used by diverse individuals, tribes, schools, neighborhoods, businesses or organizations, communities, scientific entities, and elected officials to develop, assess, revise and test strategies to influence human behaviors. When applied with patience, thought, and rigor, this process can develop evidence-informed strategies that change the targeted risk, protective, and promotive factors that, in turn, affect the outcome, both in theory and on the ground.
