The importance of quality implementation has been well-documented, but achieving quality is a complex and demanding process. Nevertheless, some useful lessons have been learned in implementation science:
Implementation is rarely perfect. Some slippage inevitably occurs when programs are conducted in new settings (Durlak & DuPre, 2008). This need not be a major concern as long as problems are recognized and addressed and implementation quality remains sufficiently high. A variety of unanticipated implementation problems can arise, related to such things as changes in leadership and staff, sudden budget reauthorizations, scheduling and transportation conflicts, emergencies, and competing job pressures. Fortunately, good judgment and guidance from implementation research and practice can help organizations anticipate and deal with the challenges that occur. A good monitoring and feedback system can identify when problems are hindering quality implementation so that corrections can be made (e.g., DuFrene, Noell, Gilbertson, & Duhon, 2005; Greenwood, Tapia, Abbott, & Walton, 2003). Achieving quality implementation also takes sufficient time. Finally, public policy decisions should be based on evaluations of programs that have been implemented with quality; otherwise, the relative value and cost-effectiveness of alternative programs cannot be determined.
Practitioners vary in their performance when implementing new programs. It is important to monitor each practitioner's performance and offer additional professional development as needed. People have different learning styles and learning curves: some develop new skills quickly, while others require more time and practice. Some lose motivation over time and may need professional development to rekindle their enthusiasm. Others may simply not care about implementing the program and may need stronger incentives to carry it out, or they may need to be replaced (Mihalic et al., 2008).
A pilot program is often a good idea. Because doing something new requires time and practice to achieve mastery, it is often wise to try a new program on a small pilot basis rather than launching directly into a large-scale project. For example, the Teen Pregnancy Prevention Program, administered by the Office of Adolescent Health, allowed grantees to use the first 12 months as a phased-in implementation period. During this time, sites were encouraged to prepare for program implementation, including by conducting a pilot (Margolis, 2011). A pilot can help an organization "work out the kinks" in implementation and plan more effectively for a later, more extensive program (see the Blase & Fixsen and Embry & Lipsey briefs).
Don't implement an evidence-based program on your own. Advertisements demonstrating new products often carry some form of the admonition: "Professionals were used. Do not try this at home." This caution also applies to the implementation of evidence-based programs. One advantage of using an evidence-based program, compared with developing a new one, is that others have used it before and, in some cases, have developed strategies for overcoming obstacles and implementing the program effectively. Drawing on outside professional assistance and experience is a key ingredient in quality implementation and successful outcomes. Evidence-based programs often come with developed training and technical assistance packages, fidelity guidelines, and monitoring processes. Indeed, high-quality implementation is the joint responsibility of multiple stakeholders, typically including funders and policy makers, program developers and researchers, local practitioners, and local administrators.
There may be cases in which a brief, simple program can be learned by reading a manual or completing a short workshop or online training session, but these are rare exceptions to the rule that outside assistance is needed to achieve quality implementation. Moreover, it is wishful thinking to expect that a few simple "magic bullets" will achieve important social goals.
Practitioners can find assistance in selecting and implementing evidence-based programs in various ways. For example, there may be a national replication office for a specific program. Other organizations offer materials, training, and guidance for several program models, as well as information about consultants who are willing to provide professional development services for various programs. Some examples of these resources are provided in the Appendix of this report.
It is possible to adapt an evidence-based program to fit local circumstances and needs, as long as the program's core components, established by theory or, preferably, through empirical research, are retained unmodified.