Of the 25 effective programs, 16 (64%) used experimental designs that randomized subjects to varying levels of the intervention. This speaks to the strength of the evaluations conducted by youth development investigators over the last 15 years. Reviews of prevention programs commonly repeat the refrain that the state of evaluation is weak and underdeveloped. In fact, over half of this group chose to employ the rigorous approach of random assignment.
How effective programs evaluate, anticipate, and overcome roadblocks to the use of experimental designs needs to be studied. The 16 studies with strong experimental evaluations clearly overcame the objections and obstacles commonly associated with random assignment. Several other studies, those that eventually reported using quasi-experimental designs, began with an experimental design, only to be "forced" to adapt it because of issues, generally sociopolitical or environmental, that precluded full randomization. There is indeed a range of practical and human impediments to random assignment. These include objections from line staff and parents who feel that random assignment excludes some children who are equally in need, as well as difficulties obtaining parental consent or permission.

Programs such as Life Skills Training nonetheless managed to conduct rigorous evaluations, with long-term follow-up, on extremely large samples of youth. Such programs demonstrated that the various objections to using an experimental design on a large-scale project (and to long-term follow-up) could be overcome. The strongest evaluations also suggest that a clear commitment to the principles of random assignment frequently went hand in hand with an evaluation's ability to carry out that commitment. Programs such as the Quantum Opportunities Program remained firm about randomly selecting youth who met program requirements and then recruiting them, rather than relying on a sample of self-selected youths. Not only did this provide more rigor, but it also gave investigators some insight into issues of program "take-up." In the Big Brothers/Big Sisters evaluation, evaluators did not want to withhold mentoring opportunities from research subjects.
Instead, they used an experimental design in which applicants were randomly assigned either to a mentor or to an 18-month waiting list, during which time evaluators collected data from them but did not provide a mentor.