When evaluations cannot use the experimental method to contrast two or more intervention conditions, quasi-experimental designs are often used. Quasi-experimental designs also use comparison groups and pre- and post-measurement to look for program effects, but they carry a heavier burden of proof: because participants are not randomly assigned to program and comparison groups, there may be preexisting differences between groups. Nine evaluations (36%) used strong quasi-experimental designs to compensate either for being unable to use random assignment or, as was true for almost half of these interventions, for ending up in the compromise position of "partial" random assignment. These designs dealt with the absence of random assignment in several ways. They began by establishing the comparability of participants in the program and comparison conditions, using methods such as matching on individual factors and examining subject differences before the intervention began. They then analyzed differences observed after the intervention to rule out rival explanations that could lead to erroneously concluding that group differences were produced by the program, including analysis of differential dropout between conditions and exploration of other group differences that might account for the observed outcomes. In addressing the absence of random assignment, these studies made a persuasive case that participants in the intervention and comparison conditions were comparable. If, for example, an evaluation indicated that a much higher number of youth dropped out of the comparison group than of the intervention group, we required that the evaluators analyze these differences and investigate the effects of differential attrition on their findings. If they did so and produced evidence that supported confidence in their findings, the study was categorized as a rigorous quasi-experimental design.
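As a concrete illustration of the kind of differential-attrition check described above, one common approach is a two-proportion z-test comparing dropout rates across conditions. This is a minimal sketch, not the procedure used by any particular evaluation in the review, and the enrollment and dropout figures below are hypothetical:

```python
import math

def attrition_z_test(drop_a, n_a, drop_b, n_b):
    """Two-proportion z-test for differential attrition between two conditions.

    drop_a, drop_b: number of dropouts in each condition
    n_a, n_b: number initially enrolled in each condition
    Returns (z statistic, two-tailed p-value).
    """
    p_a, p_b = drop_a / n_a, drop_b / n_b
    # Pooled dropout proportion under the null of equal attrition rates.
    p_pool = (drop_a + drop_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-tailed p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 12 of 100 intervention youth dropped out,
# versus 30 of 100 comparison youth.
z, p = attrition_z_test(12, 100, 30, 100)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A significant result here would not by itself invalidate the findings, but it would signal that the evaluators need to show the remaining groups are still comparable, for example by comparing baseline characteristics of completers in each condition.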