Whether a positive youth development program can reliably demonstrate a meaningful effect on the children it targets is central to evaluation quality. The most reliable design for determining intervention effects is the experimental research design, in which participants are randomly assigned to different conditions or levels of the intervention. Randomization allows the investigator to eliminate systematic differences between the participants in the conditions: because the groups can be expected to be equivalent on average before the intervention, differences observed afterward can be interpreted as effects of the intervention itself, permitting the highest level of confidence in the conclusions. Although the experimental method is not the only method through which reliable differences may be discovered, it is the best choice because, more than any other research design, it removes or minimizes uncertainty about whether the intervention had effects.

The second most reliable method is a rigorous quasi-experimental design that uses a nonrandomly assigned comparison group. The best quasi-experimental designs seek a comparison group whose participants closely resemble the program group prior to the intervention, and they examine many possible sources of pre-intervention differences between the two groups in order to rule out those differences as explanations for post-intervention differences. The more rigorous this examination, the greater the confidence that post-intervention differences are due to the intervention rather than to preexisting subject differences.
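The logic of random assignment described above can be sketched in a few lines of code. This is a purely illustrative example, not part of any actual evaluation: the participant identifiers and pre-test scores are invented, and the sketch simply shuffles the pool and splits it in half, the mechanism by which randomization tends to equate groups on average before the intervention.

```python
import random

def randomize(participants, seed=0):
    """Randomly split a participant pool into two equally sized conditions."""
    pool = list(participants)
    random.Random(seed).shuffle(pool)  # random order removes systematic placement
    half = len(pool) // 2
    return pool[:half], pool[half:]

# Hypothetical pre-test scores for ten participants (invented for illustration).
scores = {f"p{i}": s for i, s in enumerate([52, 61, 47, 58, 66, 49, 71, 55, 60, 63])}

treatment, control = randomize(scores)

def mean(group):
    return sum(scores[p] for p in group) / len(group)

# With random assignment, the two groups' pre-test means tend to be close,
# so a later difference can be attributed to the intervention.
print("treatment mean:", round(mean(treatment), 1))
print("control mean:  ", round(mean(control), 1))
```

A quasi-experimental design, by contrast, would have to measure and argue away any gap between these pre-test means, since the comparison group is not formed by chance.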