An evaluator can detect problems with the method of random assignment in two ways. First, as part of an implementation study, the evaluator can conduct interviews with program staff in the research sites. These interviews can shed more light on how random assignment was accomplished in practice, as well as whether caseworkers or prospective clients had any opportunity to influence (either intentionally or unintentionally) the odds of a case being assigned to the experimental group instead of the control group.
A second way to assess random assignment is to compare the baseline characteristics of the experimental and control groups as a preface to an impact study. These comparisons would include statistical tests of differences in the average levels of baseline characteristics for experimental and control cases. On average, there should be few statistically significant differences between the baseline characteristics of the experimental group and those of the control group. The occasional detection of a statistically significant difference between experimental and control cases does not prove that random assignment was flawed. However, the more statistically significant differences that are detected between experimental and control cases at baseline, and the larger these differences, the more likely it is that the assignment of cases to the respective groups was not entirely random.(3)
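The baseline-comparison logic can be sketched in code. The example below is illustrative only: the covariate names (prior earnings, age, years of education) and the simulated data are hypothetical stand-ins for real baseline records, and it uses a Welch t statistic with a normal approximation to the p-value, which is adequate for samples of this size.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical baseline covariates (names illustrative): simulate an
# experimental and a control group drawn from the same population, as a
# sound random-assignment process should produce.
def simulate_group(n):
    return {
        "prior_earnings": [random.gauss(1200.0, 400.0) for _ in range(n)],
        "age": [random.gauss(31.0, 6.0) for _ in range(n)],
        "years_education": [random.gauss(11.5, 2.0) for _ in range(n)],
    }

experimental = simulate_group(500)
control = simulate_group(500)

def two_sample_test(x, y):
    """Welch t statistic, with a normal approximation for the
    two-sided p-value (reasonable for samples this large)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = math.sqrt(vx / len(x) + vy / len(y))
    t = (mx - my) / se
    p = 2.0 * (1.0 - statistics.NormalDist().cdf(abs(t)))
    return t, p

flagged = 0
for name in experimental:
    t, p = two_sample_test(experimental[name], control[name])
    verdict = "DIFFERS at 5% level" if p < 0.05 else "no significant difference"
    print(f"{name:16s} t={t:+.2f} p={p:.3f} {verdict}")
    flagged += p < 0.05

# Under truly random assignment, roughly 5% of such tests will flag a
# difference purely by chance, so an isolated flag is expected and is not
# by itself evidence that the assignment process was flawed.
print(f"{flagged} of {len(experimental)} covariates differ significantly")
```

In practice the evaluator would run one such test per baseline covariate on the actual case records and look at the pattern of results, not any single test, before questioning the assignment process.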
If both the interviews with program staff and the comparisons of experimental and control cases uncover irregularities, then there is a good chance that the process of random assignment was flawed. Otherwise, the assumption that the experimental and control groups are comparable to each other generally can be maintained, and experimental-control differences in subsequent outcomes can be attributed to differential exposure to welfare reform policies.