Approaches to Evaluating Welfare Reform: Lessons from Five State Demonstrations

2. State Approaches

10/01/1996

In general, the five evaluations we reviewed handled missing data in the same manner: observations with missing outcomes or background data were excluded from the impact analyses. The loss of observations to missing data appears to have been small in most cases. For example, Minnesota's evaluation reported that only one percent of the research sample failed to complete a background information form, and less than five percent was excluded from the six-month impact analyses because of missing welfare participation data. The Michigan and Minnesota evaluations proposed using sample selection procedures to adjust impact estimates in instances in which a large portion of the sample contained missing data; in practice, however, these procedures were not employed because the portions of the sample with missing data were small.
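
To make the mechanics concrete, the sketch below illustrates complete-case (listwise) deletion of the kind these evaluations used. It is only an illustration: the data file, column names, and group labels are hypothetical, not drawn from any of the five evaluations.

```python
import pandas as pd

# Hypothetical research-sample file; all column names are illustrative.
sample = pd.read_csv("research_sample.csv")

# Complete-case (listwise) deletion: drop observations missing either
# the outcome (welfare participation) or the baseline background form.
analysis = sample.dropna(subset=["welfare_participation", "background_form"])

# Report the share of the sample lost to missing data -- the evaluations
# found this loss to be small (under five percent in Minnesota).
lost = 1 - len(analysis) / len(sample)
print(f"Share of sample excluded for missing data: {lost:.1%}")

# Simple experimental-control difference on the remaining cases.
means = analysis.groupby("group")["welfare_participation"].mean()
print(f"Estimated impact: {means['experimental'] - means['control']:.3f}")
```

Reporting the share of cases lost, as Minnesota's evaluation did, lets readers judge whether the deletion is large enough to plausibly affect the impact estimates.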

For the four random-assignment evaluations, subgroups usually were defined on the basis of baseline characteristics rather than events occurring after random assignment. In certain instances, however, deletions from the sample may have reduced the strict comparability of the experimental and control groups. In Michigan's evaluation, for example, cases active for only one month were deleted from the research sample, reducing the sample by between two and three percent after two years of data had been processed. The evaluator justified this decision on the grounds that "a one-month eligibility period for AFDC or SFA is somewhat unusual and therefore suspect . . . we assumed that cases active only one month would have left with or without exposure to TSMF and should not be considered part of the demonstration."(6) As noted earlier, denied applicants for both AFDC and SFA were excluded from the Michigan sample because of data limitations; the evaluator argued that there was no evidence that these deletions introduced "important intergroup differences in baseline characteristics" of experimental and control cases.(7)
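
The claim that deletions did not introduce "important intergroup differences in baseline characteristics" can be checked with a simple balance test. The sketch below compares retained experimental and control cases on a few baseline variables; the data file and variable names are hypothetical, and this is one common way to run such a check rather than the Michigan evaluator's actual procedure.

```python
import pandas as pd
from scipy import stats

# Hypothetical data; the point is the balance check, not the numbers.
sample = pd.read_csv("research_sample.csv")
retained = sample[sample["months_active"] > 1]  # drop one-month cases

# Compare retained experimental and control cases on baseline traits.
for var in ["age", "prior_earnings", "num_children"]:  # assumed columns
    exp = retained.loc[retained["group"] == "experimental", var].dropna()
    ctl = retained.loc[retained["group"] == "control", var].dropna()
    t, p = stats.ttest_ind(exp, ctl, equal_var=False)
    # Small p-values would flag an intergroup difference introduced by
    # the deletion; large p-values are consistent with balance.
    print(f"{var}: t = {t:.2f}, p = {p:.3f}")
```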

In one instance, an evaluator reported outcomes for subgroups defined by events occurring after random assignment, but with an important qualification. In its report on six-month impacts from Minnesota's MFIP initiative, MDRC reported outcomes such as the welfare benefits of active cases and the earnings of employed single parents on welfare. Mean values were reported separately for experimental and control cases, but without any tests of the statistical significance of experimental-control differences. A comment in the text noted that "the subset of the MFIP group for whom averages are shown may not be comparable to the subset of the AFDC group for whom averages are shown. Thus, effects of MFIP cannot be inferred from this table."(8)
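
MDRC's caveat reflects a general point: subgroups defined by post-randomization events, such as being employed, are no longer randomly assigned, so their means can differ even when the program has no effect on the outcome being compared. The simulation below, our own illustration rather than anything from the MFIP report, makes the point with synthetic data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent employability and random assignment to the program.
skill = rng.normal(size=n)
treated = rng.integers(0, 2, size=n).astype(bool)

# The program raises employment among lower-skill cases but, by
# construction, has no effect on earnings given skill.
employed = (skill + 0.5 * treated) > 0
earnings = np.where(employed, 20_000 + 5_000 * skill, 0.0)

# Comparing earnings of *employed* cases -- a subgroup defined by a
# post-randomization event -- yields a spurious negative "impact",
# because the program pulls lower-skill cases into employment.
gap = earnings[employed & treated].mean() - earnings[employed & ~treated].mean()
print(f"Earnings gap among the employed: {gap:,.0f}")  # negative, true effect is zero
```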

In Wisconsin, certain outcomes are being collected only for cases that leave AFDC, but the evaluator has not yet decided on procedures for correcting for selection into this sample. Because the WNW evaluation design is nonexperimental, the evaluator is giving more attention to collecting data that could be useful in modeling participation in particular welfare reform programs.
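
One option the Wisconsin evaluator could eventually adopt is a two-step selection correction of the kind proposed (but not used) in the Michigan and Minnesota evaluations. The sketch below shows the standard Heckman-style version: a probit for exit from AFDC, followed by an outcome regression on leavers that includes the inverse Mills ratio. Nothing in the Wisconsin evaluation commits to this approach, and every file and variable name here is assumed.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical WNW case file; all column names are assumed.
df = pd.read_csv("wnw_cases.csv")

# Step 1: probit for selection -- which cases leave AFDC and therefore
# have post-exit outcomes observed at all.
Z = sm.add_constant(df[["age", "prior_earnings", "local_unemployment"]])
selection = sm.Probit(df["left_afdc"], Z).fit()
xb = Z @ selection.params                  # linear index from the probit
df["mills"] = norm.pdf(xb) / norm.cdf(xb)  # inverse Mills ratio

# Step 2: outcome regression on leavers only, adding the Mills ratio to
# absorb selection into the leaver sample.
leavers = df[df["left_afdc"] == 1]
X = sm.add_constant(leavers[["age", "prior_earnings", "in_program", "mills"]])
print(sm.OLS(leavers["earnings_after_exit"], X).fit().summary())
```

The first stage works best when it includes a variable, such as local labor-market conditions, that affects exit from AFDC but not the outcome itself; without such an exclusion restriction the correction rests entirely on functional-form assumptions.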