Of the five waiver evaluations reviewed, only one--Minnesota's MFIP evaluation--included multiple experimental groups. In the three urban counties participating in the MFIP demonstration, the research sample included two experimental and up to two control groups. The full MFIP experimental group (E1) received both the financial incentives and the case management provisions of the welfare reform package. The partial MFIP experimental group (E2) received the financial incentives portion of the welfare reform package but continued to receive JOBS (job-training) services under the pre-welfare reform rules. The AFDC + JOBS control group (C1) was subject to the full set of control policies, while the AFDC-only control group (C2) was not eligible for JOBS services. By comparing differences between these groups, it is possible to distinguish the impact of the full welfare reform package (E1 - C1) from the impact of the case management portion of the welfare reform package (E1 - E2), the impact of the financial incentives portion of the welfare reform package (E2 - C1), and the impact of current JOBS services (C1 - C2).
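The group-difference arithmetic above can be illustrated with a short sketch. The outcome values below are hypothetical, chosen only to show how the four comparisons are computed; they are not actual MFIP results.

```python
# Hypothetical mean outcomes (e.g., quarterly earnings in dollars)
# for each MFIP research group -- illustrative values only.
means = {
    "E1": 2400.0,  # full MFIP: financial incentives + case management
    "E2": 2250.0,  # partial MFIP: financial incentives only
    "C1": 2000.0,  # AFDC + JOBS control group
    "C2": 1900.0,  # AFDC-only control group (no JOBS services)
}

# Each impact is a simple difference in group means.
impacts = {
    "full package (E1 - C1)": means["E1"] - means["C1"],
    "case management (E1 - E2)": means["E1"] - means["E2"],
    "financial incentives (E2 - C1)": means["E2"] - means["C1"],
    "current JOBS services (C1 - C2)": means["C1"] - means["C2"],
}

for label, impact in impacts.items():
    print(f"{label}: {impact:+.0f}")

# By construction, (E1 - E2) + (E2 - C1) equals (E1 - C1), so the case
# management and financial incentives comparisons partition the
# full-package impact -- though interaction effects mean neither
# component difference necessarily equals the impact that component
# would have had if implemented alone.
```

Note that the decomposition identity holds mechanically for any set of group means; the substantive interpretation of each difference depends on the groups being equivalent at random assignment.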
In two other states--California and Michigan--a two-group random-assignment design was originally adopted to study impacts from an initial set of welfare reform waivers and was subsequently used to study impacts from a combination of two waiver packages. In California, the APDP was implemented in December 1992 and the WPDP in March 1994. A two-group random-assignment design was adopted, with experimental cases subject to whatever reform policies had been implemented and control cases subject to neither set of welfare reform policies. Random assignment of applicant cases was scheduled to continue through December 1996. Presumably, cases that went through random assignment before March 1994 could be studied for up to 15 months to infer impacts from the APDP alone, while cases that went through random assignment after March 1994 could be studied to infer impacts from the combination of the APDP and WPDP. For cases that went through random assignment before March 1994, impacts measured after March 1994 would need to be attributed to the APDP plus the WPDP implemented some time later.
In Michigan, the first set of TSMF provisions was implemented in October 1992, and an additional set of provisions approved under a second waiver was implemented in October 1994. As in California, the evaluation sample consists of a single experimental group and a control group, with the experimental group subject to all welfare reform policies implemented to date. Random assignment of applicants was scheduled to continue until October 1996. The evaluator is planning to distinguish impacts for cases that applied for assistance before October 1994 from impacts for those that applied after October 1994. The evaluator has no plans to compare the impacts of the first package with the impacts of the combination of the two waiver packages, because the characteristics of applicants differed between 1992 and 1994. In addition, there are no systematic plans for distinguishing the impacts of separate waiver provisions within each major reform package, although the timing of particular provisions might allow some inferences to be made. For instance, for recipient cases, work requirements were not implemented until April 1993, but the first set of impacts that the evaluator reported for this group was measured as of October 1993, after work requirements had already been implemented.
Colorado's CPREP program includes a variety of welfare reform provisions in a single package; CPREP is being evaluated using a two-group experimental design. Currently, no efforts are under way to estimate separate impacts of the different provisions of this package.
In Wisconsin, the evaluator proposed distinguishing impacts of individual components by comparing participants in those components. As noted earlier, impacts of individual components estimated through these nonexperimental methods are likely to be less reliable than impacts estimated through experimental methods, because there is no guarantee that the underlying comparisons are between equivalent groups of cases.
All five of the evaluations we reviewed include process studies based in part on interviews with program staff and clients about their experiences with welfare reform. These interviews and related analyses will not enable evaluators to attach numerical values to impacts from separate provisions of welfare reform packages; however, they may help to identify particular provisions of each package that are strongly associated with particular outcomes from welfare reform.