Overview of the Final Report of the Seattle-Denver Income Maintenance Experiment: The Treatment-Control Comparison


We have been discussing random assignment of treatment to subjects in an experimental study.  One "treatment" that is frequently tested is no treatment at all, in other words the "control" treatment.  Families or individuals enrolled in the control group are not eligible for any special benefits or services, but provide the same information to the experimenters as that provided by the enrollees in the experimental treatments.  The control families, of course, do not exist in a vacuum.  They are eligible for, and participate in, ongoing programs similar to the experimental treatments.  It is possible to imagine a situation in which, midway through an experiment, a regular government program is implemented that is exactly like the experimental treatment.  In such a case, all differences in behavior between the experimental and control groups might disappear.  This is not to say that the treatment has lost its effect; it means merely that the difference in environments experienced by experimental and control families has disappeared.  In the case of SIME/DIME, for example, members of the control group were potentially eligible for the AFDC and AFDC-UF programs and Food Stamps, as well as for a variety of job training programs.  Any observed experimental-control differences in outcomes, therefore, must be interpreted not as the effects of the treatments against a no-program baseline, but as estimates of the effect of replacing the early-1970s status quo with the experimental programs.
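To make the interpretive point concrete, the following minimal sketch (in Python, with entirely hypothetical hours-worked figures) computes the basic experimental-control comparison.  The resulting difference estimates the differential effect of the experimental treatment relative to the existing programs, not relative to no program at all.

```python
# A minimal sketch of the treatment-control comparison.
# All hours-worked values below are made up for illustration.
import statistics

# Hypothetical annual hours worked, observed at the end of the experiment.
experimental_hours = [1480, 1320, 1610, 1200, 1550, 1390]
control_hours      = [1720, 1650, 1580, 1800, 1490, 1700]

# Because controls remain eligible for AFDC, AFDC-UF, Food Stamps, and
# job training, this difference estimates the *differential* effect of
# the experimental treatment relative to the early-1970s status quo,
# not the effect of the treatment versus "no program at all".
effect = statistics.mean(experimental_hours) - statistics.mean(control_hours)
print(f"Estimated differential effect on annual hours worked: {effect:.0f}")
```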

In addition to such external influences, certain occurrences within the experimental environment may pose problems for the interpretation of the results.  Two that are inevitable in any experiment are sample attrition and mismeasurement of behavior.  Attrition occurs when members of the experimental and control groups drop out of the sample and stop providing information to the experimenters.  Mismeasurement occurs when information used to measure the effect of the treatments turns out to be inaccurate.  Since most of the information used to evaluate SIME/DIME was supplied in personal interviews, the predominant form of mismeasurement was misreporting of income, hours worked, and earnings.

If we could guarantee that the incidence of attrition and misreporting were random, and thus identical for the experimental and control groups, the observed experimental responses would provide undistorted estimates of treatment effects.  But the incentives to drop out of the experiment and to report income incorrectly may differ for the two groups.  With respect to misreporting, experimentals might be expected to underreport in comparison with controls, because the less income they report the larger their benefit will be.  Controls receiving AFDC face a similar incentive vis-à-vis the welfare office, but a weaker one with respect to the SIME/DIME interviewers; moreover, because AFDC benefit levels were lower than SIME/DIME benefit levels, controls' incentive to underreport earnings to the welfare office may have been relatively weaker than experimentals'.  If controls report to the interviewers a higher fraction of their earnings than do experimentals, the misreporting bias leads to an overestimate of the actual work reduction effect; conversely, if experimentals report a higher fraction of their earnings than controls, the actual work reduction effect is underestimated.  With respect to attrition, people in the control group and in the low-benefit treatments might be expected to drop out with greater frequency than those on the high-benefit plans, because they have less to lose by leaving the experiment.  But those on the high-benefit plans can be expected to have a larger behavioral response to the experiment.  Therefore, attrition may cause the observed experimental-control difference to overestimate the actual effect, unless the difference in attrition rates can be explained by measurable family characteristics and these characteristics are controlled for in the analysis.
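The following simulation, a sketch with hypothetical parameters rather than SIME/DIME data, illustrates both distortions: differential underreporting makes the observed difference overstate the true work reduction, and heavier attrition among small responders leaves a survivor sample that overweights large responders.

```python
# Hypothetical simulation of misreporting and attrition biases.
import random

random.seed(0)
N = 10_000
TRUE_EFFECT = -200   # assumed true change in annual hours worked

# True hours worked: controls average 1700; experimentals work 200 fewer.
controls      = [random.gauss(1700, 150) for _ in range(N)]
experimentals = [random.gauss(1700 + TRUE_EFFECT, 150) for _ in range(N)]

def mean(xs):
    return sum(xs) / len(xs)

# Misreporting: experimentals report only 95% of their hours, controls 100%.
# The observed difference then overstates the true work reduction.
observed = mean([h * 0.95 for h in experimentals]) - mean(controls)
print(f"true effect {TRUE_EFFECT}, observed with misreporting: {observed:.0f}")

# Attrition: suppose a low-benefit plan (small response, -100 hours) loses
# 40% of its sample while a high-benefit plan (large response, -300 hours)
# loses only 10%.  The surviving pooled sample overweights large responders,
# so the experimental-control difference again overstates the average effect.
low  = [random.gauss(1600, 150) for _ in range(N)]   # true effect -100
high = [random.gauss(1400, 150) for _ in range(N)]   # true effect -300
stayers = ([h for h in low  if random.random() < 0.6] +
           [h for h in high if random.random() < 0.9])
print(f"pooled true effect -200, observed with attrition: "
      f"{mean(stayers) - mean(controls):.0f}")
```

Under these assumed parameters, both observed differences exceed the true 200-hour reduction in magnitude, in line with the reasoning above.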

Additional difficulties in interpreting results may arise in an experiment like SIME/DIME, where different types of treatments were tested in combination.  Although SIME/DIME was designed so that the independent effects of the two types of treatment (cash transfers and job counseling/training subsidies) could be separately measured, the separation of effects introduces additional complexity, and thus potential controversy, into the analysis of the experimental data.
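As a sketch of what separating the two treatment types involves (the specification here is illustrative, not the estimation procedure actually used in SIME/DIME), one can regress the outcome on a cash-transfer indicator, a counseling/training indicator, and their interaction; with random assignment to the treatment combinations, the coefficients recover each treatment's independent effect.

```python
# Illustrative factorial-design regression on simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 8_000
cash       = rng.integers(0, 2, n)   # 1 = enrolled in a cash-transfer plan
counseling = rng.integers(0, 2, n)   # 1 = offered counseling/training subsidy

# Simulated annual hours: independent effects plus a small interaction
# (all effect sizes are hypothetical).
hours = (1700 - 150 * cash + 40 * counseling - 30 * cash * counseling
         + rng.normal(0, 150, n))

# Columns: intercept, cash effect, counseling effect, interaction.
X = np.column_stack([np.ones(n), cash, counseling, cash * counseling])
coef, *_ = np.linalg.lstsq(X, hours, rcond=None)
print("intercept, cash, counseling, interaction:", np.round(coef, 1))
```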

Finally, there are inherent limitations in social experiments regardless of design quality.  Because experiments have finite durations, they may not completely represent the conditions of a fully implemented, permanent national program.  Participants may alter their behavior in response not only to the program options being tested but to the experiment itself, a phenomenon known as the "Hawthorne effect".  Furthermore, participants, knowing that the experimental conditions are only temporary, may not respond as they would to a permanent program.  For example, given the opportunity to participate in an income maintenance experiment, individuals may use it to increase their schooling, itself a temporary activity.