1. In practice, few experiments achieve these ideals. For a discussion of these issues, see Burtless (1995a), and Heckman and Smith (1995).
2. In practice, the randomized experiments that have been implemented have offered job placement services to the experimental group but not to the control group. Differences in future outcomes, therefore, reflect both the impact of the services on the probability of obtaining a job and the impact of having a job on future outcomes.
3. While it is possible to control for differences in some indicators of skills, such as education, it is not possible to control for unobserved differences, such as ambition.
4. "Natural experiments" in which differences in job availability create the initial differences in work experience but not future differences in wages are difficult to imagine. For example, people living in depressed areas will have a lower probability of working initially, and they would also have lower wages if they worked.
5. See Gueron and Pauly (1991, table 1.1).
6. In almost all cases, the earnings gains were larger than the program cost, indicating that these strategies are cost-effective.
7. See Friedlander and Burtless (1995) for the five-year follow-up.
8. The Baltimore program offered Aid to Families with Dependent Children (AFDC) recipients a variety of options, including training. The Portland JOBS program, which went even further in this direction by providing training and encouraging people to seek and accept "good" jobs rather than accepting any job, is another of a handful of programs with multiple-year impacts. (See Manpower Demonstration Research Corporation 1998.)
9. See Friedlander and Burtless (1995, table 1-2).
10. These programs also tend to be cost-effective: the small benefits are achieved at even smaller costs.
11. The difference between men and women may reflect the combined impact of lower wage growth from part-time work and greater part-time work among women.
12. See Gottschalk (1997). Furthermore, if this age premium continues to increase, the gains will be even larger for current welfare recipients who take jobs.
13. For example, Schiller's (1994) study of wage gains of minimum wage workers largely reflects the expected absolute gains for youth as they move to jobs that are not likely to be available to the target population.
14. Even if one could condition on a large number of observable factors, this would still leave many unobservable differences that are correlated with who lands the job.
15. It is necessary to assume that these variables are exogenous.
16. In ongoing work, Connolly and Gottschalk estimate both wage growth and job duration models for females disaggregated by education. Preliminary results indicate that high school dropouts experience both lower wage growth within jobs and lower growth in starting wages across jobs than females with more education. In this sense, the jobs filled by high school dropouts are "dead-end jobs." Connolly and Gottschalk also find that high school dropouts are less likely to leave their jobs. Furthermore, those dropouts with the lowest wage growth are the least likely to leave their jobs. In this sense, high school dropouts are "stuck" in "dead-end jobs."
17. Their focus is on whether welfare participation has a causal impact on wages but their estimates yield wage growth rates for recipients, which is the focus of this review. Since welfare participation is based on income, they adjust their estimates to take account of the fact that welfare recipients are likely to have lower wages simply as a result of selection into the program, whether or not being on the program has any causal effect on wages. The methodological problem is how to determine whether low wages cause participation or participation causes low wages. One approach is to find "instruments," which are variables that mimic what random assignment would have done. For example, Moffitt and Rangarajan use the AFDC guarantee and family size as variables that affect participation but not wages. If women with larger families or living in high-benefit states are more likely to participate in AFDC, then this exogenous variation can be used to infer the impact of participation on wages.
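The instrumental-variables logic described in this note can be sketched with simulated data. Everything below is illustrative, not Moffitt and Rangarajan's actual specification or data: the instrument strength, the size of the selection effect, and the assumed causal effect of participation on log wages are all made-up numbers chosen only to show how two-stage least squares undoes selection bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical instrument: state benefit-guarantee level, assumed to shift
# participation but to have no direct effect on wages.
guarantee = rng.normal(0.0, 1.0, n)
# Unobserved factor ("ability") that raises wages and lowers participation,
# creating the selection problem the note describes.
ability = rng.normal(0.0, 1.0, n)

# Participation depends on the instrument and the unobservable.
participation = (0.8 * guarantee - 0.7 * ability
                 + rng.normal(0.0, 1.0, n) > 0).astype(float)

true_effect = -0.10  # assumed causal effect of participation on log wages
log_wage = (1.5 + true_effect * participation
            + 0.5 * ability + rng.normal(0.0, 0.3, n))

# OLS is biased: participants have lower unobserved ability, so the
# estimated "effect" of participation is too negative.
X = np.column_stack([np.ones(n), participation])
ols = np.linalg.lstsq(X, log_wage, rcond=None)[0]

# Two-stage least squares: first stage projects participation on the
# instrument; second stage regresses wages on the fitted participation.
Z = np.column_stack([np.ones(n), guarantee])
first_stage = np.linalg.lstsq(Z, participation, rcond=None)[0]
p_hat = Z @ first_stage
X2 = np.column_stack([np.ones(n), p_hat])
iv = np.linalg.lstsq(X2, log_wage, rcond=None)[0]
```

Because only the instrument-driven variation in participation is used in the second stage, the IV estimate stays close to the assumed causal effect, while OLS conflates that effect with selection on ability.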
18. This is based on the coefficients on age and age squared of .024 and -.0002 in column 1 of table 6.6.
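The arithmetic behind this note is a standard quadratic age profile: with log wages regressed on age and age squared, the implied proportional wage growth at a given age is the derivative of that profile. A minimal sketch, using only the two coefficients cited in the note:

```python
# Quadratic age profile: log(wage) = ... + b1*age + b2*age**2 + ...
b1, b2 = 0.024, -0.0002  # coefficients cited from column 1 of table 6.6

def annual_wage_growth(age):
    """Implied proportional wage growth at a given age: d log(wage) / d age."""
    return b1 + 2 * b2 * age

def growth_over_span(age_start, age_end):
    """Implied cumulative log-wage growth between two ages."""
    return b1 * (age_end - age_start) + b2 * (age_end**2 - age_start**2)

# At age 30 the implied growth rate is 0.024 - 2*0.0002*30 = 0.012,
# i.e., about 1.2 percent per year.
```

For example, `growth_over_span(25, 35)` gives 0.024*10 - 0.0002*(1225 - 625) = 0.12, roughly a 12 percent cumulative gain over that decade.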
19. Moffitt and Rangarajan answer the latter question.
20. Bartik (1997, table 9). Working more hours or at a higher wage was associated with larger wage gains.
21. Committee on Ways and Means (1993, table 31) indicates that only 6.4 percent of AFDC mothers worked in 1991.
22. About a quarter lived in households receiving public assistance.
23. See Lennon and Newman (1995).
24. Child care costs are taken from Blank (1997, table 7.1) and the EITC is calculated under 1998 rules. The EITC is equal to 40 percent of earnings up to a maximum of $3,656, which is reached when earnings equal $9,140. Persons with income between $9,140 and $11,930 receive the maximum benefit. The EITC benefit is then reduced by 21.06 percent for earnings above $11,930.
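The 1998 EITC schedule described in this note is a simple piecewise-linear function of earnings: a 40 percent phase-in up to $9,140, a flat maximum of $3,656 through $11,930, and a 21.06 percent phase-out thereafter. A direct transcription of those rules (taking earnings as the only input, as the note does, and ignoring other eligibility details):

```python
def eitc_1998(earnings):
    """EITC benefit under the 1998 rules as stated in the note."""
    phase_in_rate = 0.40
    max_credit = 3656.0       # reached when earnings equal $9,140
    plateau_end = 11930.0     # phase-out begins above this point
    phase_out_rate = 0.2106

    if earnings <= max_credit / phase_in_rate:   # phase-in: up to $9,140
        return phase_in_rate * earnings
    if earnings <= plateau_end:                  # plateau: $9,140-$11,930
        return max_credit
    # Phase-out: benefit falls by 21.06 cents per dollar above $11,930.
    return max(0.0, max_credit - phase_out_rate * (earnings - plateau_end))
```

For example, a worker earning $5,000 receives $2,000, one earning $9,140 receives the full $3,656, and the credit is exhausted once earnings exceed roughly $29,290.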
25. See Holcomb et al. (1998, table 8.1). While some sites had higher wages, there is a clear inverse relationship between the percent of welfare recipients who found work and their average wage. Culpeper, Virginia, had the lowest average wage ($5.37) but the highest proportion of recipients who found work (66 percent).
26. Friedlander and Burtless (1995, table 4-1) show average total earnings throughout the experiment for four sites. These can be converted into annual earnings. Dividing by $5.50 yields the number of hours a person would have had to work at a $5.50 wage in order to achieve the average annual earnings. Experimentals in Arkansas had the lowest average annual earnings ($1,490) and Baltimore had the highest ($4,221). Using 35 hours per week as full-time, this translates to .14 of full-time for Arkansas and .42 for Baltimore.
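The conversion in this note can be reproduced directly: divide annual earnings by the $5.50 wage to get implied hours, then compare with a full-time year of 35 hours per week. The 52-weeks-per-year annualization below is an assumption; the note's .14 figure for Arkansas suggests a slightly different rounding or weeks assumption in the original, while the Baltimore figure matches.

```python
WAGE = 5.50                  # assumed hourly wage from the note
FULL_TIME_HOURS = 35 * 52    # 35 hours/week over a 52-week year (assumed)

def fraction_of_full_time(annual_earnings):
    """Implied hours at $5.50, expressed as a share of a full-time year."""
    implied_hours = annual_earnings / WAGE
    return implied_hours / FULL_TIME_HOURS

# Arkansas ($1,490): about .15 of full-time (the note reports .14)
# Baltimore ($4,221): about .42 of full-time, matching the note
```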
27. See Gottschalk (1997) for a discussion of this issue.
28. While it was expected that training would raise wages, most of the increase in earnings reflected increases in hours worked rather than higher wages. Programs for adult men showed very little effect. For a review of the literature on evaluation of training programs, see LaLonde (1995).