Two data sources were used for the analyses presented in this chapter:
- Case file data. MDRC staff reviewed JOBS case files for a random subsample of integrated and traditional group members.(1) These case file data provide information on participation in activities that occurred as part of the JOBS program.
- Survey data. A survey administered to a random subsample of integrated, traditional, and control group members asked a series of questions about sample members' involvement in employment-related activities during the two-year period following their entry into the evaluation.(2) These survey data provide information on participation in employment-related activities, both inside and outside the JOBS program, and were used to estimate the difference between participation rates for the integrated and control groups and between the traditional and control groups, that is, the impact of the programs on participation.
The two data sources do not yield identical results. Most important, the case file data show substantially higher participation rates in the integrated program than in the traditional program, whereas the survey data show only a small difference. This discrepancy may be partly explained by the fact that the two data sources cover different cohorts of the Columbus evaluation sample: Case files were reviewed for sample members randomly assigned between October 1992 and March 1993, whereas the survey was administered to sample members randomly assigned between January and December 1993. Analysis of the early cohort of the survey sample (those assigned from January through March 1993) revealed larger differences in participation levels between the integrated and traditional programs than were found for the full survey sample. This finding, along with field research evidence that the traditional program strengthened its participation monitoring and enforcement procedures over time, suggests that participation differences between the two programs were larger earlier in the follow-up period. Therefore, the case file results presented in this chapter may somewhat overstate the differences between the two programs.
The researchers are confident, however, that the general finding from the case file data, that the integrated program generated more participation than the traditional one, is valid. This confidence rests on three factors. First, case file data are considered the best source of information on participation in activities within a program. Second, the difference between participation levels in the two programs indicated by the case file data is very substantial; even if the traditional program succeeded in generating more participation over time, it is highly unlikely that the difference between the programs was erased. Third, a higher participation rate in the integrated program is consistent with some of the key results from the implementation analysis, namely, that the integrated case managers tracked participants more closely and provided more personalized attention.
It is not known why the survey does not show a larger difference between the participation levels of the integrated and traditional groups. Various possible explanations were explored, but none could be confirmed. Survey data are used in this chapter to measure whether the two programs increased participation above the control group level; as the last section of the chapter shows, the magnitude of the impacts is substantial enough that the precision of the program groups' participation levels is not crucial.
One of the major reasons for conducting the test of integrated and traditional case management was to determine whether one approach was more effective in maximizing participation in welfare-to-work activities and in enforcing the "social contract" idea that people who receive welfare should be engaged in employment-focused services. The evaluation designers hypothesized that the integrated program would lead to a higher show-up rate at JOBS orientation and subsequently to a higher participation rate in JOBS activities, and thus would more effectively enforce the social contract. This hypothesis was based primarily on the belief that welfare recipients would take the threat of financial sanction more seriously when it came from a case manager who could impose the sanction herself. In fact, as reported in Chapter 2, traditional JOBS case managers told MDRC staff that it was sometimes difficult to persuade recipients to comply with program requirements because the case managers could not impose sanctions themselves. In addition, the evaluation designers thought that recipients might have more difficulty avoiding participation requirements if they had to deal with one worker who knew their whole situation rather than two workers who each had limited information about their JOBS and AFDC statuses.