
U.S. Department of Health and Human Services

Toward a Social Cost-Effectiveness Analysis of Programs to Expand Supported Employment Services: An Interpretive Review of the Literature

David Salkever

Westat, Inc.

December 31, 2010

PDF Version: http://aspe.hhs.gov/daltcp/reports/2010/supempLR.pdf (50 PDF pages)


This report was prepared under contract #HHSP23320095655WC between the U.S. Department of Health and Human Services (HHS), Office of Disability, Aging and Long-Term Care Policy (DALTCP) and Westat, Inc. For additional information about this subject, you can visit the DALTCP home page at http://aspe.hhs.gov/_/office_specific/daltcp.cfm or contact the ASPE Project Officer, Pamela Doty, at HHS/ASPE/DALTCP, Room 424E, H.H. Humphrey Building, 200 Independence Avenue, S.W., Washington, D.C. 20201. Her e-mail address is: Pamela.Doty@hhs.gov.

The opinions and views expressed in this report are those of the author. They do not necessarily reflect the views of the Department of Health and Human Services, the contractor, or any other funding organization.


TABLE OF CONTENTS

ACKNOWLEDGMENTS AND DISCLAIMER
INTRODUCTION
I. SOCIAL COSTS OF PROVIDING SUPPORTED EMPLOYMENT (SE) SERVICES
1. Comments on Conceptual Issues Relating to Costs of Implementing SE Programs
2. Results of Specific Studies of Program Costs
3. SE Implementation Costs and Cost Offsets from Other Vocational Programs
4. Other Evidence of IPS SE Cost Offsets on Mental Health Treatment Costs
5. What Does the Employment-Treatment Cost Relationship Tell Us About SE Impacts on Treatment Costs?
6. Summary of Results Relating to Costs of Services and Service Cost Offsets
7. The Impact of IPS SE on Net Consumption by IPS SE Clients: Conceptual Issues
8. IPS SE Impacts on Clients' Earnings
9. IPS SE Impacts on Non-Earned Income/Total Income
II. EVIDENCE ON EFFECTIVENESS OF IPS SERVICES
1. Employment-Related Dimensions of IPS Impacts
2. Non-vocational Dimensions of IPS Effectiveness
3. Recent Tests of Enhancements to the IPS Model
4. Variations in IPS Effectiveness with Client Characteristics
5. Limitations of Studies to Date
6. Summary and Conclusions
REFERENCES
NOTES

ACKNOWLEDGMENTS AND DISCLAIMER

The author is grateful to Richard Frank (Deputy Assistant Secretary at the Office of the Assistant Secretary for Planning and Evaluation (ASPE), Office of Disability, Aging and Long-Term Care Policy (DALTCP)), Pamela Doty (project officer at ASPE/DALTCP) and Vidhya Alakeson (formerly with ASPE) for their guidance and insights, and to the Centers for Medicare & Medicaid Services for providing financial support for the study through an intra-agency transfer to ASPE.

Any opinions, findings, and conclusions or recommendations expressed in this report are those of the author and do not necessarily reflect the views of the U.S. Department of Health and Human Services (HHS).

This report was prepared under a task order contract #HHSP23320095655WC between HHS’s ASPE and Westat.

For additional information on this project, contact Mustafa Karakus at Westat, 1600 Research Boulevard, Rockville, Maryland 20850. His e-mail address is: mustafakarakus@westat.com.

INTRODUCTION

The mounting evidence of the effectiveness of supported employment (SE) services based on the Individual Placement and Support (IPS) program model, in conjunction with the fact that relatively few of the many persons with serious mental illness (SMI) in publicly funded programs have access to such services, raises an obvious policy question: Should efforts be undertaken to promote and expand access to these evidence-based services for persons with SMI?

Assuming that at some point in the near future specific proposals for such efforts by state and/or federal agencies will be advanced, they will require objective evaluations that should be undertaken from a broad societal perspective. The current review therefore seeks to organize and interpret the growing literature on the effects of IPS SE programs from this same perspective in the hope that it can serve to inform these needed objective evaluations.

While a thorough and comprehensive evaluation of a specific policy proposal should provide valuations of both costs and benefits, the challenges to valuing some of the most important benefits of IPS SE programs are formidable. A substantial range of policy impacts can, however, be captured by applying the framework of social cost-effectiveness analysis (CEA) as formulated by Meltzer (1997, 2006). Within this framework, a comprehensive measure of social cost of a mental health services program is defined simply as costs of consumption net of earnings of the persons affected by the program. The costs of consumption can be further categorized into:

  1. The resource costs of implementing the program (i.e., costs of providing program services to the program clients).

  2. The cost impacts of the program on clients’ use of other mental health treatment and rehabilitation services.

  3. Impacts on clients’ costs of other publicly funded (or third-party funded) services that are also consumed by clients (e.g., Medicaid covered services for somatic health care).

  4. Impacts of the program on clients’ private consumption costs (i.e., the costs of goods and services purchased by clients directly with funds that they have access to, whether from their earnings or from transfer income or grants from public or private sources).

The sum of these four categories minus program impacts on clients’ earnings represents the impact of the program on consumption net of earnings. In principle, the program impacts on the components of cost, and on earnings, should be measured over a time horizon long enough to capture all relevant program impacts.
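
To keep this accounting concrete, the following minimal sketch (in Python) computes the net social cost measure just described; all dollar amounts and variable names are invented for illustration and are not drawn from any study cited in this review.

```python
# Minimal sketch of the social cost accounting described above.
# Inputs are hypothetical per-client annual impacts (program vs. no program),
# in dollars; positive values denote increases caused by the program.

def net_social_cost(program_cost, mh_treatment_impact,
                    other_public_services_impact,
                    private_consumption_impact, earnings_impact):
    """Consumption net of earnings: the four cost categories minus earnings."""
    consumption_impact = (program_cost + mh_treatment_impact +
                          other_public_services_impact + private_consumption_impact)
    return consumption_impact - earnings_impact

# Illustrative (invented) numbers: a $4,300 program whose clients use $500 less in
# other treatment, $200 more in other public services, consume $1,000 more
# privately, and earn $2,500 more per year.
print(net_social_cost(4300, -500, 200, 1000, 2500))  # -> 2500
```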

The initial sections of this review, organized into Part I, are intended to parallel these components of social costs net of earnings. After some brief comments in Part I, Section 1 on conceptual and definitional issues, Section 2 focuses on the costs of providing IPS SE services. Section 3 reviews studies that have reported on both the costs of IPS SE services and the cost offsets that arise as IPS SE services are substituted for other types of rehabilitation services. Section 4 focuses on additional evidence and analyses from the literature on IPS SE program impacts on costs of other mental health and somatic health services. Section 5 comments on the relevance, for measuring IPS SE program impacts, of data on the associations between employment status and costs of mental health and health services. Section 6 summarizes the main conclusions relating to program costs and service cost impacts. Section 7 discusses some conceptual issues relating to measurement of IPS SE program impacts on private consumption net of earnings, while Section 8 discusses evidence relating to program impacts on earnings, and Section 9 discusses evidence relating to program impacts on non-earned income and total income.

In Part II of the review we turn our attention to evidence relating to measures of effectiveness that could be used in developing cost-effectiveness (CE) ratio measures for IPS SE programs from a social perspective. Section 1 and Section 2 of Part II focus, respectively, on employment-related dimensions and non-vocational dimensions of effectiveness. Section 3 is a brief discussion of recent tests of enhancements to the IPS SE model intended to increase effectiveness, and Section 4 considers the subject of variations in effectiveness by client/patient characteristics. Finally, Section 5 discusses study limitations, and Section 6 is a brief conclusion.

I. SOCIAL COSTS OF PROVIDING SUPPORTED EMPLOYMENT (SE) SERVICES

1. Comments on Conceptual Issues Relating to Costs of Implementing SE Programs

Before reviewing the results of studies from the literature, it is useful to begin by considering several conceptual/definitional issues.

First, under the assumption that economies or diseconomies of scale are not very substantial in provision of SE services, our primary interest in empirical findings should be directed to average unit costs rather than total costs. This raises the question of what the appropriate unit of service (output) should be. While this unit could be defined as an individual contact with a client, the variety of activities involved in providing SE services may argue against this. For example, job-development work by an employment specialist (ES) may not entail any direct client contact. In addition, contacts of the ES with the client may vary substantially in time or content, and the same is probably true for ES contacts regarding job development with prospective employers.

These concerns may argue for using the client as the unit of output, regardless of the number and type of services required by the client. Since this definition of unit cost requires a time frame, it is likely that the most useful time period will be the year (since this is typically the unit of time corresponding to accounting, budgeting, and financing decisions). However, if the unit of output is the client-year, then heterogeneity in unit costs for individual clients may be an important phenomenon, with some clients requiring large numbers of services and/or time-intensive services within a given year while other clients do not.1

It also seems reasonable to expect that service cost per client should diminish with the time that the client is involved in an SE program, since first-year service requirements are more substantial while service requirements in following years may in fact be lower, especially if the client is placed in a stable job situation and has a long tenure in that job. The literature provides at least limited support for the idea that service cost per client should diminish over time. Cook et al. (2005) report a modest decline in hours of vocational service per client from approximately 3 hours per month in the early months of SE services to approximately 2 hours per month after 24 months. Drake et al. (1999) reported, in a randomized clinical trial (RCT) of IPS SE versus enhanced vocational rehabilitation (EVR) services, that both groups experienced declines from the 2-month point to the 18-month point in the percent receiving vocational services (95% to 61% for IPS and 84% to 57% for EVR), though some of the decline for IPS SE clients may have been due to declines in IPS service availability in the latter phase of the study (Dixon et al., 2002). Bush et al. (2009) characterize several previous studies as indicating that “participants [in SE] relied on vocational services less over time”.2 Bond and Kukla (2011) studied 142 IPS clients who were employed at least 10 hours per week in a competitive job and had begun a competitive job within 6 months prior to their study observation period of 24 months. They observed an initial contact rate for clients with their ESs of three per month and a decline in this rate to approximately one per month during the first 7 months of observation, with little change in this rate for the remaining 17 months of their study.3 A similar pattern, with a slightly smaller decline over time, was reported by McGuire et al. (2010) for 91 persons over a 24-month period following their randomization to an IPS treatment arm. The average number of IPS contacts dropped from 9.04 in the first quarter of study treatment to a range of 4.98-5.70 per quarter for quarters 3-8.4 Salyers et al. (2004) examined long-term follow-up experience of a small number of clients (n = 36) from two SE programs in New Hampshire and observed that after 10 years 86% of these clients were still receiving SE services (though they do not indicate the volume or intensity of these services).5

A related issue is defining the end of a period of client service. It seems likely that at least some SE programs will not have a well-defined routine process for designating client discharges from SE services and thus may not have readily available data on the number of clients being served at any point in time. Moreover, even among programs that do have well-defined processes for discharging clients, these processes may vary considerably from program to program. Financial records of payments to the SE agency, presumably by third parties (such as Medicaid, vocational rehabilitation [VR] or Developmental Disabilities Administration), could be used as indicators of numbers of clients for whom an SE agency is being paid; however, it may not be common practice for agencies to generate statistics about length of time (within the year) on the reimbursable client rolls for each client in an SE program, and therefore about average length of time (within the year) per client.

A second issue is possible heterogeneity in the scope of SE services from one provider to another. Some services that could be viewed as primarily vocational in nature, such as cognitive remediation, could also be viewed as therapeutic services; this raises the possibility that clients of some SE providers may receive these services within the SE program (and the costs of the services are therefore included in the costs of the SE program) while clients of other providers may receive these services from personnel (and costs) not captured in the fiscal data for the program. In addition, some clients may not receive these services at all. For example, in the case of cognitive remediation, while this has been tested as an adjunct to “standard” SE services (e.g., Greig et al., 2005; Wexler and Bell, 2005), it is not part of the fidelity assessment for SE services based on the Individual Placement and Support (IPS) model. Of course, one other aspect of heterogeneity in SE services across providers is that fidelity to the IPS SE model itself in fact varies widely across SE providers, and this presumably has some influence on variations in the level of unit costs observed in the data.

Third, other dynamic considerations may argue for viewing average unit cost at a particular time as a somewhat incomplete indicator of relevant costs for SE services over a longer time horizon. These considerations include: (1) the possibility of start-up costs (including delays in initially reaching the projected “steady-state” client caseload); and (2) the learning-by-doing phenomenon. Both of these suggest that one may expect to see a decline in unit costs as SE providers gain more experience in providing services.

Fourth, we will address the issue of “offset” costs separately. While it has been suggested that the net impact of SE services on overall costs of mental health rehabilitation services is small when clients would otherwise have been receiving other vocational or psycho-social rehabilitation (PSR) services, we focus in the next sub-section of our review on the costs of SE per se without regard to any such “offset” costs.

Finally, while we will focus primarily on unit costs in this review, the reader should bear in mind that the assumption that economies or diseconomies of scale are unimportant in the provision of SE services has not been established empirically.

2. Results of Specific Studies of Program Costs

2.1 The HMC Maryland Study

A recent study by Health Management Consultants (HMC, 2006) provides detailed cost estimates based on FY 2005 and FY 2006 data from seven SE agencies in Maryland. Data were derived from agency records and reports supplied by the agencies to HMC. Results indicated that the annual cost per full-time ES fell in a narrow range ($47,824-$65,462) for all but one of the agencies; the outlier agency reported a much larger figure that included unusually high costs for rent and for public relations. On the basis of these results, HMC suggested a bottom-line figure of about $60,000 per year per ES. (Note that this is a figure intended to include fully allocated overhead costs; detailed information on the allocation process was not provided.)

Translating this result into an annual cost per client requires that we factor in the numbers of clients per ES in the various agencies. HMC reported an average ratio of 13.9 clients per ES and a range in this ratio from 9.6 to 22.3. This suggests an average cost per client of about $4,300 per year. If we weight the results by the numbers of clients served, the resulting per client average turns out to be about $5,000 because the two largest agencies, which account for about two-thirds of all the clients served by the seven agencies studied, reported per client costs of $6,138 and $4,815. (Of the seven agencies in this analysis, two did not provide any SE services that met the evidence-based practice (EBP) standard for IPS, while the remaining five provided EBP services in all or some of their sites. Excluding the two non-EBP agencies increased the average cost per client to about $5,100.)
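
As an illustration of the two averages just discussed, the sketch below uses the HMC figures of $60,000 per ES and 13.9 clients per ES; the per-agency caseload counts in the second calculation are invented solely to show how weighting by clients served, with the two largest (and costlier) agencies accounting for about two-thirds of clients, raises the mean.

```python
# Unweighted estimate: cost per ES divided by clients per ES (HMC figures).
cost_per_es = 60_000
clients_per_es = 13.9
print(round(cost_per_es / clients_per_es))  # ~4,317, i.e., roughly $4,300 per client

# Client-weighted average: per-client costs for the two largest agencies are from
# the HMC report; the remaining costs and all client counts are illustrative only.
per_client_cost = [6138, 4815, 3600, 3900, 4100, 4300, 4000]
clients =         [450,  350,    80,   70,   90,   75,   85]
weighted = sum(c * n for c, n in zip(per_client_cost, clients)) / sum(clients)
print(round(weighted))  # ~5,000 with these illustrative weights
```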

HMC also proposes an “efficiency” measure of average cost per working client. Since only 62.5% of all clients in their study were working, this measure is higher than the average cost per client. They report an average (across all seven agencies) of $6,987. (This was calculated as the sum across agencies of all SE costs divided by the sum of the number of working clients in each agency.)

Finally, HMC notes that average caseloads per ES could be increased substantially and argues that this would lower the per client cost; an increase from the study mean of 13.9 to the EBP toolkit recommendation of 20 would reduce per client costs by more than 40% if it is assumed that adding more clients did not entail additional costs for non-ES staff or other items. This is an extreme assumption, since HMC notes that some other types of costs that are sensitive to caseload, such as travel, are also important; but it still seems likely that a large reduction could be achieved with higher caseloads per ES. Implications for service effectiveness are not known.6

2.2 Latimer et al. (2004)

This study obtained program cost data from seven programs in seven different states that were rated as achieving fidelity scores of 70 or higher (out of 75) on an IPS fidelity rating. Though the programs were all rated as “high-fidelity”, they varied widely in full-year clients per full-time budgeted ES (from 6.9 to 34.9). (Since the ratio was based on budgeted positions, ratios based on actual full-time equivalent staff would presumably have been slightly higher because of temporary vacancies.) The seven agencies also varied in the client turnover ratio, calculated as the number of clients served in a year divided by the number of full-year clients served (from 1.4 to 3.1), and total cost per full-year client also varied (from $1,602 to $8,391) in 2001 dollars. Adjusting the latter figures for consumer price index (CPI) growth of 10.28% from 2001 to 2005 (annual averages) yields a range of figures from $1,767 to $9,254. Corresponding means, calculated as the ratio of the mean agency total cost to the mean agency number of clients, are $2,898 (in 2001 dollars) and $3,196 (in 2005 dollars).
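
The inflation adjustment used for these figures (and for the other dollar conversions in this review) is a simple multiplication by cumulative CPI growth; the sketch below reproduces the 2005-dollar figures from the 10.28% factor cited in the text.

```python
# Sketch of the CPI adjustment applied to the Latimer et al. (2004) figures.
# The 10.28% cumulative growth factor for 2001 -> 2005 is taken from the text.

def to_2005_dollars(amount, cumulative_cpi_growth=0.1028):
    """Inflate a dollar amount by cumulative CPI growth (here 2001 -> 2005)."""
    return amount * (1 + cumulative_cpi_growth)

for amount_2001 in (1602, 8391, 2898):
    print(round(to_2005_dollars(amount_2001)))  # -> 1767, 9254, 3196
```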

Several differences between these results and the HMC results for Maryland are noteworthy. First, there is the obvious difference that per client costs were found to be higher in Maryland. Second, the Latimer data imply strong economies of scale, with larger agencies having considerably lower costs per full-year client. As noted above, there was a five-fold range in cost per client and this corresponded to a seven-fold difference in caseload size, with the smallest agency having the highest costs.

2.3 Cimera (2008)

Another recent study of SE costs (Cimera, 2008) takes a different approach from the previous papers. Rather than examining accounting data from SE agencies, he studies payments by the state VR agency in Wisconsin to providers of VR services. This approach has the advantage of allowing cost comparisons between groups of clients defined by the nature of their mental disorders; on the other hand, the study is not able to document the specific nature of the services provided, or to determine the extent to which these services conform to the IPS model. It also does not provide evidence that the VR agency’s payments correspond to costs derived from accounting data.

Cimera reports the average annual cost to the Wisconsin VR program for persons with “psychotic mental illness” for FY 2002 through FY 2005: $3,628 for FY 2002, $3,653 for FY 2003, $2,529 for FY 2004, and $6,404 for FY 2005. It is interesting to note that the FY 2005 figure is about 30% higher than the corresponding HMC figure for Maryland. Cimera also reports analogous figures for two sub-groups of these clients: those with “significant” disabilities and those with the “most significant” disabilities. Averaging over all 4 years, the per client costs for clients in these two groups were $3,565 and $3,932 respectively.7, 8

3. SE Implementation Costs and Cost Offsets from Other Vocational Programs

The concept of cost offsets generally refers to the idea that implementation of an SE program may reduce the dollar amounts of resources devoted to other services, such as other vocational services, other PSR services, treatment services for mental health problems, and treatment services for somatic health problems. Such cost offsets may be of particular relevance for assessing cost-effectiveness of expanding SE services from a public sector perspective since the services whose costs are being offset are largely financed by governmental funds. (As discussed later in this review, however, these cost offsets are also relevant from the broader “societal” perspective as reflected in a “social” CEA.)

The potential nature and magnitude of cost offsets will depend critically on the nature of the specific SE implementation policy under evaluation. Several studies that have examined the costs of SE services have done so specifically in the context of substituting SE services for more “traditional” rehabilitation services (though the nature of these “traditional” services has varied from study to study).

Clark et al. (1996) compiled data from two day-treatment programs operated by a community mental health center (CMHC) in New Hampshire that were converted to IPS SE programs in the early 1990s, one in 1990 and one in 1992. Prior to conversion, the average annual costs per client of the day programs were $8,739 and $6,597; after conversion, these costs (including the SE program costs) dropped to less than $2,000 in both programs, with the IPS programs recording costs of $1,920 and $1,878 per client per year. The implication of these figures appears to be that the cost offset from closing the day-treatment services more than covered the cost of the IPS SE programs. Overall community treatment cost trends were not as clear, however, because: (1) both sites reported fairly large increases in case management costs; and (2) the timing of downward trends in other outpatient services costs was not clearly related to the timing of the conversions to IPS services. In addition, data on hospital use in one of the two sites appeared to have some validity problems, while in the other site cost per client of hospitalization had been increasing before the SE program start-up and continued to increase after the start-up. These complications led the authors to the more conservative conclusion that the costs of conversion to IPS services were fully offset; they did not venture firm conclusions about cost savings over and above this full offset of the IPS costs.

In a randomized trial comparison of IPS SE versus a group skills training (GST) VR program, Clark et al. (1998) report virtually no difference between the costs of these two programs. Average 18-month costs per client in 1992 dollars, adjusted to a 12-month basis, were $3,757 for IPS and $3,688 for GST. Adjusting these costs to 2005 dollars based on the CPI produces figures of $5,230 and $5,134. Since the adjustment by the all-items CPI (rather than a health sector CPI) seems conservative, the similarity between these figures and those reported in the HMC study (above) is striking. The similarity in costs between the two programs suggests that replacing a GST program with an IPS program would not increase costs. The evidence for a greater than 100% offset in vocational/rehabilitation costs (i.e., in shifting from the GST to the IPS SE program) is weaker than in the day-treatment conversion study discussed above, but Clark et al. (1998) argue that this reflects the fact that external pressures to reduce overall community mental health treatment costs were not as strong in the setting for the IPS versus GST study as in the setting for the conversion study.9 (Of course, as shown in the 1996 paper, the considerable expense of providing rehabilitation services within a day-treatment/partial hospitalization framework allowed for large cost offsets when the day-treatment programs were essentially terminated after the start-up of the IPS SE programs.)

Another randomized trial, involving inner-city residents with SMI (Drake et al., 1999; Dixon et al., 2002) compared an IPS SE program with an EVR service. Mean per client costs over an 18-month period for the two interventions were almost equal ($4,295 for IPS SE versus $4,438 for EVR in 1995 dollars; in 2005 dollars, annual cost of $3,669 for IPS SE versus $3,792 for EVR). This suggests that replacing an EVR program with an IPS SE program would fully offset the costs of the IPS SE program.

One other study conducted a randomized trial comparison of “accelerated entry” SE (as in the IPS model) to a more traditional “gradual entry” program that included 4 months of prevocational training (Bond et al., 1995). According to data on this study reported by Bond et al. (1995a), mean program costs for the two groups were as follows: $6,103 (= $4,667 for day-treatment + $1,436 for SE) for the traditional program versus $4,463 (= $1,443 for day-treatment + $3,020 for SE) for the accelerated entry program. While the apparent cost savings for the accelerated entry program were large, statistical significance of the cost differential was not reported.

From the perspective of a state or federal policy maker considering a program to expand SE services, the significance of the apparent implication of these findings (i.e., that incremental costs for substituting IPS SE for other vocational interventions are small) will of course depend on the extent to which expansion of IPS SE services constitutes a substitution for other vocational services that are currently offered. While definitive statistics are scarce, it does appear that there is substantial scope for substituting IPS SE services for other vocational interventions. For example, it is estimated that the number of persons with schizophrenia in the adult United States population is approximately 2 million, and that less than 25% of all persons with SMI receive any form of vocational assistance.10 This would imply that, as a rough approximation, 500,000 adults are receiving vocational assistance, but numbers receiving SE services have been estimated at about 10% of that figure.11 Thus, it seems reasonable to conjecture (as an order-of-magnitude estimate) that there is scope for a roughly ten-fold expansion of IPS SE services targeted specifically at persons with SMI currently served by other types of vocational services.
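
The arithmetic behind this order-of-magnitude conjecture can be laid out explicitly; the inputs below are the rough estimates cited in the text, not precise counts.

```python
# Back-of-the-envelope version of the expansion-scope argument above.
adults_with_schizophrenia = 2_000_000
vocational_assistance_share = 0.25          # "less than 25%" in the text
receiving_vocational_assistance = adults_with_schizophrenia * vocational_assistance_share
receiving_se = 0.10 * receiving_vocational_assistance   # SE estimated at ~10% of that figure

print(int(receiving_vocational_assistance))             # ~500,000
print(int(receiving_se))                                 # ~50,000
print(receiving_vocational_assistance / receiving_se)    # ~10-fold potential expansion
```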

4. Other Evidence of IPS SE Cost Offsets on Mental Health Treatment Costs

Even if the policy initiative under consideration involves extending SE services to a target population that is not now receiving any other forms of vocational services, for which SE services would be direct substitutes, there may still be cost offsets in terms of reduced costs for non-vocational services. In view of the level of expenditures by Medicare, Medicaid and state and local public mental health agencies on treatment of persons with SMI, these other cost offsets could potentially be far more important than savings from direct substitution of SE for other vocational services.

However, evidence for such cost offsets is very limited at present. In the early study on accelerated entry cited above (Bond et al., 1995), little difference in hospital admission rates or days of stay was observed between the accelerated and traditional groups (the accelerated group had about 10% greater use measured either in days or in admissions over a 12-month follow-up period). Latimer’s (2001) review of that earlier study cited a savings in day-treatment cost of $3,224 per client per year that more than offset a $1,616 excess cost of SE services in the accelerated entry group. He also reported that the accelerated group had $658 in savings per client for other treatment and rehabilitation services combined (drop-in center, outpatient services, medication clinic, clubhouse, psychiatrist, and substance abuse counseling). The combination of these cost differences is just under $2,300 in net savings for the accelerated program. The small difference in hospital use (noted above) suggests a reduction in this net savings figure when inpatient costs are included; also, Latimer does not report any significance test for overall net cost savings between the two groups.

The before-after evidence from the studies by Clark et al. (1996, 1998), however, is conflicting with respect to overall treatment cost offsets. As noted above, in the 1996 study of day-treatment conversions to SE, the evidence was at least suggestive of net cost savings from SE, primarily because of the very large reductions in day-treatment (partial hospitalization) costs from before to after conversion. Results reported in the 1998 paper, by contrast, showed large declines between the 18-month baseline and the 18-month follow-up periods in inpatient treatment costs for both the IPS SE and GST interventions (mean declines of $11,982 and $10,570 respectively). While it is possible that simply providing access to employment-oriented vocational services could produce these very large inpatient cost offsets, an alternative and perhaps more plausible explanation is regression to the mean due to clients selecting into these programs in response to a period of unusually high disability and dysfunction. The fact that these large declines were observed for both programs, and that the decline for IPS SE was modestly but not significantly larger, precludes us from attributing much of the decline to the greater effectiveness of IPS SE services. Before-after changes in mean outpatient treatment costs per client were far lower (an increase of $613 for IPS SE and a decrease of $355 for GST). Thus before-after changes for both programs suggested substantial net cost offsets (relative to not having such programs available), largely because the decline in inpatient treatment costs exceeded the increase in vocational program costs (about $5,500 per client), but attribution of most of the declines to the programs themselves seems debatable.

Several other day-treatment conversion studies from roughly the same time period also failed to show convincing evidence of substantial treatment cost savings attributable to IPS SE programs. Becker et al. (2001) reported declines in hospitalization following conversion of two day-treatment centers to SE services in Rhode Island, but also reported a decline for a comparison group that remained in day-treatment, with no significant differences in the declines between the two groups. Bailey et al. (1998) reported on a before-after study of 32 long-term day-treatment clients in a mental health center in New Hampshire who voluntarily switched to an IPS SE program and were followed for 1 year. In this case, the authors made before-after comparisons for the IPS clients and found no change in days of crisis housing, days of hospitalization, outpatient mental health service utilization, or service costs. They noted that their expectations of reduced outpatient mental health service costs and use were not borne out in part because: (1) in the before period the SE clients spent many hours in a sheltered workshop program that were not counted as service utilization hours; and (2) after switching to IPS some clients required intensive initial IPS services and those who were not working “participated extensively in skills training groups”.

In the RCT comparison of IPS SE versus EVR for inner-city clients with SMI, Drake et al. (1999) did observe modest declines from baseline to follow-up in inpatient days for both study groups, but neither the declines nor the difference in declines between the groups was statistically significant.12 Follow-up period (18-month) overall mental health treatment costs were not significantly different between the two groups (Dixon et al., 2002); inpatient costs for IPS clients were higher (but not significantly higher), by approximately $4,500, but baseline inpatient costs were also higher for IPS, and per client costs for outpatient mental health treatment were almost the same in the two groups. Baseline costs for other services were not reported.

Henry et al. (2004) report the results of a comparative analysis of hospitalizations and emergency room (ER) visits for IPS SE versus a comparison group of propensity-matched clients at a single site in Massachusetts. The period covered by the study was May 1995 through December 1999. For both outcome measures, the IPS SE group showed significantly lower rates of service use during the study period. They note, however, that the comparison group had significantly higher levels of impairment and significantly lower levels of mental health service use, suggesting that the propensity matching did not produce sufficiently comparable groups. To explore these differentials, the authors examined interactions between IPS SE and a grouping of high versus low levels of mental health service use; they reported slightly smaller but still significant main effects and significant interaction effects indicating that IPS SE clients with high mental health services use had the lowest rates of hospitalization and ER use, while IPS SE clients with low mental health services use still had hospitalization and ER use rates that were significantly lower than their comparison group counterparts. The authors interpreted their findings as suggesting that negative IPS SE impacts on hospital and ER use are greater when mental health services are integrated with employment support services. They note, however, that their results may in fact be the product of selection bias on observable factors (not included in their propensity analysis). Moreover, as a result of their transformation of their hospital and ER use outcome variables into rankings, it is not possible to ascertain the magnitude of the effects that they report. Given these limitations, it is difficult to view the results of this study as strongly supporting the hypothesis that IPS SE services produce substantial savings in the form of lower hospital and ER costs.

In summaries of research findings, literature reviews have uniformly failed to find support for the proposition that IPS SE leads to substantial treatment cost savings. Latimer (2001) observes that research findings “offer little hope for a significant reduction in other health care costs” (besides VR costs) “following the introduction of SE….”13 Bond (2004), in a review that focused mainly on employment outcomes, also noted that “…by itself, enrollment in SE has no systematic impact on non-vocational outcomes…such as rehospitalization.” Similar conclusions were expressed in reviews by Latimer (2005) and by Schneider (2003).

Perhaps the strongest direct evidence of an impact of IPS SE on reducing medical costs comes from a six-site European randomized trial (Burns et al., 2007) that randomly assigned 312 patients with SMI who were interested in competitive employment to either IPS SE or an alternative “train-and-place” vocational service program. Over an 18-month follow-up period, data on 289 patients were available and showed significantly lower probability of a hospital admission (20% versus 31%) and percent of time in hospital for patients assigned to IPS (4.6% versus 8.9%). The relatively high rate of hospital admissions for both groups may be an indication of stringent inclusion criteria (e.g., having psychosis and a major role dysfunction for at least 2 years) and/or differences in practice patterns (i.e., greater use of inpatient admissions) vis-à-vis the United States.

In an interesting extension of their analysis, Burns et al. (2009) also compared subjects, within each of their two study groups, who were currently working with those who were not to assess whether the pattern of within-group differences varied by study condition. While statistical tests were not reported, they did present evidence that the differences in non-vocational outcomes, including the probability of hospitalization, were consistently larger for the “train-and-place” vocational services group than for the IPS SE group. They interpret this finding as suggesting that “IPS was more successful in getting less well functioning and symptomatic patients into employment.”

5. What Does the Employment-Treatment Cost Relationship Tell Us About SE Impacts on Treatment Costs?

It has been suggested in the recent literature (Bush et al., 2009; Drake et al., 2009) that empirical relationships between employment and treatment costs can help to inform us about the impacts of IPS SE on treatment costs. The logic of the argument is that since there is clear evidence that IPS SE promotes employment, evidence of a strong negative relationship between employment and treatment costs is a good indicator of expected treatment cost savings that arise from more widespread implementation of IPS SE services for persons with SMI.

Bush et al. (2009) present data from a 10-year follow-up of 187 persons, with co-occurring substance use disorder and SMI, who participated in the New Hampshire Dual Diagnosis Study. Employment outcomes for these 187 persons were tracked and all study participants were divided into three groups based on their trajectory of hours in competitive employment (defined as “any paid position in the regular job market”) over this follow-up period. These three groups were: a steady work group (n = 51), a late work group (n = 57) and a no work group (n = 79). The steady work group had a level of work hours in the baseline year that appears to have been an order of magnitude greater than that of the other two groups, and the annual average of work hours for the steady work group increased very clearly over the first 5 follow-up years, with a leveling off and a slight downward trend over years 6-10 of the follow-up. Over the full 10-year follow-up period, the mean total hours of work for this group was 5,060 (or roughly 10 hours per week). Because the work levels for the other two groups were much lower, both in the baseline and in the follow-up, and because the authors did not find any significant differences in follow-up service use or costs between these two other groups, they were combined into a single “minimum work” group, whose mean total hours of work over the follow-up period was 411.14

Two service use measures were computed and compared for the steady work group versus the (combined) minimum work group: total outpatient service hours15 and total days in institutions (which included psychiatric hospitalizations and incarceration). An overall cost measure combining both of these measures of service use was also computed.

Baseline differences in annual means for these measures between the steady work and minimum work groups were fairly small: 178 versus 157 service hours, 26 versus 44 days in institutions, and $30,953 versus $37,898 for costs. In year 1 of the follow-up, outpatient service use hours for both groups increased substantially, but inpatient days dropped substantially. In subsequent years, outpatient service hours dropped substantially for the steady work group but remained fairly level for the minimum work group; for both groups, inpatient days did not show a clear downward trend from the follow-up year 1 levels (24 days for minimum work and 4 days for steady work), though costs for both groups did continue to decline. By follow-up year 10, costs had dropped to $9,732 and $17,949 for the steady work and minimum work groups respectively.

The authors tested for differences in the time pattern of utilization and cost impacts between the steady and minimum work groups by estimating a regression model, adjusted for baseline covariates, that allowed for group-specific differences in (1) baseline levels; (2) changes from baseline to first-year follow-up; and (3) rates of change from first-year follow-up to tenth-year follow-up. Results indicated no differences between the two groups in the outcome measures at baseline; no significant differences in changes from baseline to first-year follow-up, except a larger decline in costs for the steady work group that was marginally significant (p = 0.097); significantly greater downward trends from first-year to tenth-year follow-up in outpatient service hours and total cost for the steady work group; and a downward trend from first-year to tenth-year follow-up in a 0-1 hospitalization indicator that was more negative for the minimum work group but not significantly different between groups. The authors point to this pattern of results as indicating that reductions in service use and costs are preceded temporally by increases in employment, and that this supports the argument that their data show a causal relationship flowing from employment to reduced costs and service use.
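
For readers who want a concrete picture of this kind of specification, the sketch below fits a piecewise growth model of the general form described (group-specific baseline level, baseline-to-year-1 shift, and year-1-to-year-10 trend) with a random intercept per person. It is not the authors' actual model; the data are synthetic and all column names are invented.

```python
# Sketch of a piecewise growth model of the kind described above (not the
# authors' specification); synthetic data, invented column names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, years = 187, 11  # baseline (year 0) plus 10 follow-up years
df = pd.DataFrame({
    "subject_id": np.repeat(np.arange(n), years),
    "year": np.tile(np.arange(years), n),
    "steady_work": np.repeat(rng.integers(0, 2, n), years),
})
df["jump"] = (df["year"] >= 1).astype(int)      # baseline -> first-year shift
df["trend"] = (df["year"] - 1).clip(lower=0)    # linear trend over follow-up years 1-10
df["cost"] = 30_000 - 1_500 * df["trend"] * df["steady_work"] + rng.normal(0, 5_000, len(df))

# Group-by-phase interactions capture group-specific shifts and trends.
model = smf.mixedlm("cost ~ steady_work * (jump + trend)", df, groups=df["subject_id"])
print(model.fit().summary())
```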

The authors do acknowledge the potential problem of unobserved variables in a naturalistic study such as this, and specifically mention illness level, motivation, and response to treatment as potential confounders. While they go on to argue that other evidence relating to these specific confounders probably cannot account for the observed time pattern in their results, it is of course true that there are many other potential confounders that can induce a negative correlation between hours of work and service use or costs. A further problem in viewing these results as supporting the case for a nexus between IPS SE services and reduced costs is the fact that data indicating use or non-use of SE services were only collected for 19 of the 187 clients during the study period.16

Another very recent small-scale study (Schneider et al., 2009), based on experience in the United Kingdom, differed from Bush et al. (2009) in that the follow-up period was much shorter but all persons in the study were recipients of SE services. The paper examines baseline versus 12-month follow-up costs of services used for 142 clients, comparing 32 who were already working pre-baseline and remained in the same job, 32 who obtained work just prior to baseline or during the follow-up period, and 78 who remained unemployed throughout the follow-up period. For both the baseline and 12-month follow-up interviews, clients were asked to self-report service use over the prior 3 months. A broad range of health and social services were included in the service cost calculations, including mental health services (“appointments with a psychiatrist, psychologist, community psychiatric nurse, attendance at a day center, counseling or therapeutic group work,…inpatient mental health care”), primary care (“general medical practitioner, district nurse, community physiotherapist, dentist or optician”), local authority costs (“day centers run by social services, home care and social work inputs”), voluntary day center costs (“day care run by not-for-profit agencies which are independent of the public sector”) and other secondary National Health Services costs (“hospital outpatient appointment and inpatient care for needs other than mental health”). Employment support agency costs were also included and varied for each client with the number of their recorded contacts with the agency.

Among the three study groups, a clearly significant decline in service costs (excluding employment support service costs) was only observed for the persons who entered employment during the study (£40.00 versus £30.34 per week), with all of this decline due to reduced mental health services cost. An even larger and marginally significant decline (p = 0.067) over the study period in service costs was observed for the persons employed prior to and during the entire follow-up (£47.86 versus £28.31 per week). For the group that remained unemployed during the follow-up, service costs (excluding employment support services) were virtually unchanged from baseline to follow-up. In contrast, all groups had significant changes in employment service costs, with those not employed and those remaining employed experiencing large declines in these costs while those who moved into employment during the follow-up reported an increase. Summing both types of costs, increases in costs were only observed for those who moved into employment, while the other two groups experienced relatively large declines in costs.

Interpretation of these results in terms of employment impacts is unclear, given the correlational nature of the observed association. The decline in service costs for the longer-term employed persons and for the persons moving into employment might suggest a negative relationship, but a net cost reduction for the second group would only occur if post-follow-up employment service costs declined instead of increasing. Finally, since all study participants were in a SE program, inferences relating to the impact of SE versus alternative vocational service programs (or versus no vocational service at all) cannot be made.

Taken together, the evidence presented by these two studies, along with the recent study by Burns et al. (2008), supports the hypothesis of a negative correlation between working and mental health treatment costs. As noted above, it has been suggested that this evidence supports the thesis of a negative causal impact of employment on treatment costs. This has been viewed as evidence for the view that employment ameliorates, or at least does not exacerbate, mental health problems. In the context of the SE literature, this evidence, along with the strong evidence of a causal IPS SE impact on employment rates, has been interpreted as supporting the hypothesis that IPS SE provides substantial treatment cost savings. However, in the absence of studies that more effectively control for biases due to omitted variables, and given the impracticality of random assignment to employment, these causal interpretations of the “effect” of working on mental health treatment costs do not provide a persuasive basis for estimating treatment cost savings from IPS SE.17

6. Summary of Results Relating to Costs of Services and Service Cost Offsets

In order to draw conclusions about the annual unit costs of providing IPS SE services, we will focus on costs per client during the initial period (typically 1 year) in which the client receives services. We have briefly noted that fragmentary evidence supports the expectation that the cost of supporting each client will decline after the initial year of services, but it certainly does not fall to zero on a longer-term basis (perhaps up until the typical “retirement” age for most clients, or until clients drop out of services for other reasons). This is a relevant consideration for analyzing the implementation of expanded support for IPS SE services unless it can be demonstrated that the initial employment gains over the first 12 or 24 months of service can in fact be largely maintained over a longer time period with little additional support service. I am not aware of any evidence that supports this expectation at present. Moreover, the evidence concerning job turnover, combined with the expectation that clients who lose or leave jobs will generally require additional SE services to obtain and keep new jobs, argues against such an expectation. I therefore conjecture that a reasonable time pattern of SE service costs for each new client would involve a somewhat higher initial-year cost, a somewhat lower cost in subsequent years, and a further decline in costs due to attrition from SE services. I will include some illustrative calculations of this time pattern below.

As for the costs per client during the initial year of service, the papers by HMC (2006) and Latimer et al. (2004) appear to provide the most useful benchmarks, suggesting a unit cost figure (in 2005 dollars) in the range of $3,500-$5,000 per client. Both studies also present data suggesting the potential for relatively large reductions in unit costs if client-staff ratios can be maintained at levels that are above those generally observed now but that are still viewed as consistent with high-fidelity IPS service (e.g., about 20 clients per ES). For that reason, it may be sensible for policy planning purposes to focus on the lower cost figure of this range.

Several other studies cited above provide annual unit cost figures that are quite comparable. Cimera (2008) reports annual figures (in 2005 dollars) of $3,565 for clients with “significant” disabilities and $3,932 for those with the “most significant” disabilities. The figures reported by Clark et al. (1996) for the early 1990s of $1,920 and $1,878 per client per year are approximately $2,700 when converted to 2005 dollars. In the randomized trial studied by Drake et al. (1999) and by Dixon et al. (2002), the mean annual cost per client for an IPS SE program, in 2005 dollars, was $3,669. Note, however, that two of the studies cited above suggest figures somewhat higher than the $3,500 lower end of my suggested range. Clark et al. (1998) report an annual cost per client of $3,757 (in 1992 dollars) from a randomized IPS trial, which (based on the CPI) is equivalent to $5,249 in 2005 dollars. Also, the Bond et al. (1995) study reviewed by Latimer (2001) reported a per client figure of $3,020 for the early 1990s, which would amount to roughly $4,500 in 2005 dollars.

Viewing the figures just discussed as per client annual costs for the initial year of support services, it is useful to consider an order-of-magnitude “guesstimate” of the longer-term per client costs for SE programs. Based on the fragmentary evidence on the time patterns of services for individual clients noted above, I will provide an estimate based on the assumption that in each subsequent year of SE service, the cost per client is approximately one-half of the initial-year cost. Even less evidence is available on attrition patterns of SE clients, so I will use two alternative assumptions: one is that 10% of clients drop out per year (implying an average length of SE service of 5 years) and the second is that 5% of clients drop out per year (for an average length of SE service of 10 years).18 Assuming a $3,500 per client initial-year cost and a 3% annual real rate of discount, the foregoing assumptions imply a long-term cost of $10,581 per client with a 10% annual attrition rate and $17,139 per client with a 5% annual attrition rate. (These figures are present values in 2005 dollars.)
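
The two figures above can be reproduced under one reading of these assumptions: the initial-year cost is incurred in full and not discounted, attrition is linear (a fixed share of the original cohort leaves each year, so a 10% rate exhausts the cohort after 10 years, for an average tenure of roughly 5 years), and each remaining client costs half the initial-year amount in every later year. The following is a minimal sketch of that calculation.

```python
# Sketch reproducing the long-term per-client cost figures above, assuming an
# undiscounted initial-year cost, linear attrition of the original cohort, and
# later years costing half of the initial-year amount per remaining client.

def long_term_cost_per_client(initial_cost=3500.0, annual_attrition=0.10,
                              discount_rate=0.03, later_year_factor=0.5):
    pv = initial_cost                       # year 0: full cohort, full cost
    t = 1
    remaining = 1.0 - annual_attrition      # share of the cohort still in SE in year 1
    while remaining > 1e-9:
        pv += later_year_factor * initial_cost * remaining / (1 + discount_rate) ** t
        t += 1
        remaining = 1.0 - annual_attrition * t
    return pv

print(round(long_term_cost_per_client(annual_attrition=0.10)))  # -> 10581
print(round(long_term_cost_per_client(annual_attrition=0.05)))  # -> 17139
```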

It should be emphasized that all of the cost figures cited in this section thus far represent implementation costs for an IPS SE program versus no program at all, that is, with no cost offsets. As noted above, the evidence from the “conversion” studies where individuals were moved from “traditional” rehabilitation programs, including day programs and “train-and-place” vocational programs, to IPS SE programs is consistent with the argument that IPS SE is not substantially more expensive and indeed may be less expensive than these “traditional” programs. This suggests that the incremental implementation costs of a policy of expanding IPS SE programs by replacing “traditional” programs will in fact be quite small. However, in the absence of specific policy proposals and information on the target populations it is not possible to gauge the extent to which an offset of these “traditional” programs would occur. Moreover, the fraction of patients with SMI who are in fact regularly served by these programs may be fairly small, particularly for the most costly day programs.19

Finally, as noted above, there is very limited hard evidence, at least at present, for the argument that the implementation costs of expanded SE programs will be offset by other treatment cost savings. However, the case for other types of social cost offsets of expanded SE services, relating to reductions in net consumption and increases in earnings, may well be somewhat stronger. We turn in the next sections of this report to an examination of the evidence for these other social cost offsets.

7. The Impact of IPS SE on Net Consumption by IPS SE Clients: Conceptual Issues

As noted above in our introduction, from a societal CEA perspective, the costs of IPS SE programs are equivalent to the impact of the programs on the net consumption of IPS SE clients, where net consumption equals consumption minus earnings, and where consumption is defined broadly to include all goods and services consumed by these clients. In the preceding sections on SE costs, we have examined two components of consumption impacts, namely, the costs of SE services and the cost impacts of SE on use of other mental health treatment and rehabilitation services.

A number of other components of SE clients’ consumption are difficult to measure and typically not reckoned in CEA studies of SE programs. These include “maintenance” items of private consumption, such as rent, food costs, clothing costs, etc. In principle, consumption of services provided through public or charitable organizations (rather than purchased directly by the clients) should also be included in total consumption figures; these may, however, be very difficult to capture. For example, it has often been noted that persons with SMI have higher-than-average numbers of contacts with law enforcement. Measuring and costing these contacts is, however, very challenging, and impacts of IPS SE programs on the costs of these contacts have not been measured in the literature. Impacts on other publicly funded services consumed by clients, such as Medicaid-funded somatic health care services, may be easier to measure or estimate.

Let us define clients’ consumption, excluding the SE program service costs, the mental health treatment costs, and the costs of all other publicly funded services (Medicaid, police services, etc.), as their private consumption. Basically, this consists of goods and services purchased by clients directly with funds that they have access to (from their earnings, or from transfer income or grants from public or private sources). Let us also define private net consumption as their private consumption minus their earnings. Thus, the problem of computing a societal cost figure, given that we can capture costs for most SE, health treatment, mental health treatment and (perhaps) other public services through other data, is the problem of measuring SE program impact on clients’ private net consumption.

If clients’ private consumption could be easily observed, we could obtain the SE program impact on private net consumption simply by measuring impacts on private consumption and on earnings. As already noted, however, private consumption will generally be quite difficult to measure (and in fact has not been measured in SE evaluation studies). Alternatively, we could make the assumption that the SE program has no (or hardly any) impact on private consumption, in which case the impact on net consumption is simply the negative of the impact on earnings.

Another alternative, which appears to be feasible, is to treat the SE program impact on total client income as the impact on client private consumption (based on the assumption that SE impact on client saving will be negligible).20 To implement this approach, we need only assess SE program impacts on all the important components of client income: earnings, public transfer payments, private transfer payments (e.g., from family members), and other non-earned income (e.g., interest on financial assets). Thus, our measure of the SE program impact on net-private consumption is just the impact on total private income (which includes any earnings impact) minus the impact on earnings.
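
Because total income equals earnings plus non-earned income, under the no-savings assumption this approach reduces to a simple identity: the program impact on net-private consumption equals the program impact on non-earned income (public transfers, private transfers, and other non-earned income). The sketch below illustrates the identity with invented figures.

```python
# Sketch of the identity described above, assuming SE has no impact on client
# saving (so the total income impact equals the private consumption impact).
# All dollar figures are invented for illustration.

earnings_impact = 2000            # increase in annual earnings due to SE
transfer_impact = -800            # reduction in public/private transfer payments
other_nonearned_impact = 0        # e.g., interest income, assumed unchanged

total_income_impact = earnings_impact + transfer_impact + other_nonearned_impact
net_private_consumption_impact = total_income_impact - earnings_impact
print(net_private_consumption_impact)              # -> -800
print(transfer_impact + other_nonearned_impact)    # same quantity: -800
```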

Note that it is not clear a priori whether the direction of impact of an SE program on private net consumption of clients will be positive or negative. Increases in earnings could be expected to reduce other private non-earned income, as public or private transfer payments to clients are reduced, and thereby reduce private net consumption. Conversely, increased earnings may increase consumption (including consumption of goods or services needed for work) and such increases in consumption will reduce any negative SE impact on net consumption.

Note also that from a societal CEA perspective, the distinction between earnings in “competitive” jobs and earnings in jobs that are not competitive (i.e., “sheltered”, “set-aside”, “transitional”) is potentially important. The economic rationale for looking at private consumption net of earnings as a component of societal cost is that the dollar amount of earnings is offset by the dollar value of goods produced by the worker (if the worker is in fact paid her/his marginal product). There may be good reason to believe that in many non-competitive jobs earnings exceed marginal product, which would imply that increases in earnings due to an SE program do not fully translate into reductions in net private consumption (holding gross private consumption constant). On the other hand, there may be circumstances in which non-competitive wages received by workers (e.g., those in below-minimum-wage sheltered or set-aside jobs) are actually less than their marginal product, in which case increases in such earnings due to an SE program actually understate the corresponding reductions in net private consumption.21
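A stylized numerical illustration of this sign logic may be helpful; all figures below are hypothetical and are not taken from any study.

```python
# Hypothetical illustration: the true offset to social cost is the value of the
# worker's output (marginal product), not earnings per se.
def offset_overstatement(earnings, marginal_product):
    # Positive values mean that using earnings overstates the reduction in
    # net private consumption; negative values mean it understates it.
    return earnings - marginal_product

print(offset_overstatement(earnings=800, marginal_product=500))  # sheltered job paid above product: 300
print(offset_overstatement(earnings=300, marginal_product=450))  # sub-minimum-wage job paid below product: -150
```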

Two other minor qualifications to our proposed approach should also be noted. First, it seems reasonable to measure both earnings impacts and private consumption impacts on an after-tax basis; if clients’ consumption of public services is already captured via other measures (such as impacts on Medicaid costs), increases in taxes paid (whether income, payroll, or sales taxes) can be viewed as portions of private income that are not in fact consumed by the client and therefore should not be counted as consumption.22 In addition, it has been argued (in the context of cost-benefit analysis) that increases in earnings also correspond to additional increases in the value of the worker’s marginal output because of the presence of sales taxes (Bailey, 1980). In the present review, we will generally disregard these qualifications as presumably negligible in magnitude, though we note at the outset that this probably introduces a small positive bias into our conceptualization of the impact of SE programs on net private consumption.23

In the next two sections of this review we first report on available findings from the literature on SE impacts on earnings and then on SE impacts on non-earned private income.

8. IPS SE Impacts on Clients’ Earnings

While the most commonly used measures of SE program impact on clients’ success in the labor market are employment rates and weeks and hours of work, a number of studies have also reported estimates of impact on earnings. Others have not reported earnings, but have reported SE impacts on wage rates and hours and/or weeks of work.

8.1 Studies Reporting Earnings Impacts

In the “accelerated entry” randomized trial by Bond et al. (1995, 1995a) noted above, accelerated participants averaged more than twice as much in employment earnings as gradual participants ($1,525 versus $574) over a 12-month period. Several other earlier studies reviewed by Latimer (2001) also reported increases in earnings following patients’ entry into IPS, but one of these studies (Rogers et al., 1995) was a very small before-after study with no comparison group and did not report any significance test for the observed earnings increase; the second study (Bailey et al., 1998) reported a statistically significant increase in earnings for long-term day-treatment clients transferring to an SE program but did not report the amount of the increase.

In their quasi-experimental comparison of two CMHCs in Rhode Island that converted from day treatment to IPS SE with a third CMHC that did not convert, Becker et al. (2001) presented results separately for the 77 study participants who had no competitive work experience in the prior 5 years and the 37 study participants who did have such experience. For both groups, differences across the three CMHCs in average earnings over the 2-year follow-up period were substantial. For the first group (no work history), the two CMHCs that had converted reported mean earnings of $518 versus only $61 for the CMHC that did not convert; for the second, much smaller group (with some work history), the two conversion CMHCs reported mean earnings of $3,675 and $1,553 versus a mean of $1,228 for the third (non-conversion) CMHC. Variances around these means were large, however, so the differences were not statistically significant at the 5% level.

Results reported by Clark et al. (1998a) from the randomized comparison of IPS SE and GST, along with the results of baseline versus follow-up comparisons for both groups, provide strong evidence of a positive, significant, and substantial IPS SE effect on earnings. In the 18-month follow-up period, mean earnings per person were $3,185 for IPS versus $1,800 for GST. Corresponding figures on changes in earnings from baseline were +$854 for IPS versus -$139 for GST.

Lehman et al. (2002) report on a randomized comparison of IPS SE with a “comprehensive PSR program, only a component of which was a vocational service.” Only a third of the non-IPS control group received any vocational services at all during the intervention, with services consisting mostly of skills training and vocational support groups. Average wages earned per month by the IPS subjects rose very rapidly in the first 4 months of the study and generally remained in the range of $40-$50 per month through month 18, before dropping to the $35-$40 range for the final 6 months of the study. Control group average earnings remained at $10 per month or below for the first 8 months of the study and fluctuated monthly in the $10-$30 range for the rest of the study. It is noteworthy that during the last 5 months of the study the IPS average earnings figure was only about 1.5-2 times that of the control group, while for most of the previous months the earnings differential was much larger (in both absolute and relative terms). The authors suggest that the very low rates of employment and earnings for the control group (compared to experience in other IPS studies) may have been due to the lack of emphasis on vocational services in the control group’s rehabilitation program and to the high rate of co-occurring substance use diagnoses (50% in the past year at baseline) in both study groups.

Mueser et al. (2004) carried out a randomized trial “to compare the IPS model to a PSR program using transitional employment, and to a standard vocational service involving an array of vocational programs in a group of inner-city clients with mainly African-American or Latino backgrounds.” A total of 204 clients were randomized to one of the three study conditions. Results over the 24-month follow-up period provided strong evidence of positive earnings effects for IPS. Average per-client earnings over the follow-up period were $2,095 for the IPS group, $1,124 for the standard vocational service group, and $721 for the PSR group. Corresponding average earnings in competitive jobs were $2,078, $616, and $239; the larger differences across the groups reflect the much stronger emphasis on competitive work in the IPS approach.

Earnings results from the inner-city randomized trial of an IPS SE program versus an EVR service (Drake et al., 1999) were reported separately for three different types of jobs and for all jobs combined. Average earnings per person from competitive jobs over the 18-month follow-up period were $1,875 for IPS versus $154 for EVR. Corresponding averages for non-competitive jobs were: (1) $43 (IPS) versus $1,335 (EVR) for sheltered jobs; and (2) $81 (IPS) versus $516 (EVR) for National Industries for the Severely Handicapped (NISH) jobs. Averages for total earnings were $2,000 (IPS) versus $2,005 (EVR).

Similar findings on earnings emerged from a more recent study (Bond et al., 2007) based on a randomized trial of IPS versus a “diversified placement approach (DPA), which emphasizes work readiness and offers a range of vocational options, including agency-run businesses and agency-contracted placements with community employers.” The trial recruited 194 subjects over the period August 1999-March 2002, randomized these subjects, and successfully followed 187 of them over a 24-month follow-up period. Average competitive earnings over this 24-month follow-up were $5,034 for IPS versus $2,675 for the DPA group; corresponding averages for earnings from all employment were $5,199 (IPS) versus $5,244 (DPA).

Looking across the various studies, several conclusions emerge. First, IPS consistently shows superiority in increasing earnings from competitive employment. Second, differences between IPS and comparison groups in total earnings depend critically on the type of vocational program provided to the comparison group: when the comparison group is provided a vocational program with a strong employment focus and vigorous outreach (as in the EVR and DPA comparisons noted above), clear differences in total earnings outcomes in favor of IPS are not observed; but when the comparison group is a “traditional” day program or PSR program without a strong employment emphasis, or a program that focuses primarily on skills training with job finding and placement as a secondary activity, positive IPS effects are observed for both competitive and total earnings. Third, in view of the differences between IPS effects on competitive earnings and IPS effects on total earnings, the implications of these effects for private net consumption (in a societal CEA context) may depend on the relationship between non-competitive earnings and marginal product noted above.

Finally, it should be noted that the dollar magnitudes of the earnings differentials reported above could not, in most cases, be translated into 2005 purchasing power because the earnings figures covered a range of different dates and adjustments for inflation were not reported in most of the studies. However, it is relevant to note that upward adjustment to 2005 dollars would involve inflation factors (based on the CPI) on the order of 28% (for 1995 to 2005) to 13% (for 2000 to 2005). Also note that the length of the periods over which earnings differentials were reported ranged from 12 months to 24 months.
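As a purely illustrative sketch of such an adjustment (the earnings differential and study year below are hypothetical; only the approximate CPI factors come from the text):

```python
# Approximate cumulative CPI factors cited above for converting to 2005 dollars.
CPI_FACTOR_TO_2005 = {1995: 1.28, 2000: 1.13}

def to_2005_dollars(amount, study_year):
    return amount * CPI_FACTOR_TO_2005[study_year]

# e.g., a hypothetical $1,000 earnings differential reported in 1995 dollars
print(round(to_2005_dollars(1000, 1995)))  # -> 1280
```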

8.2 Projecting Earnings Impacts from Impacts on Hours and Weeks Worked

An alternative approach to estimating IPS SE impacts on earnings is to estimate IPS impacts on hours of work and multiply this impact by an appropriate hourly wage figure.24 As in the case of earnings, only a minority of published studies have reported SE impacts on either annual hours of work, annual weeks worked, or average hours per week worked by those with jobs.
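A minimal sketch of this projection, with purely illustrative numbers (neither the hours differential nor the wage rate is taken from any particular study):

```python
def projected_earnings_impact(d_annual_hours, hourly_wage):
    # earnings impact ~= impact on annual hours worked x an assumed hourly wage
    return d_annual_hours * hourly_wage

# e.g., an IPS-versus-control difference of 150 annual hours at a $7.00 wage
print(projected_earnings_impact(150, 7.00))  # -> 1050.0
```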

Looking specifically at the 11 published RCTs of high-fidelity IPS SE reviewed by Bond et al. (2008), four of these studies reported data on annual hours of work. Bond et al. (2007) reported mean annual hours for the IPS group that were 64.5 hours less than for the comparison (DPA) group, but when the comparison was restricted to hours worked in competitive jobs, the IPS group averaged 155.8 hours per year more than the DPA group. Drake et al. (1999) reported mean annual hours in competitive jobs for the IPS group that were 196.3 hours more than for the comparison (EVR) group, but did not report total hours, including non-competitive jobs, for either group. Mueser et al. (2004) reported that the IPS group averaged 117.25 hours per year more than the group assigned to PSR and 70 hours per year more than the control group with “standard services.” Looking specifically at hours of work in competitive jobs, the mean additional annual hours for IPS were 166.3 (versus PSR) and 134.9 (versus standard services). In the New Hampshire RCT of IPS versus GST (Drake et al., 1996), average total annual hours worked were 399.2 for IPS versus 132.8 for GST; it appears that the differential between IPS and GST for competitive jobs is even larger, but it cannot be computed directly from the figures in the article.

Seven of the RCTs in the Bond et al. (2008) review reported information on IPS SE impacts on annual weeks worked. Bond et al. (2008) cite Drake et al. (1999) as reporting a mean of 10.1 annual weeks in competitive jobs for the IPS group versus 0.8 weeks for the comparison (EVR) group; they also cite Lehman et al. (2002) as reporting 6.0 mean weeks worked in competitive jobs for IPS versus 1.6 weeks for the control group. Neither Bond et al. (2008) nor the original studies report total weeks, including non-competitive jobs, for either the IPS or the control groups. Mueser et al. (2004) reported that the IPS group averaged 15.09 weeks worked per year versus 5.7 weeks for the group assigned to PSR and 9.52 weeks per year for the control group with “standard services.”

Looking specifically at weeks of work in competitive jobs, the mean annual weeks were 14.86, 1.69, and 9.52 for IPS, PSR, and standard services, respectively. Bond et al. (2007) reported mean annual weeks worked of 17.31 for the IPS group versus 21.94 for the comparison (DPA) group; when the comparison was restricted to weeks worked in competitive jobs, the IPS group averaged 16.15 weeks versus 8.17 weeks for the DPA group. The three remaining studies for which Bond et al. (2008) reported comparisons of competitive weeks worked were two international studies (Latimer et al., 2006; Wong et al., 2008) and one study that involved a combined ACT-IPS intervention (Gold et al., 2006).

8.3 Long-Term Trajectories of Earnings Impacts

Evidence on long-term earnings trajectories is obviously very limited, largely because of the expense of conducting long-term follow-up evaluations and (in some cases) the scaling back or elimination of funding for IPS SE services.

Salyers et al. (2004), reporting on the long-term follow-up experience of 36 clients from two SE programs in New Hampshire, observed that after 10 years 17 of the 36 clients were currently employed, working a mean of 13.7 hours per week at a mean hourly wage of $6.55.25 This suggests mean weekly earnings of approximately $90 for those who were currently employed, and a mean of approximately $43 per week for the entire group.26 (For the 33 clients with any reported work over the 10-year follow-up, the mean weekly hours and hourly wages at the most recent job they held were similar to those reported for the currently employed.)
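These weekly figures can be reproduced with simple arithmetic; the sketch below is a back-of-the-envelope calculation using only the numbers just cited, and small discrepancies reflect rounding.

```python
hours_per_week, hourly_wage = 13.7, 6.55  # means for the 17 currently employed clients
employed, total_clients = 17, 36

weekly_if_employed = hours_per_week * hourly_wage                    # ~ 89.7, i.e., roughly $90
weekly_whole_group = weekly_if_employed * employed / total_clients   # ~ 42.4, i.e., roughly $42-$43

print(round(weekly_if_employed), round(weekly_whole_group))  # -> 90 42
```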

While comparable figures for the entire 10-year follow-up period were not reported, the authors indicate that only five of the 36 clients did not work at all during the entire follow-up, and that 12 were “consistently employed for at least 5 years”. Finally, an important qualification to all of these figures is that they come from a before-after study with no control group comparison; thus they almost certainly overstate the long-term impact of SE services relative to any relevant control or comparison group.

Results from a much shorter follow-up period were reported by McHugo et al. (1998). Clients from the 18-month randomized trial of IPS versus GST (Clark et al., 1998) were re-interviewed to obtain data on their hours worked over the 24 months that followed completion of the trial study period. Mean hours of work over the 24-month follow-up period were significantly larger for the IPS group (815.4 hours versus 436.2 hours), and mean total wages over the same period were also significantly larger for the IPS group ($5,407.19 versus $2,624.79). Comparing hours of work in the 24-month follow-up to the initial 18-month study period revealed no significant changes over time within either group. Thus, it appears that the IPS impact on earnings (relative to the GST control group) did not decay over the 24-month follow-up.27, 28

9. IPS SE Impacts on Non-Earned Income/Total Income

Reported impacts on non-earned income and total income are very scarce in the current literature. The only published study to date that reports them is the comparison of IPS SE and a GST alternative (Clark et al., 1998). The authors reported significant 18-month differences favoring IPS in earnings ($3,185 versus $1,800) and in taxes paid ($706 versus $415).29 Non-significant differences indicated higher total income for IPS ($15,552 versus $14,276) and slightly lower government benefit payments for IPS ($9,992 versus $10,368). Since the magnitudes of the earnings differential and the total income differential were similar, the study results support a conclusion of approximately zero IPS SE impact on social cost arising from differences in private consumption net of earnings between the two study groups.

A different conclusion emerges in this study, however, from the comparison of IPS versus GST changes from baseline in income and earnings. The change in income was $788 greater for the GST group, and the change in earnings was $992 greater for the IPS group, so the change in social costs for private consumption was $1,780 greater for the GST group.30
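Working through this arithmetic makes the sign convention clear: the change in the social-cost component (private consumption net of earnings) equals the change in income minus the change in earnings. The figures below are those reported in the comparison just described.

```python
d_income_gst_minus_ips = 788      # change in income, GST relative to IPS
d_earnings_gst_minus_ips = -992   # change in earnings, GST relative to IPS

d_net_consumption_gst_minus_ips = d_income_gst_minus_ips - d_earnings_gst_minus_ips
print(d_net_consumption_gst_minus_ips)  # -> 1780, i.e., $1,780 greater for GST
```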

II. EVIDENCE ON EFFECTIVENESS OF IPS SERVICES

Multiple dimensions of IPS effectiveness have been mentioned in the SE evaluation literature. The most prominent dimensions, of course, are employment-related (measured by employment rates, hours of work, job tenure, job quality, and other measures of job satisfaction); however, non-vocational dimensions such as symptom reductions, health or mental health status, and perceived quality of life have also been examined. While measurement of IPS impacts in these different effectiveness dimensions is useful for carrying out cost-effectiveness comparisons between IPS and other employment-focused interventions, in principle a social CEA of IPS services should be able to reduce these diverse quantitative dimensions of effectiveness to a single preference-weighted measure that can be used as the denominator of a CE ratio for comparison with other mental health interventions using the same effectiveness measure. Unfortunately, a widely accepted measure of this type does not now exist.31 Moreover, use of broader health-state-preference based effectiveness measures (e.g., preference-weighted changes in the probability of health states which are sometimes labeled as "quality adjusted life years" [“QALYs”]) would be required for cost-effectiveness comparisons with interventions directed primarily at somatic health problems. There does not appear to be any SE literature that has applied any of these broader measures of effectiveness, with the result that at this time we lack a basis for CEA comparisons of IPS interventions with other non-mental health interventions in the health sector.32
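One common way to write such a ratio, using the cost components enumerated earlier in this review, is sketched below; the notation is ours, not drawn from the SE literature, and the denominator presumes a preference-weighted effectiveness measure of the kind that does not yet exist for these programs.

```latex
\[
\text{Incremental CE ratio} \;=\;
\frac{\Delta C_{\text{SE}} \;+\; \Delta C_{\text{treatment}} \;+\; \Delta C^{\text{net}}_{\text{priv}}}
     {\Delta E}
\]
% \Delta C_{\text{SE}}: SE program service costs;
% \Delta C_{\text{treatment}}: mental health and other treatment cost offsets;
% \Delta C^{\text{net}}_{\text{priv}}: impact on private consumption net of earnings;
% \Delta E: preference-weighted effectiveness gain (e.g., QALYs).
```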

In this review, I will focus on four general topics relating to IPS effectiveness. First, relying mainly on the available recent literature reviews, I will briefly summarize the evidence on employment-related dimensions of effectiveness. (Much of this evidence relates to the evidence on earnings discussed in the previous section of this paper.) Second, I will review the much more limited findings relating to non-vocational dimensions of effectiveness. Third, I will review a small number of recent studies that have tested enhancements to the current “standard” IPS model. Fourth, I will briefly review the literature that has searched for variations in IPS effectiveness related to individual client characteristics. This last topic is of interest for reaching conclusions about the groups of clients who would benefit most from broader implementation of IPS programs.

1. Employment-Related Dimensions of IPS Impacts

1.1 Comparisons of Employment Rates

There is a large volume of compelling evidence from randomized controlled trials demonstrating that IPS SE programs achieve higher rates of competitive employment than control groups receiving more traditional rehabilitation services, and that they generally achieve higher rates of employment overall. Summaries of these results are provided in a number of literature reviews published within the past 15 years (Bond, Drake, Mueser et al., 1997; Drake, Becker, Clarke and Mueser, 1999; Bond et al., 2001; Latimer, 2001; Twamley, Jeste and Lehman, 2003; Schneider, 2003; Bond, 2004; and Bond, Drake and Becker, 2008). The most frequently used competitive employment outcome measure is the rate (fraction, percent) of persons who are competitively employed at a particular point in time (most commonly the final follow-up) or the fraction (percent) with any competitive employment over the study period. Study follow-up periods typically range from 12 months to 24 months.

The recent review by Bond et al. (2008) compiles results from seven RCTs in the United States and four from outside the United States reported in the literature over the period 1996-2008. For the seven United States studies, the mean rate of competitive employment at any point in the RCT was 62% for the IPS groups (68% if one apparent outlier study is excluded); the mean for the control groups was 24% (27% with the apparent outlier excluded). For the four non-United States studies, the IPS mean was 59% and the control mean was 21%. The consistency of these results across a variety of populations, locations, and control conditions is clear. It is particularly interesting that two of the RCTs focused on populations that might be expected to be less responsive to the IPS intervention.

Twamley et al. (2008) studied 50 adults age 45 and older (mean age of 50.5 years), a group for whom human capital theory would suggest that “investment” of time and effort in obtaining employment would have a lower rate of return because of a shorter expected payoff period. This disincentive may, however, have been mitigated by the study inclusion criterion of a desire to work.33 The authors also noted the greater obstacles to work due to higher comorbidity rates in this population. Nevertheless, over a relatively short study period (12 months) competitive employment rates of 57% were achieved in the IPS arm of the study versus 27% for the controls.

Lehman et al. (2002) randomized an inner-city group of subjects with a high rate of lifetime substance abuse diagnoses (75%) and a very high rate of SSDI or SSI receipt (89%). Not surprisingly, low rates of competitive employment were observed for both study groups over the 24-month study period, though the differential between IPS (27%) and controls (7%) was still significant.

Bond et al. (2008) also compare the findings of their review, relating to IPS versus control differences in competitive employment rates, to the findings reported in five other recent reviews (which included studies of other SE programs, some of them not “high-fidelity”). They observe that the findings were generally similar, with IPS competitive employment rates at least twice as high as those for clients in the control conditions.

While the results reviewed by Bond et al. (2008) suggest substantial consistency across studies, it is also interesting to note that there were considerable differences in the control conditions of the studies they reviewed. In most of the United States studies, the control conditions were vocational programs, but in Lehman et al. (2002) and in one of the two control groups in Mueser et al. (2004), the control condition was a standard PSR program that did not have an exclusive focus on vocational outcomes. Not surprisingly, these two studies reported low rates of competitive employment for their control groups (7% and 18%, respectively) relative to the control conditions in other United States studies.

In contrast, several of the other control conditions had a very strong focus on employment, including non-competitive employment. In Drake et al. (1999) the control conditions were “EVR programs” that “used stepwise approaches that involved prevocational experiences, primarily paid work adjustment training in a sheltered workshop”, even though they also “endorsed competitive employment as their goal”. In Bond et al. (2007), the control condition was a “DPA, which emphasizes work readiness and offers a range of vocational options, including agency-run businesses and agency-contracted placements with community employers.” Accordingly, results for the control groups in these studies tended to show high overall rates of employment even though their rates of competitive employment were not unusually high. In the Drake et al. (1999) study, employment rates for IPS versus control were reported for three categories of jobs: competitive (61% versus 9%), sheltered (11% versus 71%), and NISH set-aside jobs (3% versus 9%). In the Bond et al. (2007) study, employment rates for IPS versus DPA were as follows: competitive -- 75.0% versus 33.7%; agency-run business -- 0% versus 25.3%; and other paid non-competitive jobs -- 5.4% versus 15.8%. For these two studies, overall employment rates were about the same for IPS and control. In several other studies reviewed by Bond et al. (2008) that reported relevant data, non-competitive jobs were an important outcome. In Lehman et al. (2002), for both the IPS and control groups, about 45% of all jobs reported were non-competitive. The same was true for the two control conditions in Mueser et al. (2004), though in that study the IPS group did not report any non-competitive jobs. (Of course, in both these studies the overall employment rates were still considerably higher for the IPS group than for the controls.)

Because the length of follow-up for these studies is two years or less, evidence on longer-term IPS impacts on employment rates is not available from any IPS RCT studies, or from any quasi-experimental studies of IPS impacts. The only data of some relevance appear to come from two recent descriptive studies that are limited to persons who received IPS services.

McGurk et al. (2006) present some interesting 4-year follow-up results from a purely descriptive study of a small number (30) of IPS SE clients in an ongoing program. They report that “(o)f the 14 people competitively employed with supports during the first 2 years in SE, nine continued to work with supports during years 3-4, four became unemployed, and one continued to work without supports.” They also found that “(o)f the 15 clients who did not work during the first 2 years in SE, three left the SE program (and remained unemployed), 11 stayed in the program but remained unemployed during years 3-4, and one worked sporadically without supports.” In short, for this group of clients in a program that continued to function over a 4-year period, the employment outcomes observed over the first 2 years were mostly maintained, with some diminution, in years 3 and 4. Of course, in the absence of data on a control or comparison group, we cannot translate these observed outcomes into evidence about longer-term IPS SE impacts.

Similar descriptive findings were presented for a longer follow-up period (8-12 years) by Becker et al. (2007) for clients who had access to IPS services in ongoing operation over the longer-term follow-up period. Of the 78 IPS SE clients in the two original studies, 38 were followed up in re-interviews. At the time of the original 18-month follow-up interview, 14 of the 38 clients were receiving IPS services and 11 of these were competitively employed. Of the remaining 24 clients, 20 were receiving no employment services and four were receiving non-IPS services; employment rates for these 24 clients were not reported. During the longer-term follow-up period, all clients reported some work, and 27 reported working in more than half of all the months of the longer-term follow-up period. At the time of the follow-up re-interview, 27 of the 38 were working: 18 in competitive jobs, four in set-aside jobs, two in sheltered work, and three in volunteer positions. Of the 27 currently working, four were working more than 20 hours per week.34 As in the McGurk et al. (2006) study, comparable data for a comparison group of non-IPS participants (during the initial follow-up period) were not collected or reported, so inferences about impacts of IPS services cannot be drawn.

1.2 A Comment on Comparisons of Competitive With Non-competitive Outcomes

The preceding discussion raises the question of the extent to which the focus of employment outcome measurement should be on competitive jobs versus all types of jobs (including non-competitive jobs). We note that strong and persuasive arguments have been made for focusing on competitive jobs, based on commonly accepted views about the nature of the “recovery” process and the desirability of integration of persons with severe mental illness into the broader community. It may, however, be argued that there are also relevant differences between outcomes of no employment versus outcomes of non-competitive employment, and that some positive (albeit lesser) weight in overall effectiveness should be accorded to the latter outcome, even though this may risk diverting agency resources and priorities from the more desirable goal of competitive employment outcomes.35

The practical challenge in measuring effectiveness impacts of IPS programs, relative to other employment programs that do not exclusively focus on competitive work outcomes, is to determine whether the appropriate weight given to the non-competitive outcomes should be zero or should be some positive value (albeit less than the value of a competitive outcome). To some extent this determination could reflect other measurable differences in job outcomes that have often been noted in the literature on IPS interventions, such as differences in pay and differences in non-monetary outcomes such as job satisfaction and self-esteem. Relevant data on these particular job outcomes are, however, often not reported in the literature.36

1.3 Impacts on Other (Competitive) Employment Outcomes

In addition to impacts on competitive employment rates, possible impacts of IPS on hours of work, weeks of work, and job tenure have also been documented in the literature. Bond et al. (2008) report that in the four RCTs in their review that reported such data, among study subjects who held any competitive job, the rate of working 20 hours or more in a competitive job at some time during the follow-up period averaged 43.6% for IPS versus 14.2% for controls.37 Data from Bond et al. (2008) for seven RCTs (five in the United States) indicate that average weeks worked per year in a competitive job (again, for those who held any competitive job) were 19.2 for IPS versus 18.9 for the controls. There was, however, considerable variability in this result across studies. Several studies (Drake et al., 1999; Mueser et al., 2004) found that IPS subjects with any competitive job worked roughly twice as many weeks as their counterpart control subjects, while the remaining United States studies reported smaller differences but still more weeks of work for IPS subjects, and the two non-United States studies reported slightly more weeks of work for the controls.

Another outcome indicator of particular concern in the IPS literature is job tenure. Bond et al. (2008) report data for six RCTs (including one non-United States study) on weeks worked at the longest-held competitive job. Differences between IPS and control subjects are quite small, with the overall average being 22.0 weeks for IPS subjects versus 16.3 for controls. An exception to this pattern is the Hartford study (Mueser et al., 2004), in which the IPS subjects averaged 25.5 weeks in their longest job while control subjects averaged only 4.4 weeks. Bond et al. (2008) also note that the relatively short follow-up periods for these RCTs truncate some jobs and create a downward bias in the measured duration of the longest-held job. While they suggest that this limitation argues for focusing on an alternative measure such as weeks worked, it is worth noting that the latter is at best a very indirect measure of job turnover rates. In a more recent manuscript (Bond and Kukla, 2010), an extended time frame is used to measure the longest job and job turnover for 142 clients from high-fidelity IPS programs who were each followed for 24 months after they began a competitive job. Results indicated an average tenure of 10.0 months in the initial job and an average of 1.92 jobs during the 24-month observation period. These results confirm the limitations of the longest-job statistics from the RCTs noted earlier.

Several studies have also reported IPS versus control or alternative treatment comparisons for job satisfaction. In their RCT of IPS versus GST (Drake et al., 1996), the authors reported that “(a)mong those who were working at the 18-month interview, there were…no group differences in satisfaction with job.” Similarly, in their RCT with inner-city clients, Drake et al. (1999) report that “…Indiana Job Satisfaction ratings (averaged over all jobs and all rating periods) were high for both groups and revealed no group difference….” In the Hartford study (Mueser et al., 2004), the same rating scale was applied and revealed no differences between groups.

In the Bond et al. (2007) RCT comparison of IPS versus DPA, job satisfaction was measured, for the first paid job and the longest-held paid job, at 2 weeks, 3 months, and 6 months after the start of the job. IPS participants reported higher job satisfaction at 2 weeks, for both the first and longest-held jobs, but no differences were observed at 3 months and 6 months. It is also interesting to note that, with the exception of the RCT of IPS versus GST (Drake et al., 1996), the comparisons just cited included persons holding non-competitive as well as competitive jobs.

2. Non-vocational Dimensions of IPS Effectiveness

Several of the RCTs reviewed by Bond et al. (2008) reported on detailed tests for non-vocational outcomes, but none reported significant differences between IPS treatment groups and control/comparison groups. Drake et al. (1999) administered the Global Assessment Scale, the expanded Brief Psychiatric Rating Scale, the Rosenberg Self-Esteem Scale, and sections of the Quality of Life Interview. They observed increases over time in both study groups and interpreted this result to mean that:

“…participating in a vocational program is associated with improvements…in non-vocational outcomes such as quality of life and self-esteem. Because participants in the two programs engaged in similar rates but different types of work, our data suggest that type of work is less important than participating in a vocational program for the effect on non-vocational outcomes.”

In their analysis of the Hartford RCT, Mueser et al. (2004) assessed psychiatric symptoms, overall functioning, social functioning and social networks, quality of life, and self-esteem with interviews conducted at baseline and every 6 months over the 2-year follow-up. They did report evidence that clients in the PSR group showed more satisfaction with their social relationships over time than clients in the other two study groups (IPS or standard services). However, no other evidence of group differences in non-vocational outcomes was observed.

In their RCT of IPS SE versus GST, Drake et al. (1996) gathered similar information on global functioning, quality of life, self-esteem, and psychiatric symptoms. Improvement over time in several of these domains was observed for both study groups, but no significant group differences were reported. Lehman et al. (2002) measured quality of life, self-esteem, work motivation, medication attitudes, general health, and social network. However, no data comparing their treatment and control groups on these measures were reported. One other RCT (Bond et al., 2007) reported that data for treatment versus control group comparisons on non-vocational outcomes (social networks inside and outside the workplace, hospitalizations, independent living, psychiatric symptoms, and quality of life) would be reported in later publications but we are not aware that this has yet occurred. Finally, Twamley et al. (2008) reported that persons in their trial who obtained competitive work had a significant increase (relative to those persons who did not obtain competitive employment) in the Global Satisfaction items from the Quality of Life Index, but results of intent-to-treat comparisons between their two study groups were not reported.38

3. Recent Tests of Enhancements to the IPS Model

A fundamental departure of the IPS model from previous VR approaches is the notion that training activities prior to job placement are less effective than placing someone in employment and then tailoring any post-placement training and support to the needs of the job. It is therefore of interest that recent tests of enhancements to the IPS model have involved the addition of training activities that are not linked to specific jobs but rather are targeted at more general obstacles to success on the job. The motivation for testing these enhancements arises from concerns about short job tenure and unsuccessful terminations (being fired, or voluntarily quitting without having another job already in place) (McGurk et al., 2005; Mueser et al., 2005).

One such promising enhancement is cognitive training. In a recent RCT, McGurk et al. (2005) tested the addition of a cognitive training intervention (the “Thinking Skills for Work Program”) to ongoing SE services for clients who had experienced a job failure (defined as being fired from a job held for less than 3 months or a voluntary quit during the same 3-month time frame without having another job in place). While the sample size for the study was small (44 clients), 12-month follow-up differences were significant, with the treatment group reporting more jobs, more hours of work, and higher wages. For each of these outcomes, the treatment group result was more than ten times that of the control group.

A second type of enhancement, tested by Mueser et al. (2005), was the workplace fundamentals program, a skills training program designed to improve clients’ skills in “identifying workplace stressors” and “problem solving,” and thereby to improve job performance. In this study, 35 clients in an IPS SE program were randomly assigned, within approximately 2 months after obtaining a job, either to the training program or to a control group. Each group was then followed for an 18-month period, but no significant differences in job tenure, job turnover, hours or days worked, or wages were observed. The authors note that this result contrasted with an earlier RCT finding (Wallace and Tauber, 2004) that the workplace fundamentals program had in fact reduced turnover; they note that the earlier study was limited to clients with a recent history of unsuccessful job outcomes, and they speculate that the lack of significance in their own results may be due to the use of inclusion criteria that did not require recent job failures.

Finally, in a recent study from Hong Kong, Tsang et al. (2009) used a three-group RCT design to compare: (1) an enhanced IPS SE program that was integrated with a social skills training program; (2) a standard IPS SE program; and (3) a “traditional” VR program. The authors reported that at the 15-month follow-up, the employment rate for the enhanced IPS group was significantly greater than that for the standard IPS group (78.8% versus 53.6%) and that both were far above the rate for the traditional group (7.3%). The enhanced IPS group also showed significantly longer job tenure than the standard IPS group. Results at 7-month and 11-month follow-ups paralleled the results at 15 months. While the study appears to provide strong evidence that social skills training is a useful enhancement to standard IPS, several caveats are required. First, the generalizability of the intervention and results from Hong Kong to a United States setting is unclear. Second, Bond et al. (2008) cited an earlier report on the Hong Kong study and commented that their own literature review excluded this study because they could not verify the fidelity of the IPS intervention and because the outcomes included “partially competitive” employment. It is not clear if these concerns apply to Tsang et al. (2009), since the publication has an extensive discussion of fidelity assurance procedures and since there is no mention in the paper of “partially competitive” employment.

4. Variations in IPS Effectiveness with Client Characteristics

The population of persons with severe/chronic mental illness who are potential candidates for an IPS SE intervention is heterogeneous in terms of diagnoses, mental health status and symptoms, education, work history, benefit recipiency status (especially vis-à-vis SSDI and SSI), and other relevant socio-demographic and economic characteristics. Consequently, it is useful to know whether the effectiveness of IPS SE services varies across sub-populations, and thus whether targeting strategies that focus expansion policies on particular sub-populations could strongly influence overall effectiveness.

Some direct evidence from the literature on variations in the IPS “treatment effect” among types of SE clients can be inferred by comparing results of studies directed at different populations (e.g., inner-city residents, middle-aged and older adults) but differences among these studies also include variations in other factors such as comparison conditions (e.g., “traditional” vocational programs, PSR programs, diversified placement programs, etc.) that preclude attribution of observed differences in results entirely to differences in client characteristics.

A clearer test of variations in effectiveness by client characteristics can be provided within individual studies by estimating interactions of the treatment variable with these characteristics. Several of the studies reviewed here in fact tested for such interactions. Drake et al. (1996) tested for interactions of IPS with age, work history, and diagnosis (schizophrenia versus affective disorders) and found no significant effects. Mueser et al. (2004) tested for interactions on employment outcomes between the three study groups and the following client characteristics: gender, education, diagnosis, work history (employed in the past 5 years versus not), and substance use disorder at baseline. Again there were no significant interaction effects observed.

We note that these findings are consistent with the interpretation, from Drake et al. (1996), that “(t)he lack of interactions between program type and individual characteristics indicates that IPS was the preferred program for all of the groups that were considered.” Given the relatively small sample sizes in these studies, it is also reasonable to question the power of these analyses to reject the null hypothesis of no interaction effects. This concern was one of the motivations for the recent meta-analysis by Campbell et al. (2009). This analysis used data from four RCTs (Bond et al., 2007; Mueser et al., 2004; Drake et al., 1996; Drake et al., 1999). Bivariate interactions on effect sizes and signs were investigated between IPS and the following types of patient characteristics: work history, demographic characteristics, benefit recipiency, homelessness, primary diagnosis, psychiatric symptoms, substance abuse, and prior hospitalizations. In the large majority of the comparisons, the IPS treatment effect was significant for all patient sub-groups, leading the authors to conclude that IPS is an effective intervention for all patient sub-groups.

However, if we are to use a CEA framework to examine possible issues about targeting of new IPS programs, information is required on the size of the IPS impacts, within each sub-group, on the specific effectiveness measure employed. For example, if the employment rate is the relevant effectiveness measure, we need estimates, within each sub-group, of the increase in the probability of employment resulting from IPS versus the control condition. Unfortunately, a standardized mean difference effect size, as is commonly used in meta-analysis and as used by Campbell et al. (2009) for IPS impacts on employment rates, does not provide information on this increase in employment probability. The same comment applies to standardized effect size measures for the continuous outcome variables (weeks worked and job tenure) examined by Campbell et al. (2009). In these cases, the unstandardized IPS impacts within each sub-group would be more relevant from a cost-effectiveness perspective for purposes of targeting. Moreover, as the comments from Campbell et al. (2009) suggest, a larger data set would be required to simultaneously control statistically for multiple client characteristics if we seek more detailed criteria for defining sub-groups and making any proposed targeting more precise.39, 40
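To illustrate the distinction with entirely hypothetical sub-group employment rates: a standardized effect size (here Cohen’s h for proportions, used purely as an example and not as the measure applied in the meta-analysis) can be similar across sub-groups even when the absolute increases in employment probability, which are what a CEA needs, differ.

```python
import math

def risk_difference(p_ips, p_control):
    # unstandardized impact: the absolute increase in employment probability
    return p_ips - p_control

def cohens_h(p_ips, p_control):
    # a standardized effect size for a difference in proportions
    return 2 * math.asin(math.sqrt(p_ips)) - 2 * math.asin(math.sqrt(p_control))

# Two hypothetical sub-groups with similar standardized effects but different
# absolute gains in employment probability.
print(round(risk_difference(0.60, 0.25), 2), round(cohens_h(0.60, 0.25), 2))  # -> 0.35 0.72
print(round(risk_difference(0.30, 0.05), 2), round(cohens_h(0.30, 0.05), 2))  # -> 0.25 0.71
```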

5. Limitations of Studies to Date

We have already noted or implied in previous comments several concerns about limitations in the current literature, including in particular: (1) the relatively short time frame for follow-up and the measurement of costs and effectiveness; and (2) the relatively small size of the samples in the individual studies. With regard to the first limitation, one solution that has emerged is to mount more extended follow-ups that draw upon self-reported data from clients.

We also anticipate progress with respect to the second limitation on several fronts. First, application of meta-analysis methods, as in Campbell et al. (2009), is being employed to combine data sets and bring additional power to hypothesis testing and effectiveness measurement. However, in combining data sets from different study sites, we need to choose sites that are homogeneous with respect to both the intervention and the control/comparison condition. Bond et al. (2008) noted a concern about intervention homogeneity regarding the sites included in the Employment Intervention Demonstration Project (Cook et al., 2005); however, control condition homogeneity is also important for combining estimates of treatment effects (since these are, of course, estimates of the difference between the control and treatment conditions). Thus, in a pooled statistical analysis across sites that differ widely in their control conditions, what is required is to allow for interactions between site and treatment, so as to obtain treatment effects that vary by site; simply controlling for a site effect that uniformly affects both control and treatment groups within a site is not sufficient.
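A sketch of this point using simulated data appears below; the variable names, sites, and effect sizes are invented for illustration, and the statsmodels formula interface is simply one convenient way to fit the two specifications.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({
    "site": rng.integers(0, 3, n),   # three hypothetical study sites
    "ips": rng.integers(0, 2, n),    # 1 = IPS, 0 = the site's control condition
})
# Simulated outcome in which the treatment effect differs across sites
df["weeks_worked"] = (5 + 2 * df["site"]
                      + (4 + 3 * df["site"]) * df["ips"]
                      + rng.normal(0, 2, n))

# Site effect only: forces a single pooled treatment effect
pooled = smf.ols("weeks_worked ~ C(site) + ips", data=df).fit()
# Site-by-treatment interaction: allows treatment effects to vary by site
interacted = smf.ols("weeks_worked ~ C(site) * ips", data=df).fit()

print(pooled.params["ips"])                   # one averaged effect
print(interacted.params.filter(like="ips"))   # baseline-site effect plus site-specific shifts
```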

Second, there is at least one large-scale study that is currently under way (Frey et al., 2008) with a study sample size of more than 2,000 persons; hopefully this will be the first of a series of such studies. (Note, however, that this study involves a combined intervention of IPS SE and systematic medication management.) With the additional power of meta-analyses and larger-scale studies, it should be possible to explore more detailed questions relating, for example, to IPS interactions with client characteristics and to IPS effects on relatively infrequent but important outcomes such as use of mental health care resources for inpatient treatments or psychiatric crisis services.

A third concern is the very limited amount of research pertaining to IPS program impacts on non-earned income, including income from public sector benefit programs. As noted above, this information can easily be combined with information on earnings impacts to measure more accurately the social costs of IPS programs. A fourth concern relating to costs is the need to develop more information on the mental health service cost and somatic health service cost impacts of IPS services.

A fifth concern is the need to make study results more broadly generalizable by extending trials of IPS interventions to populations that have thus far been excluded from, or under-represented in, the studies we have reviewed. An important example is individuals with a recent onset of schizophrenia or another serious mental disorder. One recent small-scale RCT in Australia demonstrated significant impacts on employment and hours of work for first-episode patients (Killackey et al., 2008), and a recent RCT in the United States that adapted IPS to incorporate supported education as well as employment has been successfully implemented (Nuechterlein et al., 2008), though results of the intervention have not yet been reported. Several other trials that incorporate IPS models for supported education and employment are also being fielded under the auspices of the NIMH RAISE project.

Sixth, the research to date has provided little information on the subject enrollment process and the reasons why some clients choose not to participate in IPS services. This information could help us understand these selection factors and the generalizability of the research results, and perhaps assist in developing other approaches to client recruitment that would expand the reach of available IPS services. While we have some very limited information suggesting that persons who enroll have characteristics similar to the general population of persons served in the public mental health system (Drake et al., 1996), we also have data from a variety of sources indicating that a large fraction of individuals with severe mental illness do not have jobs now but would like to work. It seems likely that limits on the geographic availability of IPS program services restrict take-up rates for such services, and it has been reported that the inclusion criteria for IPS trials have been fairly lenient (apart from the requirement that clients have an interest in competitive work) (Bond et al., 2009). However, we have little or no information about: (1) the fraction of the eligible population within IPS program catchment areas who in fact enrolled; or (2) the reasons why those who were eligible did not enroll.

Finally, we should bear in mind important differences between randomized trials and the actual workings of policies to implement and expand access to and use of IPS SE services. The enrollment process in a randomized trial is different from, and perhaps more resource-intensive than, the outreach and intake process for a newly established or expanded IPS SE program. Thus, lessons from the experience with enrollment in RCTs may not necessarily apply to the take-up experience of new or expanded IPS SE programs outside the context of an RCT. Similarly, the client experience in an RCT may differ because of the need to participate in the research aspects of the trial, and this may have consequences for attrition and perhaps even for the effectiveness of the intervention. Therefore, we should not view the already substantial RCT literature as obviating the need for careful evaluation (either retrospective or prospective) of a policy to expand access to and use of IPS SE services outside of the RCT context.

6. Summary and Conclusions

The available literature demonstrates a consistent and substantial short-term impact of IPS SE services on competitive employment outcomes; evidence of longer-term impacts is sparser but suggests that when these services continue to be available over a longer time period, competitive employment gains may be fairly durable. We have also accumulated a fair amount of evidence about the short-term cost per client-year of IPS programs, but we have much less information about per client-year costs on a longer-term basis.

With regard to cost offsets, the literature contains at least several examples suggesting a fairly small incremental cost of IPS SE services for persons moved from day-treatment programs or other resource-intensive vocational programs. This suggests that incremental cost projections for policies to expand use of IPS SE services should take into account the current service use profiles of the populations targeted for these expansions. In a time of stringent resource constraints, policies that expand IPS use while reducing expenditures on other “substitute” services seem to be a very attractive first option. Moreover, the fragmentary data cited above suggest that the scope for IPS SE expansion as a substitute for other vocational services is fairly large; and the facts that: (1) only about 25% of adults with SMI are currently receiving any vocational services; and (2) the majority of these adults express a desire to work (McQuilken et al., 2003; Mueser et al., 2001) suggest that there is also potential for extending IPS SE services to persons not currently receiving vocational services.

The consistent evidence of IPS effectiveness in promoting competitive employment suggests that broader policy efforts to expand IPS use (to persons not now receiving vocational services) may also be warranted on social CEA grounds, but that such efforts should be monitored and evaluated carefully. Our evidence base to date on longer-term IPS program costs, and on virtually all other elements of social cost enumerated in the Introduction to this review, is very limited and clearly needs to be expanded before longer-term and more definitive judgments about the desirable scope of policies to expand IPS services can be reached.41

On the effectiveness side of the ledger (i.e., the denominator of the social CE ratio for expanding IPS), we need to consider how we can synthesize and/or modify the various dimensions of effectiveness in the current literature to produce a single comprehensive effectiveness measure (akin to a “QALY”) that can be used to systematically compare further investments in IPS services against investments in other types of mental health services and investments in somatic health services. I am not aware that developmental work on such measures is now proceeding.

Finally, we need to focus additional research and development efforts on increasing IPS effectiveness via “tailoring” or enhancements of “standard” IPS services to groups of persons who are not now being reached by “standard” services or for whom outcomes have been less positive. Examples include recent-onset patients for whom the IPS model will presumably have to expand to include supported education, and cognitive enhancement interventions for IPS clients who have experienced difficulties in obtaining or holding jobs.

REFERENCES

Bailey EL, Ricketts SK, Becker DR, Xie H, and Drake RE. Do long-term day treatment clients benefit from supported employment? Psychiatric Rehabilitation Journal 22(1): 24-29 (1998).

Bailey MJ. Reducing Risks to Life. Washington, DC: American Enterprise Institute (1980).

Becker DR, Bond GR, McCarthy D, et al. Converting day treatment centers to supported employment programs in Rhode Island. Psychiatric Services 52: 351-357 (2001).

Becker D, Whitley R, Bailey EL, and Drake RE. Long-Term Employment Trajectories Among Participants With Severe Mental Illness in Supported Employment. Psychiatric Services 58: 922-928 (2007).

Bond GR. Supported employment: Evidence for an evidence-based practice. Psychiatric Rehabilitation Journal 27(4): 345-359 (2004).

Bond GR, Dietzen LL, McGrew JH, and Miller LD. Accelerating entry into supported employment for persons with severe psychiatric disabilities. Rehabilitation Psychology 40(2): 75-94 (1995).

Bond GR, Dietzen LL, Vogler K, Katuin CH, McGrew JH, and Miller LD. Toward a framework for evaluating costs and benefits of psychiatric rehabilitation: three case examples. Journal of Vocational Rehabilitation 5: 75-88 (1995a).

Bond GR, Drake RE, and Becker DR. An update on randomized controlled trials of evidence-based supported employment. Psychiatric Rehabilitation Journal 31(4): 280-290 (2008).

Bond GR, and Kukla M. Is Job Tenure Brief in IPS Supported Employment Programs? Unpublished manuscript, Department of Psychiatry, Dartmouth College (August 4, 2010).

Bond GR, and Kukla M. Impact of follow-along support on job tenure in the individual placement and support model. Journal of Nervous and Mental Disease 199(3): 150-155 (March 2011).

Bond GR, Resnick SG, Drake RE, et al. Does competitive employment improve nonvocational outcomes for people with severe mental illness? Journal of Consulting and Clinical Psychology 69(3): 489-501 (2001).

Bond GR, Salyers MP, Roudebush RL, et al. A randomized controlled trial comparing two vocational models for persons with severe mental illness. Journal of Consulting and Clinical Psychology 75(6): 968-982 (2007).

Bond GR, Xie H, and Drake RE. Can SSDI and SSI beneficiaries with mental illness benefit from evidence-based supported employment? Psychiatric Services 58: 1412-1420 (2007).

Burns T, Catty J, Becker T, et al. The effectiveness of supported employment for people with severe mental illness: a randomized controlled trial. Lancet 370: 1146-52 (2007).

Burns T, Catty J, White S, et al. The impact of supported employment and working on clinical and social functioning: Results of an international study of individual placement and support. Schizophrenia Bulletin 35(5): 949-958 (2009).

Bush PW, Drake RE, Xie H, McHugo GJ and Haslett WR. The long-term impact of employment on mental health service use and costs for persons with severe mental illness. Psychiatric Services 60: 1024-1031 (2009).

Campbell K, Bond GR, and Drake RE. Who benefits from supported employment: A meta-analytic study. Schizophrenia Bulletin (advance access published August 6, 2009).

Clark RE. Supported employment and managed care: Can they coexist? Psychiatric Rehabilitation Journal 22: 62-68 (1998).

Clark RE, Bush PW, Becker DR, and Drake RE. A cost-effectiveness comparison of supported employment and rehabilitative day treatment. Administration and Policy in Mental Health 24(1): 63-76 (September 1996).

Clark RE, Dain BJ, Xie H, et al. The economic benefits of supported employment for persons with mental illness. Journal of Mental Health Policy and Economics 1: 63-71 (1998).

Clark RE, Xie H, Becker DR, and Drake RE. Benefits and costs of supported employment from three perspectives. Journal of Behavioral Health Services and Research 25(1): 22-34 (February 1998).

Cook JA, et al. Integration of psychiatric and vocational services: A multisite randomized, controlled trial of supported employment. American Journal of Psychiatry 162(10): 1948-1956 (October 2005).

Cook JA, et al. Estimated payments to employment service providers for persons with mental illness in the Ticket to Work program. Psychiatric Services 57: 465-471 (April 2006).

Dixon L, Hoch JS, Clark R, Bebout R, Drake R, McHugo G, and Becker D. Cost-effectiveness of two vocational rehabilitation programs for persons with severe mental illness. Psychiatric Services 53: 1118-1124 (2002).

Drake RE, Becker DR, Clark RE, and Mueser KT. Research on the individual placement and support model of supported employment. Psychiatric Quarterly 70(4): 289-301 (Winter 1999).

Drake RE, McHugo GJ, Bebout RR, Becker DR, Harris M, Bond GR and Quimby E. A randomized clinical trial of supported employment for inner-city patients with severe mental disorders. Archives of General Psychiatry 56: 627-633 (July 1999).

Drake RE, Skinner JS, Bond GR, and Goldman HH. Social Security and mental illness: Reducing disability with supported employment. Health Affairs 28(3): 761-770 (May/June 2009).

Frey WD, Azrin ST, Goldman HH, et al. The Mental Health Treatment Study. Psychiatric Rehabilitation Journal 31(4): 306-312 (2008).

Greig TC, Zito W, Wexler BE, and Bell MD. Effects of cognitive remediation on supported employment: A randomized clinical trial. Schizophrenia Bulletin 31: 525 (2005).

Health Management Consultants, Inc. Evaluation of the Adequacy of the Rates for Evidence Based Best Practice Supported Employment Services in the Public Mental Health System. Report to the State of Maryland Department of Health and Mental Hygiene, Mental Hygiene Administration and Medicaid Administration (December 2006).

Henry AD, Lucca AM, Banks S, Simon L, and Page S. Inpatient hospitalizations and emergency service visits among participants in an individual placement and support (IPS) model program. Mental Health Services Research 6(4): 227-237 (December 2004).

Killackey E, Jackson HJ, and McGorry PD. Vocational intervention in first episode psychosis: A randomized controlled trial of individual placement and support vs. treatment as usual. British Journal of Psychiatry 193: 114-120 (2008).

Kraemer HC. Reconsidering the odds ratio as a measure of 2x2 association in a population. Statistics in Medicine. Special Issue: Statistical Methodology in Alzheimer's Disease Research II 23(2): 257-270 (January 30, 2004).

Latimer EA. Economic impacts of supported employment for persons with severe mental illness. Canadian Journal of Psychiatry 46: 496-505 (August 2001).

Latimer EA, Bush PW, Becker DR, Drake RE and Bond GR. The cost of high-fidelity supported employment programs for people with severe mental illness. Psychiatric Services 55(4): 401-406 (April 2004).

Lehman AF, Goldberg R, Dixon LB, et al. Improving employment outcomes for persons with severe mental illness. Archives of General Psychiatry 59: 165-172 (2002).

McGuire AB, Bond GR, Kukla M, and Clendenning D. Prediction of Competitive Employment Outcome from Service Intensity in IPS Supported Employment. ACT Center of Indiana, unpublished manuscript (2010).

McGurk SR, and Mueser KT. Cognitive and clinical predictors of work outcomes in clients with schizophrenia receiving supported employment services: 4-year follow-up. Administration and Policy in Mental Health and Mental Health Services Research 33: 598-606 (2006).

McHugo GJ, Drake RE, and Becker DR. The durability of supported employment effects. Psychiatric Rehabilitation Journal 22(1): 55-61 (Summer 1998).

McQuilken M, Zahniser JH, Novak J, Starks RD, Olmos A, and Bond GR. The Work Project Survey: Consumer perspectives on work. Journal of Vocational Rehabilitation 18: 59-68 (2003).

Meltzer D. Accounting for future costs in medical cost-effectiveness analysis. Journal of Health Economics 16: 33-64 (1997).

Meltzer D. Future costs in medical cost-effectiveness analysis. In Jones AM, ed., The Elgar Companion to Health Economics. Cheltenham, UK: Edward Elgar Publishing (2006).

Mueser KT, Clark RE, Haines M, et al. The Hartford Study of Supported Employment for Persons With Severe Mental Illness. Journal of Consulting and Clinical Psychology 72(3): 479-490 (2004).

Mueser KT, Salyers MP, and Mueser PR. A prospective analysis of work in schizophrenia. Schizophrenia Bulletin 27: 281-296 (2001).

National Disability Rights Network. Segregated and Exploited: The Failure of the Disability Service System to Provide Quality Work (January 2011).

Nuechterlein KH, Subotnik KL, Turner L, et al. Individual placement and support for individuals with recent-onset schizophrenia: Integrating supported education and supported employment. Psychiatric Rehabilitation Journal 31: 340-349 (2008).

Perkins DV, Born DL, Raines JA, and Galka SW. Program evaluation from an ecological perspective: Supported employment services for persons with serious psychiatric disabilities. Psychiatric Rehabilitation Journal 28(3): 217-224 (Winter 2005).

Rogers ES, Sciarappa K, and MacDonald-Wilson K. A benefit-cost analysis of a supported employment model for persons with psychiatric disabilities. Evaluation and Program Planning 18(2): 105-115 (1995).

Rosenheck R, Stroup S, Keefe RSE, et al. Measuring outcome priorities and preferences in people with Schizophrenia. British Journal of Psychiatry 187: 529-536 (2005).

Salkever DS. Tickets without takers? Potential economic barriers to the supply of rehabilitation services to beneficiaries with mental disorders. In Bell S and Rupp K (eds.), Paying for Results in Vocational Rehabilitation: Will Provider Incentives Work for Ticket to Work? Washington, DC: The Urban Institute (2003).

Salyers MP, Becker DR, Drake RE, Torrey WC, and Wyzik PF. A ten-year follow-up of a supported employment program. Psychiatric Services 55(3): 302-308 (March 2004).

Schneider J. Is supported employment cost effective? A review. International Journal of Psychosocial Rehabilitation 7: 145-156 (2003).

Schneider J, Boyce M, Johnson R, et al. Impact of supported employment on service costs and income of people with mental health needs. Journal of Mental Health 18(6): 533-542 (December 2009).

Tsang HWH, Chan A, Wong A, and Liberman RP. Vocational outcomes of an integrated supported employment program for individuals with persistent and severe mental illness. Journal of Behavior Therapy and Experimental Psychiatry 40: 292-305 (2009).

Twamley EW, Narvaez JM, Becker DR, Bartels SJ, and Jeste DV. Supported Employment for Middle-Aged and Older People with Schizophrenia. American Journal of Psychiatric Rehabilitation 11(1): 76-89 (January 2008).

Wexler BE, and Bell MD. Cognitive remediation and vocational rehabilitation for schizophrenia. Schizophrenia Bulletin 31(4): 931-941 (2005).

NOTES

  1. See, for example, Table 1 in Cook et al. (2005) for information on variability in hours of vocational service received by different clients.

  2. See refs. 13-18 in Bush et al. (2009).

  3. An earlier study with a similar research design (McHugo et al., 1998) followed enrollees in both IPS and an alternative vocational program for 24 months after an initial 18-month study period. Those enrollees who chose to use vocational services in the follow-up used, on average, only 1 hour of service per month. (Separate figures were not reported for the IPS enrollees.)

  4. This study also provides a brief review of findings from earlier literature relating to IPS service intensity and its correlation with employment outcome.

  5. It should also be noted that any observed patterns of variations in SE costs over long follow-up periods may also be strongly affected by changes in funding (or, in the extreme case, discontinuation of programs).

  6. HMC also noted that the operating losses of the SE providers would be eliminated by this increase in caseloads. Moreover, they observed that margins would be even more positive if there was an increase in the number of clients for whom extended support payments are made (presumably because they are working), per employment specialist, from the study average of 4.8% to 7.4%. Assumptions about changes in costs accompanying such a change were not stated.

  7. Note that all costs cited in this paragraph are adjusted to 2005 dollars, using the CPI.
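
    As a point of clarification (a generic formula, not one stated in the studies cited), the CPI adjustment converts a cost incurred in year t into 2005 dollars as \( \mathrm{cost}_{2005} = \mathrm{cost}_{t} \times \mathrm{CPI}_{2005} / \mathrm{CPI}_{t} \).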

  8. One other recent study (Perkins et al., 2005) presented information on costs of SE services in community mental health centers (CMHCs) in Indiana but reported costs separately for employed versus not employed clients; information needed to produce an overall average cost per client per year was not provided.

  9. On this last point, see also Clark (1998) and Drake et al. (1999).

  10. These statistics are from the National Alliance on Mental Illness web site accessed on February 27, 2011.

  11. According to the Substance Abuse and Mental Health Services Administration's 2009 Center for Mental Health Services Uniform Reporting System Output Tables, 51,027 adults received evidence-based SE services from providers who were part of the State Mental Health Agency system. Figures on adults with SMI receiving any vocational services are at best suggestive. The number of all persons in sheltered workshops as of January 2010 appears to be in excess of 400,000 (National Disability Rights Network, undated); presumably a large fraction of these persons have SMI, though the precise number is not known. Other types of programs that often provide at least some vocational services include partial hospital (day-treatment) programs, clubhouses, and other psycho-social rehabilitation programs.

  12. The declines for both groups could reflect regression to the mean phenomena, more general mental health service system trends or, possibly, the fact that both study interventions resulted in offsets of inpatient costs. A treatment-as-usual control group would be needed to test the last of these possibilities.

  13. One study cited by Latimer did report substantial reductions in hospitalizations and ER/crisis services, as well as in treatment costs, from before to after enrollment in an SE program (Rogers et al., 1995). The importance of these findings was, however, diminished by the very small sample size (19 subjects) and the reliance on before-after comparisons with no comparison or control group. Latimer also notes that the intervention in this study was apparently very intensive (costing $7,128 per client per year in 1990 dollars), and suggests that it actually approximated an assertive community treatment (ACT) program because of the wide range of services (including case management services) provided by the ESs. Latimer suggests that the intervention in this case did not, in fact, have fidelity to the SE model.

  14. The authors do present graphic evidence, however, of differences in baseline and follow-up work hours for the two components of the “minimum work” group. In particular, the “late work” group did have a baseline work level that was small and did not increase until year 3 of the follow-up, after which it increased steadily but remained well below the level of the mean hours for the steady work group. The “no work” group had a mean of zero hours in the baseline and remained very low in the follow-up years with essentially no upward trend.

  15. Outpatient services hours were the sum of direct service hours (including therapy, medication checks, day-treatment, and case management) and mental illness management services.

  16. The possible overstatement of significance levels (i.e., understatement of p-values) may also be a concern here; error terms may be clustered within individuals over time and between individuals from the same mental health center, and there is no indication that the authors corrected for clustering in computing standard errors for their regression coefficients (a stylized sketch of such a correction appears at the end of this note).

    In addition, it would be interesting to know how the decision to group the late work and no work groups impacted the significance of the authors’ findings. They do indicate that these two groups did not differ significantly in their service use and cost outcome measures; but it would also be helpful to know the statistical results obtained from their regressions when either: (1) all three groups were allowed to have differential outcome trends; or (2) the no work group was kept separate and the steady work and late work groups were combined into a single grouping.
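
    As a stylized illustration of the kind of clustering correction mentioned above (synthetic data and hypothetical variable names, not the authors’ actual specification or data), the Python sketch below compares conventional and cluster-robust standard errors, clustering at the mental health center level using the statsmodels package:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Synthetic panel: clients nested within mental health centers, repeated periods.
        rng = np.random.default_rng(0)
        n_centers, n_clients, n_periods = 8, 25, 4
        n_obs = n_centers * n_clients * n_periods
        center = np.repeat(np.arange(n_centers), n_clients * n_periods)
        steady_work = rng.integers(0, 2, n_obs)               # hypothetical work-group indicator
        center_shock = rng.normal(0, 20, n_centers)[center]   # induces within-center correlation
        treatment_cost = 100 - 10 * steady_work + center_shock + rng.normal(0, 30, n_obs)

        X = sm.add_constant(pd.DataFrame({"steady_work": steady_work}))
        naive = sm.OLS(treatment_cost, X).fit()               # assumes independent errors
        clustered = sm.OLS(treatment_cost, X).fit(            # cluster-robust, by center
            cov_type="cluster", cov_kwds={"groups": center})
        print(naive.bse["steady_work"], clustered.bse["steady_work"])

    With data of this structure, the cluster-robust standard error is typically noticeably larger than the conventional one, which is why uncorrected p-values can overstate significance.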

  17. It should be noted that evidence of positive associations between employment and other positive non-vocational outcomes (improvements in symptoms, satisfaction with leisure, finances, and self-esteem) has also been cited as evidence of a possible employment effect on treatment cost savings. For a recent brief discussion and references, see Burns et al. (2008). More direct evidence of IPS SE effects on these positive non-vocational outcomes is discussed below.

  18. Note that attrition here refers to drop-out from SE services from all causes, including retirement, mortality, declining health, functional recovery, etc.

  19. There is also a question as to whether other non-vocational services provided by these “traditional” programs would, or should, continue to be provided even in the presence of expanded SE programs.

  20. Note that even if there is an SE program impact on clients’ savings in the short run, the longer run (life-cycle) impact on savings may still be zero if clients’ bequests are not affected.

  21. One would expect that the more typical case is one in which the SE program reduces non-competitive earnings while increasing competitive earnings. Here again, the correspondence between net earnings impacts and impacts on net private consumption depends upon the relationships of the two types of earnings to the worker’s marginal product.

  22. However, if payroll tax payments increase the individual’s future consumption by increasing their future Social Security benefits, we cannot argue that they are unrelated to the individual’s private consumption.

  23. Practical measurement issues raised by these qualifications, such as the problem of measuring clients’ sales tax payments on their private consumption, are also formidable.

  24. This is the approach used by Drake et al. (2009).

  25. One of the 17 currently working clients was reported as holding a “volunteer” job.

  26. Figures reflect purchasing power of dollars in 1999 and 2000, and are approximate because mean earnings are not exactly equal to the product of mean hours and mean hourly wages.
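
    The approximation arises because the mean of a product equals the product of the means only when the two quantities are uncorrelated across clients; in general (a standard identity, not a result reported in the cited studies), \( \overline{hw} = \bar{h}\,\bar{w} + \widehat{\mathrm{Cov}}(h,w) \), where \(h\) denotes hours worked, \(w\) the hourly wage, and the covariance (computed across clients with the 1/n convention) captures whether clients who work more hours also tend to earn higher or lower wages.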

  27. It should be noted that during the 24-month follow-up period, the mix of vocational services received by each study client was not restricted to the services they were assigned to during the randomized trial. Also, during the follow-up period, 37% and 44% of the IPS and GST groups (respectively) received no vocational services.

  28. Becker et al. (2007) also provide some fragmentary information that relates indirectly to earnings over a much longer-term follow-up. They report that in 8- to 12-year follow-up re-interviews of 38 IPS SE clients, 89% were receiving benefits from Social Security, including 26% receiving Supplemental Security Income (SSI) and 74% receiving Social Security Disability Insurance (SSDI); 71% reported being covered by Medicaid. Thus, to the extent that earnings gains continued over the longer term for these clients, they were still modest relative to the eligibility thresholds for public sector benefits. These figures are also similar to the finding from Salyers et al. (2004) that 97% of clients followed up after 10 years were still receiving SSI or SSDI and 91% were still on Medicaid.

  29. Tax payments include individual income and payroll taxes as well as employer payroll taxes.

  30. This study also reports differences in levels and before-after changes in tax payments for IPS versus GST, but the implications for social cost of private net consumption are unclear because the reported tax figures appear to include employer and employee payroll tax payments as well as individual income tax payments. While it might be argued that income tax payments reduce private net consumption, the implications of payroll tax payments for the individual’s future social security benefits undercut this argument.

  31. For a recent example of an exploratory effort to develop such a measure, see Rosenheck et al., 2005.

  32. Of course, we recognize that comparisons with other public programs in non-health sectors would require a more broadly applicable “QALY” type measure, or (more realistically) a monetary measure (based on willingness-to-pay principles) for the benefits of the IPS program.

  33. The authors also note that "all participants in the study had worked in the past, and 84% had a history of consecutive employment for at least 12 months."

  34. Separate results were not reported for paid jobs or for competitive jobs.

  35. We also note that one could distinguish between no employment, volunteer employment, and paid non-competitive employment, on the grounds that even volunteer employment should be given some positive weight relative to no employment.

  36. The issue has been aptly framed by Drake et al. (1999):

    "Ultimately…the answer to this question may involve value: Do we believe it is better to integrate people with mental illness into mainstream society, or do we want to maintain separate work settings and keep them segregated from society?"

    While most people would presumably strongly agree with a preference for integration, incorporating the value of integration into a social CEA also requires that it be quantified.

  37. This is the average across all study participants.

  38. Other previous studies have examined differences between persons with SMI who are competitively employed and those who are not in terms of non-vocational measures such as symptoms, quality of life, and self-esteem. (See, for example, Bond et al., 2001.) As in the case of studies of the relationship between employment and mental health services (see Section I.5 above), the causal nature of such relationships is unclear.

  39. It should also be noted that measures such as unstandardized odds ratios or log odds ratio coefficients from multiple logistic regressions, while they are also used as a kind of effect size measure, are also problematic in this context for use with binary outcomes such as employment rates. The problem is that the odds ratio between the IPS and control conditions for one patient sub-group may be much larger than that for another patient sub-group even though the marginal impact of IPS on employment probabilities is the same for both sub-groups. For a discussion of this point in a different context, see Kraemer (2004).
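
    A purely hypothetical numerical illustration (not drawn from any cited study) makes the point; the short Python sketch below computes the IPS-versus-control odds ratio for two sub-groups that both show the same 10-percentage-point impact on the employment rate:

        def odds_ratio(p_ips, p_control):
            # Odds of employment under IPS divided by the odds under the control condition.
            return (p_ips / (1 - p_ips)) / (p_control / (1 - p_control))

        # Sub-group A: low baseline employment rate (5% control vs. 15% IPS)
        print(round(odds_ratio(0.15, 0.05), 2))   # 3.35
        # Sub-group B: higher baseline (45% control vs. 55% IPS), same +10-point impact
        print(round(odds_ratio(0.55, 0.45), 2))   # 1.49

    Despite identical marginal impacts on the employment probability, the two odds ratios differ by more than a factor of two.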

  40. One other study (Bond, Xie et al., 2007), which pooled data across four RCTs and looked at variations in IPS effects by SSDI/SSI beneficiary status, is also relevant to this discussion. Significant positive IPS effects on competitive employment rates were found for all four beneficiary status groupings (SSDI only, SSI only, dual recipients, and non-beneficiaries); the size of the IPS versus control difference in rates was somewhat smaller for non-beneficiaries but was not substantially different across the four groups.

  41. Another important reason for a better understanding of IPS program costs is the relationship of costs to any proposed payment arrangement intended to encourage providers to introduce/expand the IPS services that they offer. Prior experience with incentive payment arrangements for providers under the Social Security Ticket to Work Program suggests that implementing payment arrangements that will in fact elicit a substantial provider response may be a challenge for policy makers (Cook et al., 2006; Salkever, 2003).


