Report on Alternative Outcome Measures: Temporary Assistance for Needy Families (TANF) Block Grant. Federal-Level Experiences in Using Outcome-Based Performance Measures in the Welfare and Workforce Development Systems

12/01/2000

In spite of the potential challenges in using outcome-based performance measures in welfare-to-work programs, at the federal level both welfare and workforce development programs have increasingly relied on these measures to gauge program success and effectiveness. This section discusses the evolution of each of these outcome-based performance measurement systems and, to the extent possible, how they have addressed the specific challenges discussed above. In addition, the paper examines similarities as well as differences between the two systems.

The Use of Outcome Measures in the Welfare System

Welfare-to-work programs administered by the U.S. Department of Health and Human Services are increasingly shifting toward outcome-based performance measures to gauge program success. The welfare-to-work program that preceded the TANF program, the Job Opportunities and Basic Skills Training (JOBS) program, did not explicitly require the federal government to establish outcome-based performance measures. Rather, JOBS relied on two process measures, participation rates and the proportion of funds spent on long-term welfare recipients, as its primary measures of program accomplishments. Some studies found that the JOBS program was not sufficiently focused on employment, in part due to the nature of its performance measurement system (U.S. General Accounting Office, 1994, 1995).

The legislation governing the JOBS program did require the U.S. Department of Health and Human Services to develop recommendations for outcome-based measures. In its 1994 Report to Congress (U.S. Department of Health and Human Services, 1994), HHS developed a timeframe for developing these measures with measures to be put in place in 1996. However, the passage of PRWORA in 1996 superseded these plans. Possible performance measures mentioned in the 1994 Report to Congress included: percent of the cash assistance caseload that received aid for more than a specified period, the JTPA performance measures (see below), increases in employment and earnings of program participants after leaving the JOBS program, and retention of JOBS participants in unsubsidized employment.

The statute governing the TANF program contains more explicit guidance concerning the development of outcome-based performance measures. Like the JOBS statute, PRWORA detailed participation rates states are required to meet. However, it also required HHS to develop a "high performance bonus" to reward states based on their success in attaining the goals of the act, and to distribute a separate bonus to reward states based on their success in reducing out-of-wedlock births. For the high performance bonus, the law gave HHS, working with the states, discretion over what measures should be used. Congress was much more specific regarding the performance measure for out-of-wedlock births.

For the initial three years of the high performance bonus, HHS developed interim guidance that included outcome-based performance measures reflecting states' performance in moving individuals from welfare to work (U.S. Department of Health and Human Services, 1998, 1999, and 2000). The guidance included four key work measures for the high performance bonus: (1) the job entry rate; (2) the success in the workforce rate (which includes measures of both job retention and earnings gains); (3) the increase in the job entry rate; and (4) the increase in the success in the workforce rate. States use quarterly Unemployment Insurance (UI) records and other administrative data to calculate these measures. Bonuses are awarded to the ten states with the best performance on each measure.
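The ranking mechanics behind the high performance bonus can be illustrated with a minimal sketch. All state names and rates below are hypothetical; in practice each measure is computed from UI and administrative data as described above.

```python
# Illustrative sketch: award a bonus to the ten best-performing states
# on a single high performance bonus measure (e.g., the job entry rate).
# All data here are hypothetical, not actual state results.

def top_ten_states(job_entry_rates):
    """Return the ten states with the highest job entry rate."""
    ranked = sorted(job_entry_rates.items(), key=lambda kv: kv[1], reverse=True)
    return [state for state, rate in ranked[:10]]

# Hypothetical job entry rates (share of adult recipients entering employment)
rates = {f"State{i:02d}": 0.30 + 0.02 * i for i in range(1, 16)}
winners = top_ten_states(rates)
print(winners)  # the ten states with the highest hypothetical rates
```

Under the interim guidance, a ranking like this would be computed separately for each of the four work measures, so a state could win on one measure but not another.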

In the final rule for bonuses to be awarded in FYs 2002 and 2003, HHS retained the work measures (but changed the data source to the Federal Parent Locator Service/National Directory of New Hires) and added a measure of family formation and stability (using Census Bureau data), along with three measures of states' success in supporting work and self-sufficiency by providing low-income working families with health insurance (using data submitted by the states), food stamps (using Census Bureau data), and child care assistance (using Census Bureau data and data submitted by the states). Awards totaling $200 million per year will be made for bonus years 1999-2003.

The TANF program also includes a bonus for reducing out-of-wedlock births. For this bonus, the five states with the largest decrease in the ratio of out-of-wedlock births to total births (that also have a reduction in their abortion rates) will receive an award.(4) A total of $100 million per year is available for this bonus.
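The bonus criterion combines a ranking with an eligibility screen, which a short sketch makes concrete. All ratios and rates below are hypothetical.

```python
# Illustrative sketch of the out-of-wedlock birth bonus criterion:
# rank states by the decrease in the ratio of out-of-wedlock births to
# total births, considering only states whose abortion rate also fell.
# All figures are hypothetical.

def bonus_states(data, n=5):
    """data maps state -> (prior ratio, current ratio,
    prior abortion rate, current abortion rate)."""
    eligible = {
        state: prior - current                  # decrease in the birth ratio
        for state, (prior, current, ab_prior, ab_cur) in data.items()
        if ab_cur < ab_prior                    # abortion rate must have fallen
    }
    ranked = sorted(eligible.items(), key=lambda kv: kv[1], reverse=True)
    return [state for state, decrease in ranked[:n]]

data = {
    "A": (0.33, 0.30, 20.0, 19.0),  # ratio fell 0.03, abortion rate fell
    "B": (0.35, 0.29, 18.0, 17.5),  # ratio fell 0.06, abortion rate fell
    "C": (0.40, 0.32, 15.0, 16.0),  # largest ratio decrease, but ineligible
}
print(bonus_states(data))  # C is screened out despite its large decrease
```

Note how the screen changes the outcome: state C has the largest decrease in the ratio but is excluded because its abortion rate rose.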

While it is too early to assess the effects of the performance measurement system for the TANF program, it is important to note that the welfare system includes a number of mechanisms to deal with the potential issues in using outcome-based measures discussed above. By including measures based on program improvement, the TANF program adjusts somewhat, although imperfectly, for the lack of a level playing field. A program improvement measure allows states that are facing difficult economic conditions or serving a difficult caseload, and that would not otherwise receive a bonus, to obtain an award. The system also uses a range of measures, including job placement, job retention, and earnings progression, to gauge success. This provides some opportunity for programs with different or multiple goals to compete for a bonus. In addition, the work measures for the high performance bonus include both working adults who leave TANF and those who remain on TANF. This dual focus reduces the impact of state program design and payment standards on state performance.

Finally, because the measures are based on the performance of all cash assistance recipients, the likelihood of "creaming," i.e., serving only the most employable welfare recipients, is reduced. The TANF participation rates, which also require participation by a broad segment of the welfare population, also serve to counterbalance the potential creaming effect of the measures.

The Use of Outcome Measures in the Workforce Development System

The Workforce Investment Act (WIA) program and its predecessor, the Job Training Partnership Act (JTPA) program, administered by the U.S. Department of Labor (DOL), provide a range of employment-related services to disadvantaged individuals, including adults, welfare recipients, and youth. The JTPA program, in particular, had extensive experience using outcome-based performance measures to gauge the success of its employment and training programs. Because of this longer experience, more studies have been conducted on the experiences and effects of the JTPA performance measurement system than on performance measurement within welfare programs. This section describes the JTPA and WIA performance measurement systems, as well as findings on the effectiveness of these systems.

The JTPA Program

Several studies have examined the experience of the JTPA program in developing and using outcome-based performance measures (Bartik, 1994; Barnow, 1999; Dickinson and West, 1988; Zornitsky and Rubin, 1988). When JTPA was enacted in 1982, the legislation included specific requirements for outcome-based performance standards. As described by Barnow (1999), the JTPA system had two primary goals: to monitor how well the state and local levels of government were performing in achieving the goals and objectives of the law, and to improve performance by giving program operators incentives to achieve these goals and objectives.

Under JTPA, the U.S. Department of Labor was responsible for determining the performance measures for the local Service Delivery Areas (SDAs), the entities that operated the program at the local level. The primary role of states was to decide how bonus money should be distributed among the SDAs and how any performance-based sanctions should be imposed. In addition to the federally set performance measures, states could propose supplementary performance measures to be used for allocating bonuses.

The JTPA performance measurement system had four core measures for adults and relied on survey data collected from program participants to calculate these measures. (Administrative data were used to compute performance on two youth measures.) SDAs were expected to meet or exceed performance standards, specific thresholds set at the federal level. Based on the most recent program experience of all SDAs, DOL set the standards for the core performance measures at levels that 75 percent of SDAs would be expected to exceed. The measures for adults are listed below; the national standards for 1996/97 are noted in parentheses (Barnow, 1999):

  1. The adult follow-up employment rate, defined as the proportion of adult respondents who were employed at least 20 hours per week during the 13th week after termination (59 percent);
  2. The adult follow-up weekly earnings, defined as the average weekly earnings for all adults who were employed for at least 20 hours per week during the 13th week after termination ($281);
  3. The welfare adult follow-up employment rate, defined in the same manner as the adult follow-up employment rate but for adult welfare recipients only (50 percent);
  4. The welfare adult follow-up weekly earnings, defined in the same manner as the adult follow-up weekly earnings but for adult welfare recipients only ($244).
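DOL's approach of setting each standard so that roughly 75 percent of SDAs would be expected to exceed it amounts to choosing approximately the 25th percentile of recent SDA performance. The following is a minimal sketch under that interpretation; the SDA rates are hypothetical.

```python
# Illustrative sketch: set a performance standard at the level that
# roughly 75 percent of SDAs would be expected to exceed, i.e. about
# the 25th percentile of recent SDA outcomes. SDA rates are hypothetical.
import statistics

def set_standard(sda_rates, exceed_share=0.75):
    """Return the threshold exceeded by roughly `exceed_share` of SDAs."""
    pctiles = statistics.quantiles(sda_rates, n=100)
    # pctiles[k-1] is the k-th percentile; here the 25th percentile
    return pctiles[int((1 - exceed_share) * 100) - 1]

# Hypothetical follow-up employment rates for eight SDAs
rates = [0.40, 0.45, 0.50, 0.55, 0.60, 0.62, 0.65, 0.70]
standard = set_standard(rates)
share_exceeding = sum(r >= standard for r in rates) / len(rates)
print(round(standard, 3), share_exceeding)
```

In practice the standards were also adjusted for local conditions (as discussed below), so the percentile cut was a starting point rather than the final threshold.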

The JTPA performance measurement system included both monetary rewards and programmatic sanctions for SDAs that exceeded or failed to meet the performance standards. States were given control over which SDAs received positive and negative incentives. Up to five percent of JTPA funds were set aside for states to reward SDAs that exceeded the performance standards. SDAs that exceeded the standards could receive additional funding, and activities undertaken with those funds could be exempted from performance standards. Thus, good performance gave SDAs more flexibility to try new approaches or to serve more at-risk groups (Barnow, 1999). On the negative side, programs that failed to meet the standards set for them in two consecutive years were subject to reorganization by the Governor (meaning the program could be restructured or restaffed).

In its early years of implementation, the JTPA performance measurement system was criticized for promoting creaming and other unintended consequences (Barnow, 1992; Bartik, 1994; Zornitsky and Rubin, 1988). Unlike TANF, JTPA gave each SDA some control over who was enrolled in program services, and participation in the program was voluntary; this created a stronger potential for creaming. Studies found that while creaming tendencies were not universal across all SDAs, the standards did result in a focus on the less disadvantaged in some localities. Dickinson and West (1988) found that the JTPA performance standards did not prevent SDAs with a strong commitment to serving hard-to-serve groups from targeting and serving those groups. In addition, Heckman et al. (1996) found that JTPA case workers accepted the least employable applicants into the program in spite of their effect on performance standards, in part because the case workers preferred to assist the most disadvantaged clients. However, Dickinson and West (1988) also found that the SDAs with the strongest focus on meeting the standards were less likely to serve disadvantaged groups.

In addition to enrolling the most advantaged among those eligible for program services, the JTPA system was also thought to encourage SDAs to offer low-cost services, such as job search assistance, rather than more intensive services such as long-term training (early JTPA performance measures included program costs per terminee). Concerns were raised because more intensive and longer-term training was believed to have a greater impact on earnings in the long run (Barnow, 1999).

In response to these issues, the 1992 amendments to JTPA required states to adjust their performance standards to reflect differences in economic conditions and in the demographic characteristics of the program participants in each SDA (this had previously been at the discretion of the state). States were allowed to use DOL adjustment factors or an alternative procedure approved by DOL.(5)  In addition to providing a mechanism to level the playing field, it was intended that these adjustments would give states incentives to serve more disadvantaged individuals (Barnow, 1999). The amendments also prohibited any performance measures based on costs.

To some extent these efforts appear to have mitigated creaming in the JTPA program. While local SDAs did not always understand how the adjustment model would affect their performance on the measures (Barnow, 1992; Zornitsky and Rubin, 1988), Dickinson and West (1988) found, in a study done before the adjustment model was mandatory, that the SDAs that did use the model significantly increased the percentage of disadvantaged groups served.

Overall, the effect of the outcome-based standards on program performance in JTPA is mixed. As noted above, Barnow (1999) did not find a strong link between program effectiveness and performance on the JTPA standards. In addition, there appears to be considerable variation in the extent to which the performance standards influenced the local SDAs. Dickinson and West (1988) found, for example, that about 42 percent of the SDAs they studied tried to maximize their measured performance, one-fourth tried only to exceed their standards slightly, and about one-third tried merely to meet their standards in order to avoid program sanctions.

The variation in how SDAs responded to the performance standards may be due in part to the decentralized nature of the JTPA performance system, under which SDAs faced differing financial incentives: some states rewarded only the best performers, while others distributed funds broadly among all SDAs that met or exceeded standards (Barnow, 1999). In addition, because reorganization was such an extreme measure, states often used their discretionary authority to modify the standards so that poor performers would not fail in two consecutive years (Barnow, 1999). The fact that few SDAs actually faced the most severe penalties could also have influenced their response to the measures.

Workforce Investment Act

Building on the system developed under its predecessor, the WIA statute also places a strong emphasis on outcome-based performance standards. WIA requires that a comprehensive performance accountability system be developed with the following components: a focus on results defined by "core indicators" of performance; measures of "customer satisfaction" with programs and services; a strong emphasis on the continuous improvement of services; annual performance levels and improvement plans developed during negotiations with federal, state, and local partners; and awards and sanctions based on state and local performance.

The WIA performance system continues some aspects of the JTPA system, but with some critical differences (DOL, 1999(a); DOL, 1999(b); DOL, 2000). Table 3 compares the performance measures used under the two systems. The performance measures for adults include: entry into unsubsidized employment; retention in unsubsidized employment six months after entry into employment; earnings gains in unsubsidized employment six months after job entry; and attainment of a recognized credential related to the achievement of educational or occupational skills, by those who enter unsubsidized employment. Customer satisfaction is measured based on the responses of both program participants and employers. States may also develop additional indicators of performance. Unlike JTPA, which relied on a survey of program participants, WIA requires states to use quarterly wage records (UI data) to compute performance on employment-related measures, in part because of the lower cost of data collection.

Table 3
Comparison of JTPA and WIA Performance Standards
JTPA Performance Measures
  • Percent of adults employed for at least 20 hours per week 13 weeks after program exit
  • Average weekly earnings for those employed at least 20 hours per week 13 weeks after program exit
  • Percent of welfare recipients employed for at least 20 hours per week 13 weeks after program exit
  • Average weekly earnings for welfare recipients employed at least 20 hours per week 13 weeks after program exit

WIA Performance Measures
  • Percent who enter unsubsidized employment
  • Retention rate in unsubsidized employment six months after entry into employment
  • Earnings gains in unsubsidized employment six months after job entry
  • Attainment of a recognized credential related to the achievement of educational or occupational skills, by those who enter unsubsidized employment
  • "Customer" (i.e., program participant and employer) satisfaction

WIA requires that the expected levels of performance on each core indicator be negotiated between the Department of Labor and individual states, an approach that differs from the SDA-level standards used under JTPA. The agreed-upon level of performance for each state must reflect how it compares with other states (taking into account differences in economic conditions, participant characteristics, and the proposed service mix and strategies). Each local workforce investment area can then negotiate with the state and reach agreement on the local level of performance expected on each core indicator, taking similar factors into account.

Like JTPA, WIA has an incentive system with both rewards and sanctions. If a state fails to meet the adjusted levels of performance in two consecutive years, its allocation can be reduced by up to five percent. The Department of Labor is required to award an incentive grant to each state that exceeds its performance levels for WIA (as well as those required under the Vocational and Applied Technology Education Act (Perkins Act)). States must set aside part of their allocation to provide incentive grants (or bonuses) to localities, at the discretion of the Governor. Localities that fail to meet the core indicators of performance for two consecutive years may be required to reorganize.

Because the population that can be served under WIA is broader than that served under JTPA, WIA does not require states to use a statistical model to adjust their performance standards. Instead, WIA levels the playing field by providing for negotiated performance standards at both the state and local levels. Among the factors that must be considered in the negotiation process is how the levels compare to those of other state or local programs, taking into account economic and demographic characteristics and service design. (Other factors include promoting continuous improvement in the performance measures and attaining a high level of customer satisfaction.) Statistical analyses may be taken into account as part of these negotiations. In addition, by including a relatively broad range of measures to gauge program performance, including customer satisfaction and credential attainment, and by allowing states to add measures, the WIA system makes some accommodation for programs with different goals.

Overall, the welfare and workforce development performance measurement systems have some common elements. The most striking similarity is the type of core measures used: both systems use measures based on the job placement rate, earnings progression, and job retention (at least for a preliminary period under TANF). Both systems also rely primarily on a bonus system for rewarding states on the selected outcome-based performance measures.

However, there are several key distinctions between the two systems. First, TANF combines bonuses for achievement on outcome-based performance measures with penalties based on process measures, such as the work participation rate requirements, whereas WIA links financial penalties as well as bonuses to the performance measures themselves. Second, the WIA system relies on standards that are negotiated at the federal and state levels, and the WIA/JTPA system allows adjustments to performance expectations based on economic conditions and demographic characteristics; in contrast, the welfare system relies on overall rankings of states and measures of improvement. Finally, compared to TANF, the workforce development system is more uniformly decentralized. Under WIA, awards and sanctions apply to both states and localities, and the state plays a major role in how funds are distributed within the state. Under TANF, states have the discretion to decide whether funds should be distributed to local agencies at all.