Over the years, a range of terms has been used to describe the different types of performance measures used to gauge program success. Some studies have tried to achieve consensus on definitions of key terms in performance measurement, particularly for use in welfare-to-work and other employment programs (Brown and Corbett, 1997; Hatry, 1999; Martin and Kettner, 1996; Midwest Welfare Peer Assistance Network, 1999; U.S. Department of Health and Human Services, 1994). In particular, these studies draw a crucial distinction between process measures and outcome measures.
- Process measures. Process measures address the administrative or operational activities of a program. They typically reflect the "means" to an end result rather than the goal itself. Examples of process measures include participation rates reflecting the type and level of service received through the program, and the percentage of applications for assistance that are acted upon in a timely manner.
- Outcome measures. Outcome measures focus on the goals that the program hopes to achieve. In most cases, they capture outcomes for the group of individuals involved in the program. In welfare-to-work programs, key outcome measures typically include job placement rates, employment retention rates, and wage rates.
While this distinction is conceptually important, in practice, there is often some uncertainty about whether a particular measure should be considered a process or an outcome measure. This is not merely the result of confusing terminology, but reflects the reality that there is often a continuum between pure process measures and pure outcome measures. For example, in a program for teen parents, the number of people served and the cost per participant are process measures. Depending on the specific goals of the program, outcome measures might include the fraction of participants who have not had a subsequent child two years later, or the fraction of participants who are employed. Measures that fall in the middle of the continuum might include the fraction of participants who attend high school, the fraction of participants who have received a certificate of General Educational Development (GED), or the fraction who meet the program's internal definition of successful completion. Such intermediate goals are sometimes referred to as "interim outcome measures" because they represent an important milestone even though they are not the ultimate goal of the program. Other sources refer to such measures as "outputs."
Performance measurement, or the measurement of the results (or outcomes) and efficiency of services or programs (Hatry, 1999), has been the subject of growing interest at all levels of government in recent years. In particular, there has been a movement toward increased use of outcome measures rather than process measures. A number of broad trends have contributed to this growth.
In response to critics who have expressed skepticism about the value of government services, many providers of government services have turned to outcome-based measures in order to demonstrate the utility of their efforts. Specifically, this new accountability focus requires providers to show not only that they have spent public money on the activities for which it was designated, and not only that they have been efficient in serving as many people as possible with the available funds (process measures), but also that the statutory goals of the programs are being met and that recipients are better off as a result (outcome measures).
To some degree, this trend represents the spread of techniques used in the private sector - including a focus on measurement of product quality and customer satisfaction and on the establishment of numerical targets for improvement. Similar techniques had also been used in the Defense Department since the 1950s to compare the expected costs and effectiveness of various proposed weapons systems. However, such techniques had not often been applied to the provision of social services. Many providers of social services had only rudimentary capacities to track what happened to the recipients of their services.
With the enactment of the Government Performance and Results Act (GPRA) in 1993, Congress required all federal agencies to identify the goals of their programs and to report annually on their progress in achieving these goals. GPRA seeks to shift the focus of federal management and decisionmaking from a preoccupation with process measures such as the number of tasks completed or units of service provided to a more direct consideration of the results or outcomes of programs - that is, the real differences the tasks or services provided make in people's lives (Hinchman, 1997).
The recent devolution of policy and program design and funding to the state and local level has also increased the attention paid to performance measurement. In a number of areas, including human services policy, federal policy makers have created block grants, which give states great flexibility in their use of funds within broad program parameters. In such an environment, the most logical way to hold states accountable for their use of public funds is to monitor program outcomes.
The increased emphasis on holding public agencies accountable for the attainment of program goals and the outcomes of their clients is also reflected in the numerous state initiatives to develop and use performance measures. In some states, the welfare agencies are participating in comprehensive performance measurement systems which focus on establishing indicators or benchmarks of progress toward goals across programs and agencies. In other states, performance measures are used internally by agencies to monitor the performance and accountability of local offices or of contractors.
However, the shift toward use of outcome-based performance measures has occasioned some controversy, particularly when financial consequences have been attached to agencies' success or failure in achieving specified targets or standards. The major reason is that even the most effective programs are only one element among many that affect participants' outcomes. As Forsythe (2000) notes, "almost by definition, high-level outcome measures track social changes that are influenced by factors that are not under the direct control of operating agencies," such as the overall state of the economy, the underlying social and demographic characteristics of participants, and societal attitudes about the roles of men and women. Most program administrators are understandably leery of being measured - and possibly rewarded or penalized - on results that they cannot fully control. Yet the only measures that are totally under the control of program operators are process measures, such as the number of clients served. This issue is discussed in more detail below.