Performance Improvement 1996. Chapter I. The Evaluation Program in the U.S. Department of Health and Human Services

02/01/1996

The mission of the U.S. Department of Health and Human Services (HHS) is to protect and promote the health and social and economic well-being of all Americans and, in particular, those least able to help themselves--children, the elderly, persons with disabilities, and the disadvantaged--by helping them and their families develop and maintain healthful, productive, and independent lives. Accomplishing this mission through program activities and evaluating their performance is the task of the following HHS agencies and offices:

  • Administration for Children and Families (ACF).
  • Administration on Aging (AoA).
  • Agency for Health Care Policy and Research (AHCPR).
  • Agency for Toxic Substances and Disease Registry (ATSDR).
  • Centers for Disease Control and Prevention (CDC).
  • Food and Drug Administration (FDA).
  • Health Care Financing Administration (HCFA).
  • Health Resources and Services Administration (HRSA).
  • Indian Health Service (IHS).
  • National Institutes of Health (NIH).
  • Office of the Secretary (OS).
  • Substance Abuse and Mental Health Services Administration (SAMHSA).

The Assistant Secretary for Planning and Evaluation (ASPE), located in OS, coordinates evaluation activities throughout HHS.

Evaluation plays an integral role in carrying out the HHS mission by assessing various aspects of program performance of the HHS agencies and by identifying means of improving that performance. The HHS evaluation function has three goals:

  1. To provide information on HHS programs that helps government officials and members of Congress make decisions related to program, policy, budget, and strategic planning.
  2. To help HHS managers improve program operations and performance.
  3. To disseminate HHS evaluations--study results and methodological tools--that are useful to the larger health and human services community of State and local health and human services officials, researchers, advocates, and practitioners for improving the performance of their programs.

The last goal is particularly important to HHS. The Department believes it has an obligation to foster the development of new knowledge about the effectiveness of health and human services programs, interventions, and evaluation tools for use by the larger health and human services community. Although the findings and recommendations of HHS evaluations are usually used first by the Administration and Congress, they can also be applied by others in the research and practice communities to improve the performance of programs at the State and community levels. The purpose of this report is to disseminate information about recent HHS evaluations widely and to help ensure that this potential for wider application is realized.

This chapter describes the organization and operation of evaluation at HHS. It first provides an overview of the kinds of evaluation activities supported by HHS agencies and then describes the funding mechanisms used to support them. It details HHS evaluation management, including planning procedures, project management, quality assurance, dissemination of reports, and effective uses of evaluation results. The chapter concludes with a discussion of future directions for evaluation at HHS.

HHS Evaluation Activities

The evaluation activities sponsored by HHS and described in this report assess program performance (efficiency, effectiveness, responsiveness), analyze results on the basis of those assessments, and use the resulting information in policymaking and program management. These activities are diverse and include the full spectrum of evaluation methodologies developed over the last quarter century. The classification of HHS evaluation activities presented in figure I-1 summarizes that diversity.

HHS evaluation projects typically fall into a combination of these categories. For example, comprehensive HHS evaluations generally examine both process and outcome or impact. Knowing only whether goals and objectives are achieved is insufficient without also knowing how well the program was implemented and whether goals and objectives were appropriate in the first place. Similarly, evaluation feasibility and design activities generally represent the crucial first phase of major HHS process and outcome/impact evaluations.

Evaluation Funding

Evaluation activities of the various HHS agencies are largely supported through two funding mechanisms: (1) direct use of program funds and (2) special use of legislative set-aside authorities for evaluation. The first is a common mechanism by which program managers have discretionary authority to use appropriated program funds to support contracts to design and implement evaluations and analyze the resulting data. In some cases, a program's legislative authority calls for a special mandated evaluation, and the program funds are used directly to support the evaluation.

Figure I-1. Range of HHS Evaluation Activities

Evaluation projects

  1. Outcome evaluations: assessing the immediate or intermediate effects of a program with respect to the stated goals or objectives.
  2. Impact evaluations: assessing the broader results, intended or unintended, of a program on populations or institutions involved.
  3. Implementation or process evaluations: assessing the nature of program inputs and outputs and their relationship to stated goals and objectives.
  4. Policy assessments: examining health policies with respect to their development, implementation, or the impact on public health or program activity.
  5. Cost-benefit or cost-effectiveness analyses: developing methodology and its application to assess the relationship of program results to program costs (direct and indirect), often in comparison with alternative programs.
  6. Survey data analyses: evaluating the results of HHS programs or policies by conducting or analyzing data obtained from surveys.
  7. Performance measurement and data systems: identifying and testing the validity and reliability of process, output, and outcome indicators to measure the performance of programs, and developing data systems that support implementation of the Government Performance and Results Act (GPRA).
  8. Simulations and models: using computer simulations and modeling techniques to analyze the impact of policy changes on service delivery systems and beneficiaries.
  9. Management studies: examining the effectiveness or efficiency of the administration or operation of HHS programs and offices.
  10. Evaluation syntheses: integrating the results from multiple independent evaluation studies within a defined program or policy area in a fashion that improves the accessibility and application of those results.

Methodology projects

  1. Evaluation feasibility studies: assessing the clarity and importance of program goals and objectives, the consensus of program stakeholders on the potential use of evaluation information, and the availability of relevant performance data before committing to a full-scale program evaluation.
  2. Evaluation design projects: procuring assistance in the development of an evaluation design, measurement tools, or analytic models in preparation for full implementation of an evaluation.
  3. Instrument development projects: developing evaluation instruments (design, measurement, or analytic) for a specific HHS program or for general use by the health and human services community.

Evaluation support activities

  1. Evaluation technical assistance: providing assistance to HHS program managers or office directors on any aspect of evaluation planning; project design, implementation, and analysis; or use of results.
  2. Evaluation dissemination: identifying target audiences and mechanisms to inform program constituencies and evaluation stakeholders about evaluation results.
  3. Evaluation training/conferences: maintaining the professional skills and expertise of evaluation staff through training opportunities, as well as promoting the dissemination of HHS evaluations through conference symposia.

The second mechanism for evaluation funding is legislative set-aside authorities permitting the Secretary of HHS to use a proportion of overall program funds for evaluation purposes. The largest such set-aside authority is the one established for evaluations conducted by several agencies of the U.S. Public Health Service (AHCPR, CDC, HRSA, NIH, and SAMHSA), ASPE, and the Office of Public Health and Science (OPHS) in the Office of the Secretary. It is called the 1 percent evaluation set-aside legislative authority, provided in Section 241 of the Public Health Service (PHS) Act. This authority was established in 1970, when Congress amended the Act to permit the HHS Secretary to use up to 1 percent of appropriated funds to evaluate authorized programs. Section 241 limits the base from which 1 percent of appropriated funds can be reserved to funds appropriated for programs authorized by the PHS Act. This limitation excludes all funds appropriated for FDA,1 IHS,1 and certain other programs that are managed by PHS agencies but not authorized by the Act (e.g., HRSA's Maternal and Child Health Block Grant and CDC's National Institute for Occupational Safety and Health).

In fiscal 1995, HHS used more than $41 million in set-aside funds to conduct evaluation activities. These resources amount to approximately two-tenths of 1 percent of the total appropriated for programs authorized by the Act ($18 billion). An additional $46 million in set-aside funds was earmarked by Congress for use by CDC's National Center for Health Statistics and AHCPR in those agencies' appropriations.2

In fiscal 1996, HHS estimates it will use approximately $33.5 million in PHS evaluation set-aside funds to continue current evaluation activities and to initiate new evaluation projects. This amount is somewhat lower than the comparable fiscal 1995 figure. However, $100.2 million in set-aside funds was earmarked by Congress for CDC and AHCPR, as noted above, a substantial increase over past years. Table I-1 provides a breakdown of the estimates for fiscal 1996 and the actual usage for fiscal 1995 by PHS agencies and the Office of the Secretary.

Evaluation Management

The management of HHS evaluations, carried out on a regular basis by HHS agencies and offices and coordinated by ASPE, involves these five basic functions:

  1. Evaluation planning and coordination.
  2. Project management.
  3. Quality assurance.
  4. Dissemination of evaluation reports.
  5. Assurance of effective use of evaluation results.

A description of each function in general terms follows. Additional information on the evaluation functions of the individual HHS agencies, ASPE, and OPHS is found in chapter III.

Table I-1. Agency Use of Evaluation Set-Aside Funds, in Thousands of Dollars

                            FY 1995     FY 1996
Agency evaluation use:
    AHCPR                      $450        $115
    CDC                       2,000       2,000
    HRSA                      7,114       6,677
    NIH                       4,510       4,510
    SAMHSA                    1,978         996
    ASPE                     15,500      15,500
    OPHS1                     9,525       3,852
Total use                   $41,077     $33,650

1. OASH in fiscal 1995.
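
The following minimal sketch (in Python) simply recomputes the Table I-1 totals and two figures cited in the text; the $18 billion PHS Act base is the rounded amount quoted above, not an exact appropriation, and the fiscal 1996 earmark amounts are taken from note 2.

# Recompute Table I-1 totals and related figures cited in the text.
# Amounts in the table are in thousands of dollars; the $18 billion
# base for PHS Act programs is the approximate figure quoted above.

table_i_1 = {
    "AHCPR":  (450,    115),
    "CDC":    (2000,   2000),
    "HRSA":   (7114,   6677),
    "NIH":    (4510,   4510),
    "SAMHSA": (1978,   996),
    "ASPE":   (15500,  15500),
    "OPHS":   (9525,   3852),  # reported under OASH in fiscal 1995
}

total_fy1995 = sum(fy95 for fy95, _ in table_i_1.values())
total_fy1996 = sum(fy96 for _, fy96 in table_i_1.values())
print(f"FY 1995 total use: ${total_fy1995:,} thousand")   # $41,077 thousand
print(f"FY 1996 total use: ${total_fy1996:,} thousand")   # $33,650 thousand

# Share of the roughly $18 billion appropriated for PHS Act programs
# that was used for set-aside evaluations in fiscal 1995.
share_fy1995 = (total_fy1995 * 1000) / 18_000_000_000
print(f"FY 1995 share of PHS Act base: {share_fy1995:.2%}")  # about 0.23 percent

# Fiscal 1996 earmarks to CDC and AHCPR (see note 2), cited as $100.2 million.
earmarks_fy1996 = 40_063_000 + 60_124_000
print(f"FY 1996 CDC and AHCPR earmarks: ${earmarks_fy1996:,}")  # $100,187,000

Running the sketch reproduces the $41,077 thousand and $33,650 thousand totals shown in the table, the roughly two-tenths of 1 percent share cited earlier, and the approximately $100.2 million in fiscal 1996 earmarks.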

Evaluation Planning

HHS agencies, ASPE, the Office of the Inspector General (OIG), and OPHS develop evaluation plans annually in concert with HHS's program planning, legislative development, and budgeting cycles. ASPE coordinates plan development and, before the start of each fiscal year, issues evaluation guidance that signals HHS program priorities for evaluation. Typically, the priorities include evaluations of Secretarial program or policy initiatives, new programs, programs undergoing major change, programs that are candidates for reauthorization, and programs for which key budget decisions are anticipated.

Recently, emphasis has been given to evaluations that support strategic planning goals and objectives. Congress has asked HHS to coordinate all of its research, demonstration, and evaluation programs to ensure that the results of these projects address HHS's program goals and objectives. ASPE and the Assistant Secretary for Management and Budget are now working with HHS agencies to provide Congress with annual research, demonstration, and evaluation budget plans, beginning with the fiscal 1996 President's budget, that outline each agency's research, demonstration, and evaluation priorities as they relate to overall HHS program goals and objectives.

Project Management

The execution of evaluation at HHS is principally decentralized--the various HHS agencies, OIG, and ASPE are all responsible for executing annual evaluation plans, developing evaluation contracts, and disseminating and applying evaluation results. Even within agencies, while there is some oversight responsibility and execution capability in the Office of the Director or Administrator, the various subunits (centers, institutes, bureaus) conduct much of the day-to-day evaluation activity.

OIG performs independent evaluations through its Office of Evaluation and Inspections (OEI). OEI's mission is to improve HHS programs by conducting inspections that provide timely, useful, and reliable information and advice to decisionmakers. This information (findings of deficiencies or vulnerabilities and recommendations for corrective action) is usually disseminated through inspection reports issued by the Inspector General. Since its inception in April 1985, OEI has produced more than 600 inspection reports. A summary of individual inspection reports and other OIG reports can be viewed on the Internet (http://www.sbaonline.sba.gov/ignet). OEI also provides technical assistance to HHS agencies in conducting their evaluations. A recent example is OEI's joint work with AoA to provide training and technical assistance and to develop an action plan addressing weaknesses in AoA's stewardship of the Older Americans Act.

Quality Assurance

Most evaluation projects are developed at the program level, and the initial review is conducted by a committee of agency-level policy and planning staff members. Before a project is approved, it is reviewed for technical quality, generally by a second staff committee that is skilled in evaluation methodology. Technical review committees follow a set of criteria for quality evaluation practice established by each agency. Some HHS agencies also have external evaluation review committees composed of evaluation researchers and policy experts from universities and research centers. More details on the quality assurance procedures for the various agencies, ASPE, and OPHS are presented in chapter III.

Dissemination of Evaluation Reports

Maintaining and sharing information on the various projects conducted by HHS agencies, ASPE, and OPHS is an important component of evaluation management. Project information is continuously reported to the HHS Policy Information Center (PIC)--the departmental evaluation database and library maintained by ASPE. As an information database and library resource, the PIC contains nearly 6,000 completed, ongoing, and planned evaluation and policy research studies conducted by HHS, as well as key studies completed outside HHS by the U.S. General Accounting Office (GAO) and private foundations.

Typically, the results of HHS evaluations are disseminated through targeted distribution of final reports, articles in refereed journals, and presentations at professional meetings and conferences. Although the individual HHS agencies have primary responsibility for disseminating results, a department-wide effort is under way to expand dissemination to the larger research and practice communities through centralized computer communications and publications. First, abstracts of all studies maintained in the PIC database are now accessible through HHS's World Wide Web server (http://www.os.dhhs.gov) on the Internet. From the HHS home page, one can click on "Policy Information" and then on "Research and Data Provided by HHS" to gain access to the PIC database, which provides information on reports available from completed projects as well as the name and telephone number of the HHS official responsible for each project.

Second, HHS is widely distributing copies of its first annual report on evaluation (Performance Improvement 1995: Evaluation Activities of the Public Health Service). The report's theme of performance improvement reflects the numerous changes and initiatives throughout HHS to increase the effectiveness and efficiency of public health programs. As the first report to Congress, it summarizes the findings of PHS evaluations completed during fiscal 1994. Of the approximately $14 billion in the fiscal 1994 budget for program activities, PHS agencies used almost $27 million to conduct evaluations useful for understanding the outcomes and improving the performance of PHS programs. In fiscal 1994, PHS agencies produced 71 evaluation reports and supported more than 180 evaluation projects in progress. The report provides summaries or abstracts of these reports and contacts for further information.

In addition to providing the report to members of Congress, HHS sent copies to State and local health officials, schools of public health, and other national public health research and practice associations. A similar plan has been developed to distribute Performance Improvement 1996: Evaluation Activities of the U.S. Department of Health and Human Services, which contains information on all HHS evaluations completed and in progress during fiscal 1995. These reports are also available for downloading from the previously described HHS home page in three formats: ASCII, HTML, and PDF.

Ensuring Effective Use of Evaluation Results

HHS is committed to ensuring that evaluations yield a high return on the investment of available program funds. In the last decade, HHS evaluations were used primarily by program managers for the internal purposes of improving program operations and efficiency. In the 1990's, however, the need for more outcome and impact evaluations has increased because of fiscal pressures to accomplish more with fewer resources. The stakeholders for HHS evaluations have expanded beyond the boundaries of program management to include decisionmakers at the top levels of government, both the Administration and Congress, and health and human services researchers and practitioners at the State and community levels.

To meet the needs of these expanding stakeholder groups, HHS has encouraged its agencies to give high priority to outcome/impact evaluations, especially for programs that are coming up for reauthorization or are instrumental to strategic planning goals and objectives. The need for this major shift in priorities was documented by GAO in its April 1993 review of the PHS Evaluation Program, focusing on the 1 percent set-aside authority (see Publication No. GAO/PEMD-93-13). GAO recommended that HHS target more of its evaluation resources to outcome/impact evaluations that can be used by Congress and others for program planning, budgeting, and legislative action. In addition, GAO recommended that HHS initiate special projects to synthesize multiple evaluation efforts to better communicate to Congress and others the aggregate lessons learned over the years in a particular program area. Several evaluation syntheses of HHS programs were completed during fiscal 1995 and are reported in chapter III.

Future Directions in HHS Evaluation

In upcoming years, HHS agencies, ASPE, and OPHS will focus their evaluation portfolios on three principal themes: (1) the impact of transformations in health and human services, (2) the development of performance measures, and (3) overall program performance improvement.

Impact of Transformations

In December 1995, the Secretary formed a working group to develop a research strategy to examine the transformations now taking place in health and human services and the impact of those transformations on the well-being of Americans--especially the vulnerable populations that are high priority for HHS programs, such as disadvantaged or low-income children and families, the elderly, racial and ethnic minorities, and individuals with disabilities.

These transformations refer to the nationwide changes in the organization, financing, and availability of health services delivery, including the new managed care arrangements and a growing emphasis on quality of care. Managed care arrangements are affecting virtually every health program funded by HHS. For example, HCFA is granting waivers to States under Section 1115 of the Social Security Act to redesign their Medicaid programs, with most programs having a managed care component. Some of the new evaluation questions being proposed are as follows:

  • Has the Nation's progress toward the Healthy People 2000 Goals been facilitated or slowed by the transformations to date? Can future impacts of the changes be estimated?
  • Are HHS's programs of health care for vulnerable populations performing more or less effectively with respect to these changes?
  • What is the impact of managed care arrangements on the effectiveness of State- and community-level public health programs?

Like health services, human services programs are undergoing transformations in their organization, financing, and availability. In August 1996, new welfare reform legislation was enacted that eliminated the entitlement to cash assistance and replaced it with a fixed block grant to States, placed a 5-year time limit on benefits, imposed strict work requirements on recipients, reduced benefits and services available to legal immigrants, and greatly expanded States' authority over welfare programs. This new legislation raises important evaluation issues, particularly as policy decisions will increasingly be made at the State and local level. Critical questions include the following:

  • How do States organize and implement the new welfare system?
  • What are the effects of the legislation on the well-being of families and children?
  • What approaches are States taking to move families from welfare to economic independence? How effective are these approaches?

HHS's evaluation function has an important role to fulfill in this research strategy. HHS agencies have already initiated evaluation projects that focus directly on these transformations in health and human services. The evaluations of State-specific Medicaid and welfare reform demonstrations are examples. Other projects include ongoing evaluations of the Head Start program; an evaluation of the national welfare-to-work program (Job Opportunities and Basic Skills Training), which examines the effectiveness of different approaches to moving recipients into work and the impact of the program on the well-being of children; and studies of the effectiveness and efficiency of community health centers, alternatives for health care for Native Americans, and the cost and quality of, and access to, mental health services and treatment programs for substance abusers.

These evaluations provide an excellent base on which to build an expanded array of studies related to the role of HHS as both sponsor and provider of services. Future HHS program evaluations offer a valuable opportunity to examine the effects of devolution, considering such questions as whether improvements have been made in efficiency and accountability, and to examine the impact on vulnerable populations.

Development of Performance Measures

The transformations also underscore the need for HHS to expand its leadership role in developing and applying better performance measures regarding health and human services. Stakeholders, from Congress to community leaders, are demanding increased attention to results and the concomitant development of program outcome measures that are meaningful, quantifiable, and reliable. In addition, consensus on performance measurement--at all levels of stakeholders--is likely to be a precondition for effective data sharing across governmental levels and between governments and private sector organizations.

HHS agencies are now engaged in evaluation projects to promote the development and use of performance measures related to health and human services. Recent examples include quality assurance measures within the health care industry and scorecards to help consumers rate health and mental health services.

One of HHS's most ambitious projects to involve States, communities, and service recipients in identifying program performance measures is called Performance Partnerships. The initiative, which has involved consultations with more than 1,400 stakeholders nationwide, will identify performance measures for program activities within SAMHSA and CDC. The measures will be used as management tools at the Federal, State, and local levels to clarify program goals and objectives and to document the performance of specific programs. It is the most comprehensive effort yet mounted to involve these groups fully in identifying program measures.

HHS will also invest its evaluation resources in performance indicators to support implementation of GPRA. The evaluation strategies of the HHS agencies, described in chapter III, include among their priorities projects that examine program objectives and develop useful measures of program outputs and outcomes. GPRA offers HHS agencies an opportunity to develop performance measurement systems that will eventually link program evaluation activities to budgeting. HHS's evaluation set-aside authorities, such as the 1 percent authority for some PHS agencies, are an important resource to help program managers identify performance objectives and test the validity and reliability of indicators to measure progress.

For example, HRSA has completed a major project to assess its capacity to develop and implement a performance measurement and management system and is conducting followup activities. HRSA's objective is to document program inputs, processes, and outputs and to analyze the link between key program elements and outcomes for the target populations and community health objectives. This investment of evaluation funds should yield a high return on GPRA's objective of having performance measurement systems in place when agency strategic planning and performance budgeting systems become operational in fiscal 1999. HRSA's experience in developing performance measurement for GPRA has potential as a model for other agencies.

Performance Improvement

The Department also encourages evaluations initiated by program managers to improve the performance of HHS programs, such as the use of customer surveys to measure satisfaction with program services or outputs. These evaluations are designed to ensure that program operations are efficient and effective. They are also an essential resource for HHS's Continuous Improvement Program and will be used to support the development and operation of information systems and special studies that enable program managers to measure customer satisfaction with HHS services.

Several projects illustrate the Department's evaluation priority of continuous improvement of services. CDC is working with the States to examine the efficiency and impact of two disease surveillance systems. HRSA will use evaluation funds to develop performance measures for grantee assessment of program outcomes in projects funded by the Ryan White Comprehensive AIDS Resources Emergency Act. NIH will evaluate the National Research Service Award training program to determine whether its objectives are being met. SAMHSA will develop performance measures to monitor the generation of new knowledge from its demonstration programs.

Notes

1. FDA programs are principally authorized by legislation other than the PHS Act, specifically the Federal Food, Drug, and Cosmetic Act, and are appropriated under the Agriculture, Rural Development, Food and Drug Administration, and Related Agencies Appropriations Act. IHS programs are authorized under the Indian Health Care Improvement Act and the Indian Self-Determination Act and are appropriated under the Department of the Interior and Related Agencies Appropriations Act.

2. In the past, the 1 percent congressional earmarks for AHCPR and CDC have been used to support national health surveys. Within AHCPR, these funds have supported the Medical Expenditure Panel Surveys (MEPS), formerly called the National Medical Expenditure Survey. MEPS is the Nation's only representative survey of the use of and payment for health care services. These surveys provide the data needed to develop economic models for national and regional estimates of the impact of changes in financing, coverage, and reimbursement policy. Because of the increase in the 1 percent earmarks in fiscal 1996 appropriations, AHCPR used these funds to support studies in the areas of health care systems, cost and access, and outcomes and effectiveness, in addition to MEPS. The congressional earmark of 1 percent funds to CDC supports the programs of the National Center for Health Statistics, consisting mainly of national surveys and data systems designed to monitor and evaluate the health of the American people, their use of health services, and other related issues. For fiscal 1996, CDC received an earmark of $40,063,000 from 1 percent evaluation funds, and AHCPR received an earmark of $60,124,000.