The Department of Health and Human Services (HHS) funds or conducts many evaluations: some required by statute, others considered essential by the President, the Department, or an individual agency. Evaluation complements other core Federal management responsibilities: strategic planning, policy and budget development, and program operations (Figure 1).
As currently listed in the Catalog of Federal Domestic Assistance (www.cfda.gov), the Department is responsible for more than 330 programs. In FY 2007, the HHS budget included $657 billion for these programs. Of this amount, Congress directed more than $800 million for evaluation and related activities through the Public Health Service Act set-aside provision [Section 241(a) of the Act]. Successful evaluation increases the likelihood of effective delivery of public services through these programs and ensures that programs are efficient, targeted to their intended clients, and well managed. Additional funds, through general and directed authorities, are also available for research, demonstrations, and evaluations by agencies of HHS.
Role of Evaluation
Programs need to provide good results for the individuals served, spend tax dollars wisely, and achieve the goals intended by Congress and the President. This statutorily required report to Congress on Performance Improvement continues the effort to provide a strategic and analytic presentation of evaluation studies. With the implementation of a unified Strategic Plan, as required by the Government Performance and Results Act of 1993, as further expressed in the Program Assessment Rating Tool (PART) carried out by the Office of Management and Budget on behalf of the President, and as further specified in the President's November 2007 Executive Order 13450, Improving Government Program Performance, the Department recognizes its responsibility both to evaluate programs and to measure their performance. These assessment activities, like the public programs they observe, must be carried out so as to assure that funds are targeted to the core goals and objectives of both the Congress and the Executive branch. This report reflects the important role that evaluations, and increasingly performance measurement, play in testing, weighing, measuring, and judging the success of management performance, program outputs, and social outcomes. The information they provide enables managers and policy makers to identify where changes may be needed in existing programs and supports revisions to the policies, regulations, and statutory provisions that define those programs.
HHS evaluations directly support several efforts. Evaluations help government officials and members of the Congress make decisions related to programs, policies, budgets, and strategic planning. Evaluations enable managers to improve their program operations and performance. Evaluation results and methodological tools are useful to the larger health and human services community of state and local officials, researchers, advocates, and practitioners to improve the performance of their programs.
Three Ways to View Types of Evaluation
The classic way to view types of program evaluation is through the categories: process/implementation, experimental impact, non-experimental (or quasi-experimental), cost-benefit analysis, and other outcome studies. A cost-benefit analysis, examining the advantages and costs of one or more program designs, can be carried out before a program has been implemented. During at least the first several months of a program's existence, before there are discernible outcomes to measure, a process or implementation evaluation can be carried out to see whether the program is being set up as intended. Fully experimental evaluations, or random-assignment studies, are sometimes considered the gold standard of evaluation because they include both program and control groups, so the results of the program can be compared against a group intended to be identical in every way except for exposure to the program being tested. Finally, non-experimental or quasi-experimental studies seek natural circumstances that mimic, to some extent, what fully experimental studies create artificially, so that comparisons can be drawn.
Performance measurement differs somewhat from evaluation yet can fully complement it. While performance measurement may use some of the same evaluative tools, its goal is more directed: an evaluation typically tests a hypothesis, whereas performance measurement starts with the goal of measuring observed performance against particular expectations or criteria for success.
Type by Use
A second way of thinking about types of evaluations is to examine how the information is intended to be used. At their best, HHS evaluations assess the performance (efficiency, effectiveness, and responsiveness) of programs or strategies through the analysis of information collected systematically and ethically, and they put the resulting information to effective use in strategic planning, program or policy decision-making, and program improvement. Evaluations serve one or more of the following objectives (Figure 2):
Improve Performance Measurement –– Monitor annual progress in achieving departmental strategic and performance goals. As emphasized in the Program Assessment Rating Tool, we invest evaluation funds to develop and improve performance measurement systems and to improve the quality of the data that support those systems. Performance measurement is a high priority for HHS agencies: the emphasis during development, implementation, and refinement of programs is on results, and specific measurements are required under the Government Performance and Results Act.
Strengthen Program Management and Development –– Address the need of program managers to obtain information or data that will help them effectively design and manage programs more efficiently and ensure successful results. Focus on developmental or operational aspects of program activities and provide understanding of services delivered and populations served.
Assess Environmental Factors –– Seek to understand the forces of change in the health and human services environment that influence the success of our programs. Such understanding allows us to adjust our strategies and continue to deliver effective health and human services.
Enhance Program Effectiveness and Support Policy Analysis –– Determine the impact of HHS programs on achieving intended goals and objectives and examine the impact of alternative policies on the future direction of HHS programs or services.
Basic and Applied Evaluation
A third way of thinking about evaluations––one that cuts across both the "classic" categories and the use-based typology just described––borrows terminology from the way we think about scientific research generally: as either basic or applied.
"Basic" evaluations focus on gathering essential factual data. While surveys may be part of broader evaluations, as stand-alone undertakings they may still represent a basic level of evaluation. An example is SAMHSA's annual surveys to determine the number of individuals entering, leaving, or remaining in mental health and substance abuse treatment centers. Characterizing such activities as basic evaluation is one way of avoiding disagreements among evaluators about how to regard these types of studies. Assessing environmental factors, discussed in the previous section, might be considered a component of "basic" evaluation.
"Applied" evaluations, in this context, could also be called "program" evaluations for they include studies of how well programs function. Applied, or program, evaluations address the full range of issues previously discussed: improving performance measurement, enhancing program effectiveness, and strengthening program management. A full example of an applied evaluation is the national evaluation of the State Children´s Health Insurance Program that sought to determine what happened and discern the benefit contributed by the progam.
Evaluation activities of HHS agencies and offices are supported with both general program funding and with a portion of the funds appropriated under the Public Health Service Act "set-aside" authority.
General Program Funding
Program managers, operating under either discretionary or directed authority, may use program funds to support contracts to design and carry out evaluation studies and analyze evaluation data. In some cases, a program's legislative authority calls for specially mandated evaluations, and program funds are used directly to support these studies. Agencies for which one or both examples of such funding apply include the Administration for Children and Families (ACF) and the Centers for Medicare and Medicaid Services (CMS). Such funds for evaluation are also available to the Administration on Aging.
Public Health Service Act Set-Aside Authority
The Public Health Service Act, Section 241 set-aside authority was originally established in 1970, when Congress amended the Act to permit the HHS Secretary to use up to 1 percent of appropriated funds to evaluate authorized programs. Section 241 limited the base from which funds could be reserved for evaluations to programs authorized by the PHS Act. Excluded were funds appropriated for the Food and Drug Administration, the Indian Health Service, and certain other programs that were managed by PHS agencies but not authorized by the Act (e.g., HRSA's Maternal and Child Health Block Grant and CDC's National Institute for Occupational Safety and Health). In addition, the Secretaries of HHS have exercised their authority to exclude from funds tapped by the set-aside authority the funds spent on CDC's Prevention Block Grant, SAMHSA's Substance Abuse Prevention and Treatment Block Grant, and SAMHSA's Community Mental Health Services Block Grant.
The Revised Continuing Appropriations Resolution, 2007, authorized the Secretary to use up to 2.4 percent of the amounts appropriated for programs authorized by the Public Health Service Act for the evaluation of these programs. For Fiscal Year 2007, the year reflected in the studies reported here, agencies were budgeted a total of $830 million from the set-aside authority:
- Administration for Children and Families (ACF) -- $11 million
- Agency for Healthcare Research and Quality (AHRQ) -- $319 million
- Centers for Disease Control and Prevention (CDC) -- $267 million
- Health Resources and Services Administration (HRSA) -- $28 million
- National Institutes of Health (NIH) -- $24 million
- Substance Abuse and Mental Health Services Administration (SAMHSA) -- $121 million
Three staff components in the Office of the Secretary received a total of $40 million:
- Office of the Assistant Secretary for Planning and Evaluation (ASPE)
- Office of Public Health and Science (OPHS)
- Office of the Assistant Secretary for Resources and Technology (ASRT)
In addition, the Office of the National Coordinator for Health Information Technology (ONC) received $19 million and the Office of the Assistant Secretary for Preparedness and Response (ASPR) received $3 million.
Substantial portions of the above funds are congressionally directed to pay for both general operating expenses and broad research activities.
Most evaluation studies are started in one budget year, carried out in one or more subsequent years, and final reports, marking the completion of each study, may be delivered and available for the public in a third or subsequent year. Therefore, the studies completed in a particular year cannot be equated to the funds appropriated for the same year.
This Performance Improvement 2008 report includes studies funded both through the Public Health set-aside authority and with other appropriated funds.
Management of evaluations carried out by HHS agencies and offices involves: (1) planning and coordination, (2) project oversight, (3) quality assurance, and (4) dissemination of results (Figure 3). A description of each function follows.
Evaluation Planning and Coordination
The Government Performance and Results Act of 1993 (GPRA) requires that the Department establish a new five-year strategic plan every three years. The most recent was prepared last year for 2007-2012. This statute, PART, and the recent Executive Order form an essential basis for evaluation planning. HHS agencies, ASPE, the Office of Inspector General (OIG), and several other offices develop evaluation plans annually in concert with HHS program planning, legislative development, and budgeting cycles. Each agency or office evaluation plan generally states the evaluation priorities or projects under consideration for implementation. Typically, HHS evaluation priorities include congressionally mandated program evaluations, evaluations of Secretarial program or policy initiatives, assessments of new programs and of programs that are candidates for reauthorization, and evaluations that support program performance management and accountability.
HHS evaluation planning activities are coordinated with three department-wide planning initiatives. First, HHS evaluation activities support the Department's strategic planning and performance management activities in several ways. Completed evaluation studies are used in shaping specific HHS strategic goals and objectives. Evaluation findings provide important sources of information and evidence about the success of various HHS programs and policies. The HHS Strategic Plan highlights evaluations that document the efficacy or effectiveness of strategic programs or policies and lists future evaluations that will benefit strategic planning. HHS agencies use findings from their evaluations to support GPRA annual performance reporting to Congress, program budget justifications, and the PART evaluation reporting obligations in their budgets.
Second, Congress requests that HHS coordinate and report to Congress regarding all of its research, demonstration, and evaluation (RD&E) programs to ensure that the results of these projects address HHS program goals and objectives. HHS provides the Congress with a special annual research, demonstration, and evaluation budget plan that coincides with the preparation of the President's fiscal year budget. The plan outlines planned spending on HHS agency research, demonstration, and evaluation priorities as related to the Department's strategic goals and objectives (Figure 4).
Third, as mandated in statute, the Secretary reports to Congress on plans for using PHS evaluation set-aside funds before implementing those plans.
HHS agencies and staff offices execute annual evaluation plans that involve developing evaluation contracts and disseminating and applying evaluation results. All agencies and their subunits (centers, institutes, and bureaus) coordinate with one another on research and evaluation project planning and on the release of final reports that relate to the work of other HHS agencies.
The OIG performs independent evaluations through its Office of Evaluations and Inspections (OEI). OEI´s mission is to improve HHS programs by conducting inspections that provide timely, useful, and reliable information and advice to decision makers. Findings of deficiencies or vulnerabilities and recommendations for corrective action are usually disseminated through inspection reports issued by the Inspector General.
Quality Assurance and Improvement
Most evaluation projects are developed at the program or office level. A committee of agency- or office-level policy and planning staff members may conduct an initial quality review. Before a project is approved, a second committee with expertise in evaluation methodology reviews it for technical quality. Technical review committees generally follow a set of criteria for quality evaluation practice established by each agency. ASPE, for example, has a peer review committee that serves to improve the technical merits of ASPE proposals before final approval. Some HHS agencies have external evaluation review committees composed of evaluation experts from universities and research centers.
Since HHS began reporting to Congress in 1995 on completed evaluations through the Performance Improvement report series, the Department has focused attention on improving the quality of evaluation studies performed. In the past, Evaluation Review Panels, convened periodically, have contributed insights to HHS evaluation officers on the strengths and challenges of ensuring quality evaluation studies. HHS evaluation officers have had opportunities to discuss these strengths and challenges and identify steps to improve agency evaluation projects. A 2008 study currently funded by ASPE is examining how findings from HHS-funded evaluations are used.
Dissemination of Evaluation Reports
Maintaining online electronic report libraries and distributing information on evaluation results is an important component of HHS evaluation management. The Department's information and reports on major evaluations are available through the Web site of the HHS Policy Information Center (PIC), located at: http://aspe.hhs.gov/pic/performance (Appendix E contains additional information about how to access this information). ASPE's PIC Web site offers users an opportunity to search – by key word, selected program, or policy topic – the departmental evaluation report database and electronic report library maintained by ASPE. The PIC contains over 8,500 completed and in-progress evaluation and policy research studies conducted by the Department of Health and Human Services, as well as some studies completed by others outside the Department.
Project officers and other key agency staff submit evaluation information directly online. As a result, there is no delay in making information in the online database available to evaluation peers in other parts of the Department and to the public at large. Researchers can now search to see what studies have been funded and are currently underway that may be relevant to their own research or planning activities. New entries in the online database are intended to be effective, clear summaries answering the basic questions: what was the study, why was it conducted, and what was learned? Through the online database, much of the information about evaluation work underway is available to Congressional and Executive branch staff, and to the public, several months before annual reports such as this one are due to Congress, speeding the dissemination of important factual information about the Department's work. A positive result is a reduced chance of duplication of effort and speedier application of the policy implications of completed evaluation work.
Additionally, the results of HHS evaluations are disseminated through targeted distribution of final reports, articles in refereed journals, and presentations at professional meetings and conferences. Although individual HHS agencies have primary responsibility for disseminating results, ASPE continues its Department-wide efforts to expand dissemination of evaluation results to the larger research and practice communities through email lists, e-newsletters, and publications.
The value of evaluations resides in their use. How were lessons learned applied? Were improvements made in the program? Have the findings informed policy debates? The initial study that ASPE has undertaken begins the process of learning which means are effective for disseminating and encouraging the use of evaluation findings, and what the barriers to increased use are.