Performance Improvement 1995. National Institutes of Health


MISSION: To discover and disseminate new knowledge leading to improved health for all Americans.

NIH Evaluation Program

Evaluation is an integral part of the role of the National Institutes of Health (NIH) in the support of biomedical research, training, and public education. Evaluation studies are undertaken to ensure that NIH meets its specific goals to--

  • support biomedical and behavioral research of the highest quality;

  • inform health researchers, health care providers, industry, and the public of advances and opportunities to improve health;

  • support the training and continued availability of biomedical and behavioral scientists;

  • support the facilities and equipment needed to sustain scientific progress; and

  • manage its resources effectively and efficiently.

A distinguishing feature of the NIH Evaluation Program is the variety of evaluation instruments it employs. The most familiar instrument is a formal evaluation study that examines whether a program has successfully met its objectives. But NIH supports a host of evaluation strategies that go beyond traditional program evaluations. The NIH peer review system is one type of evaluation strategy: research proposals from scientists around the Nation are subjected to a rigorous assessment by fellow scientists, and only the most meritorious proposals receive funding. Other evaluation strategies include national advisory councils, boards of scientific counselors, consensus development conferences, and committees. These groups are charged with assessing a body of research to establish priorities, developing long-range goals and strategies, addressing emerging issues, identifying significant opportunities, assessing needs for new programs and activities, and recommending expansion, realignment, or continuation of ongoing programs.

The reason for the diversity of evaluation instruments lies in the nature of research. Research--especially basic research--depends on pursuing the unknown. The results of a research program and the generation of new knowledge usually cannot be anticipated. Consequently, research does not readily lend itself to the most common type of evaluation, an outcomes evaluation. Programs that provide services or promulgate regulations are most suited to outcomes evaluation because they are intended to achieve explicit, preconceived objectives. NIH attempts to evaluate its research programs with methods suited to the serendipitous nature of the research enterprise.

Another distinguishing feature of the NIH Evaluation Program is its use of the 1 percent set-aside evaluation fund strictly for programs that transcend individual NIH Institutes. The focus on NIH-wide evaluations is a self-imposed policy. NIH relies on its component Institutes, Centers, and Divisions (ICDs) to generate requests for funding of NIH-wide projects from the 1 percent set-aside, in addition to those that are centrally directed. The ICDs also conduct individual evaluations supported by their own program funds.

In June 1991, the Office of the Assistant Secretary for Health (OASH) authorized NIH to approve all set-aside funded evaluations, regardless of budget, while OASH maintained an ex officio presence in the review process. A two-tiered system is used to review project requests for 1 percent set-aside funding. One tier is the Evaluation Policy Oversight Committee (EPOC), and the other is the Technical Merit Review Committee (TMRC). The EPOC includes representatives from the Office of the Director, NIH, and ICD representatives at the level of ICD Director, ICD Deputy Director, or Associate Director of the Office of the Director. The EPOC conducts policy-level concept reviews of proposals for NIH-wide evaluation studies that use set-aside funds, establishes the overall NIH set-aside budget, and oversees the process. EPOC recommendations are approved by the Director, NIH, or a designee before the initiation of any study. The TMRC is responsible for the technical review of submissions and for recommending to the EPOC whether a project fits within departmental guidelines for the set-aside fund.

Evaluations and evaluation priorities pertaining to individual ICDs are shaped by ICD Directors and Deputy Directors. The results help ICDs and the Director, NIH, establish priorities, develop long-range goals and strategies, and review programs in terms of scientific excellence, relevance, cost, and uniqueness.

Summary of FY 1994 Evaluations

The eight evaluations completed in FY 1994 addressed almost all elements of NIH's mission, from research to public education. Four of the eight evaluations are highlighted in chapter II. The evaluations summarized below illustrate the diversity of the NIH Evaluation Program through a commitment to evaluating a body of research that informs public policy, evaluating the Nation's need for scientific manpower, and evaluating public health information.

One of these four evaluations addressed the safety of selected childhood vaccines. This study was mandated by Congress under Section 313 of Public Law 99-660 to yield essential information that would help the Public Health Service draft recommendations about the use of, and compensation for adverse reactions to, vaccines against tetanus, diphtheria, measles, mumps, polio, Haemophilus influenzae type b, and hepatitis B. The study entailed an expert review by a committee of the Institute of Medicine. The committee examined all relevant medical and scientific literature on the potentially serious risks associated with currently licensed childhood vaccines. Its findings about each vaccine have been incorporated into brochures given to parents before children are vaccinated and into proposals to revise the list of compensable injuries presumed to have been caused by certain vaccines.

Another evaluation was on national needs for biomedical and behavioral research personnel. This evaluation addressed the Nation's future need for biomedical and behavioral research scientists and the contribution of NIH training grants called National Research Service Awards (NRSAs). The study was the 10th in a continuing series of reports to NIH and the U.S. Congress on this topic.

The National Research Service Award Act of 1974 consolidated all previous training authorities into the NRSA program. The Act authorized both predoctoral and postdoctoral support to individuals and to institutions. To implement the Act, NIH set up individual fellowships and grants to institutions for training predoctoral and postdoctoral students. Close to $400 million is spent annually on these training grants. A National Research Council expert committee, under contract to NIH, found that although the NRSA program is relatively small in terms of the total number of trainees, it is enormously powerful in its ability to change research emphasis and to attract the highest quality individuals to research careers. It is viewed as a prestigious, highly competitive program, and it is clear that initiatives introduced through the NRSA program can have a powerful impact on intended new research directions or constituencies.

A final evaluation was performed to determine the feasibility of assessing NIH-supported research to increase condom use. This project was a feasibility study, or evaluability assessment, to determine if an outcomes evaluation could be performed in a second phase and if it would be useful to do so. The evaluation was motivated by public health efforts to prevent the spread of AIDS through the use of condoms. Its objectives were to assess the findings of condom use research efforts, guide the development of future program areas, and suggest methodological guidelines to facilitate the evaluation of future condom research programs.

The evaluation identified and inventoried the universe of condom use research studies supported by NIH grants. A reproducible methodology that combined automated database searching of NIH grants with human judgment was established. The methodology identified more than 500 relevant studies. A sample of 76 studies was examined in detail to identify how well demographic characteristics were defined, which sampling methodologies were employed, whether and what type of comparison group was used, and so on. The final report was widely disseminated. The evaluation was valuable because it generated useful methodological tools for meta-analysis and because it led to a decision not to pursue a larger scale evaluation. What emerged was a research agenda that the ICDs (seven had participated in the technical advisory group for the evaluation) could use to address the vital questions related to condom use most closely related to their institutional missions.
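The report does not describe the software behind this methodology, so the following is only a minimal sketch of the two-stage approach it outlines: an automated keyword screen of grant records, followed by a random sample of the flagged studies drawn for detailed manual coding. The record structure, keywords, sample size handling, and function names below are all hypothetical.

```python
# Illustrative sketch only; the actual tools and query strategy are not given in the report.
import random
from dataclasses import dataclass

@dataclass
class GrantRecord:
    grant_id: str
    title: str
    abstract: str

# Hypothetical search terms for the automated screen.
KEYWORDS = ("condom", "barrier contraceptive", "hiv prevention")

def automated_screen(records):
    """Stage 1: flag records whose title or abstract mentions any keyword."""
    flagged = []
    for rec in records:
        text = f"{rec.title} {rec.abstract}".lower()
        if any(kw in text for kw in KEYWORDS):
            flagged.append(rec)
    return flagged

def draw_detailed_sample(flagged, n=76, seed=0):
    """Stage 2: draw a reproducible random sample of flagged studies for
    detailed manual coding (demographics, sampling methods, comparison groups)."""
    rng = random.Random(seed)
    return rng.sample(flagged, min(n, len(flagged)))

if __name__ == "__main__":
    # Toy records stand in for the grant database.
    records = [
        GrantRecord("R01-0001", "Condom use among young adults", "..."),
        GrantRecord("R01-0002", "Basic virology study", "..."),
    ]
    flagged = automated_screen(records)
    sample = draw_detailed_sample(flagged, n=1)
    print(len(flagged), "flagged;", len(sample), "sampled for detailed review")
```

In practice the automated screen would be followed by human review of each flagged record, as the evaluation combined database searching with expert judgment rather than relying on keywords alone.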

Evaluations in Progress

NIH has 24 evaluations in progress. They range from small- to large-scale assessments, from evaluability studies to full-blown evaluations. One study builds on NIH's longstanding role in ensuring the training and continued availability of superior biomedical and behavioral scientists through the NRSAs. The first study objective is to conduct an evaluation design study, that is, to develop a detailed plan for a comprehensive evaluation of the career outcomes of predoctoral and postdoctoral trainees and fellows and of the NRSA programs in which they have participated. The second objective is to develop an approach to characterize the nature and quality of the training actually experienced by present and former trainees and fellows and to differentiate between a good training program and simply good trainees. No baseline data are available on trainees, and program versus selection effects have not been studied. The third objective is to develop an approach to tap the perceptions of NIH staff, present and former NRSA trainees and fellows, and university administrators about the nature and impact of the training program. This study is expected to be completed in 1995.

A second example is a study of the Physician Data Query (PDQ), a comprehensive cancer database intended primarily for cancer health professionals. The database contains state-of-the-art treatment summaries and information on supportive care, screening, prevention, and experimental drug therapies. The objectives of the study are to survey PDQ database users to determine who is using the database and how the information is used, and to assess user satisfaction with the information and the method of retrieval, for example, CD-ROM, on-line, or hard copy.

The National Cancer Institute is directing the study, which is also intended to identify ways to expand PDQ's target audiences. The study will produce a written summary of the activities and analyses of the project, data tapes and documentation of all questionnaire responses, and a computerized system of data collection, tabulation, and analysis. It will also include suggestions for improvements to the PDQ database to ensure that it meets user needs.
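The report gives no implementation details for that data system, so the sketch below is only an illustration of the kind of tabulation it might perform: a simple cross-tabulation of user satisfaction by retrieval method (CD-ROM, on-line, hard copy). All responses and field names are invented.

```python
# Illustrative sketch only: tabulate hypothetical questionnaire responses
# on satisfaction by retrieval method.
from collections import Counter

# Hypothetical survey responses: (retrieval_method, satisfied)
responses = [
    ("CD-ROM", True),
    ("on-line", True),
    ("on-line", False),
    ("hard copy", True),
]

# Count total and satisfied responses per retrieval method.
totals = Counter(method for method, _ in responses)
satisfied = Counter(method for method, ok in responses if ok)

for method in sorted(totals):
    pct = 100 * satisfied[method] / totals[method]
    print(f"{method}: {satisfied[method]}/{totals[method]} satisfied ({pct:.0f}%)")
```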

The third evaluation examines research resources available to primate researchers throughout the Nation. Through special grants, NIH supports seven Regional Primate Research Centers, a unique national network of nonhuman primate research and resource laboratories established in the early 1960s. The centers are located throughout the United States, and each is closely affiliated with an academic institution. Center activities include the conduct of biomedical and behavioral research; research resource support to investigators funded by other sources; the maintenance of more than 18,000 nonhuman primates; the establishment of breeding programs to meet the centers' research requirements; conservation and preservation programs; expert professional and technical support to investigators; and training of pre- and postdoctoral professionals in primatology research. When the nonhuman primate is the most appropriate species to study, the centers provide a cost-effective response to the need for national repositories of nonhuman primates, scientific expertise, and specialized facilities and equipment.

The evaluation is assessing all aspects of the Regional Primate Research Centers program, including its effectiveness in meeting current program objectives and guidelines, the scientific distribution and emphasis of research programs, future planning decisions, and collaborations with and access by non-center investigators. Other issues to be addressed are compliance with policies and guidelines, financial management, relationships with host institutions, the grant award process, peer review, status in the scientific community, and reporting and dissemination of program results. The centers are being compared with several selected non-center institutions that conduct significant NIH-supported nonhuman primate research. The results of the study will be used to improve and refine the centers program.

New Directions for Evaluation

NIH is pursuing several new directions for evaluation. In peer review, for example, NIH has been examining ways to streamline the review of grant applications to make more effective use of reviewers' time. Streamlining has been tried on an experimental basis and is now being implemented fully; the results will be closely watched. The Government Performance and Results Act (GPRA) of 1993 requires NIH to develop a strategic plan, an annual performance plan, and performance indicators by September 1997. This effort is currently a central focus of NIH evaluation activities. NIH is exploring a variety of indicators for potential areas of the strategic plan, such as information dissemination, technology transfer, and its reinvention program. In addition, priority will be given to funding 1 percent set-aside project proposals submitted by the ICDs and the NIH Office of the Director that relate to the GPRA. Also, the EPOC will reexamine and redefine NIH evaluation priorities for the future.
