Performance Improvement 1995. Centers for Disease Control and Prevention and Agency for Toxic Substances and Disease Registry


Centers for Disease Control and Prevention MISSION: To promote health and quality of life by preventing and controlling disease, injury, and disability.


Agency for Toxic Substances and Disease Registry MISSION: To prevent exposure and adverse human health effects and diminished quality of life associated with exposure to hazardous substances from waste sites, unplanned releases, and other sources of pollution in the environment.

CDC Evaluation Program

The Centers for Disease Control and Prevention (CDC) places a high priority on evaluations that seek to answer policy, program, and strategic planning questions. Evaluation studies are developed and selected on the basis of the eight strategies CDC uses to achieve its mission. These strategies are to--

  • Monitor health
  • Detect and investigate health problems
  • Conduct research to enhance prevention
  • Develop and advocate sound public health policies
  • Implement prevention methods
  • Promote healthy behaviors
  • Foster safe and healthful environments
  • Provide leadership and training

The CDC Director provides annual guidance to the various Center, Institute, and Office (CIO) Directors on 1 percent set-aside evaluation activities. This guidance memorandum generally describes the types of studies to be carried out with 1 percent evaluation funds. Each proposal undergoes multiple levels of review. Initial review is conducted by the Office of Program Planning and Evaluation. Subsequent reviews are completed by CDC analysts in the Office of the Assistant Secretary for Health (OASH) and the Office of the Assistant Secretary for Planning and Evaluation (OASPE). Study authors receive the reviewers' comments, questions, and recommendations and are given the opportunity to respond and to revise their proposals at this stage.

A panel of CDC evaluators, scientists, and program managers is convened to review and rank the proposals. Review criteria include (1) relevance to prevention effectiveness; (2) relative importance of the public health problem being addressed; (3) probability that the proposed project will accomplish its objectives; and (4) extent to which other CDC programs will benefit from the project. Results from this panel review are converted into a comprehensive ranking that is provided to the Director of CDC, who then makes final funding decisions.
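
The report does not specify how panel scores are combined into the comprehensive ranking; the Python sketch below illustrates one plausible aggregation. The proposal names, panelists, 1-5 scale, equal weighting, and all scores are invented for illustration only.

```python
# A minimal, hypothetical sketch of turning panel scores on the four review
# criteria into a comprehensive ranking. All names and scores are invented;
# the report does not describe CDC's actual scoring method.

# Each proposal maps to one list of scores per panelist, ordered as:
# (1) relevance to prevention effectiveness, (2) importance of the problem,
# (3) probability of accomplishing objectives, (4) benefit to other CDC programs.
panel_scores = {
    "Proposal A": [[4, 5, 3, 4], [5, 4, 4, 4], [4, 4, 3, 5]],
    "Proposal B": [[3, 3, 4, 2], [4, 3, 3, 3], [3, 4, 4, 3]],
    "Proposal C": [[5, 5, 4, 3], [4, 5, 5, 4], [5, 4, 4, 4]],
}

def comprehensive_score(scores_by_panelist):
    """Average each panelist's total across the four criteria (equal weights assumed)."""
    totals = [sum(scores) for scores in scores_by_panelist]
    return sum(totals) / len(totals)

ranking = sorted(panel_scores, key=lambda name: comprehensive_score(panel_scores[name]), reverse=True)
for rank, name in enumerate(ranking, start=1):
    print(f"{rank}. {name}: {comprehensive_score(panel_scores[name]):.1f}")
```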

Finally, staff in the Office of Program Planning and Evaluation work closely with program staff to ensure development of a clear statement of work for selected projects. Before initiation of procurements, a final ad hoc review of the project statement of work is completed.

ATSDR Evaluation Program

The Agency for Toxic Substances and Disease Registry (ATSDR) receives its funds from Environmental Protection Agency/Superfund appropriations rather than Public Health appropriations; therefore, ATSDR does not receive a 1 percent evaluation set-aside. Nevertheless, the Agency is responding to the changes in program planning and evaluation mandated by the National Performance Review and the Government Performance and Results Act (GPRA) of 1993. To meet those requirements, ATSDR staff members modified the Agency's planning process, incorporating implementation strategies and outcome/performance measures.

Prominent issues addressed in the new planning system emphasize ATSDR's commitment to improve the health of people affected by hazardous substances polluting the environment. Improvements include using exposure assessments and demographic data to identify people at risk and, more directly, assessing/addressing the concerns of customers. The new planning system provides the basis for measuring ATSDR performance and making systematic improvements as part of its internal evaluation activities.

Summary of FY 1994 CDC Evaluations

CDC completed 12 evaluations in fiscal year (FY) 1994. These evaluations covered training and information dissemination, surveillance, program effectiveness, prevention, and costs of disease.

Training and information dissemination was the focus of several evaluations, two of which were highlighted in chapter II. The first was an evaluation of CDC and ATSDR training activities that assessed the training needs of State and local health departments and inventoried CDC's current training activities. It provided an example of using an evaluation to document current practice to help generate a new agenda for program action. The other highlighted evaluation was a survey of readers of the Morbidity and Mortality Weekly Report, a CDC publication of interest to public health professionals around the Nation. The survey found that this publication generally met the needs of its readers and was valued for its accuracy, relevance, and concise reporting format. Study recommendations are expected to help fine-tune this publication in response to reader suggestions.

Information dissemination activities also were addressed in an evaluation sponsored by the Office on Smoking and Health (OSH). Recommendations for key management and operational aspects of information dissemination in OSH were based on interviews with key officials within and outside CDC. The evaluation also made suggestions for strategic planning in view of OSH's evolving leadership role in the tobacco control community.

With respect to surveillance, several evaluations focused on gathering statistics for policy analysis and decisionmaking. One evaluation addressed the ability of the Model State Vital Statistics Act and Regulations to accommodate changes in social customs and in the technology of registering vital events and statistics. The results are being used to advise States about the need for revisions in the collection of vital statistics. Two evaluations concerned medical records in relation to national survey data; both were intended to determine whether medical records confirm survey respondents' reports about selected conditions and impairments. In the first study, selected data elements of the 1988 National Maternal and Infant Health Survey (NMIHS) were compared with records maintained by the States. This evaluation assessed the quality and completeness of the information reported, identified discrepancies, and examined the nature and frequency of those discrepancies. The second study evaluated the accuracy of diagnostic reporting in the National Health Interview Survey by comparing information collected from users of health care services with information collected from the providers of those services.
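
Neither study's matching methodology is reproduced here; as a rough illustration of a record-versus-survey comparison, the sketch below tallies agreement and discrepancies for a few hypothetical matched pairs. The conditions, field names, and agreement measure are all invented, not the NMIHS or National Health Interview Survey methods.

```python
# Rough illustration of comparing survey-reported conditions with matched
# medical records and tallying discrepancies. All data are hypothetical.

matched_pairs = [
    {"survey": "gestational diabetes", "record": "gestational diabetes"},
    {"survey": "hypertension",         "record": None},      # reported, not in record
    {"survey": None,                   "record": "anemia"},  # in record, not reported
    {"survey": "asthma",               "record": "asthma"},
]

agree = sum(1 for p in matched_pairs if p["survey"] == p["record"])
survey_only = sum(1 for p in matched_pairs if p["survey"] and not p["record"])
record_only = sum(1 for p in matched_pairs if p["record"] and not p["survey"])

print(f"agreement: {agree} of {len(matched_pairs)} pairs")
print(f"reported in survey only: {survey_only}")
print(f"documented in record only: {record_only}")
```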

A third category of studies evaluated health programs to determine their effectiveness. One study assessed the effects on sexually transmitted disease (STD) clinics of burgeoning patient loads, changing funding levels, and the shift toward more elaborate patient testing; it examined the effectiveness of services and identified factors that contributed to overburdening the delivery system. A similar study assessed the effectiveness of State-based diabetes control programs (DCPs) in providing services with the potential to reduce diabetes-related mortality and morbidity and identified exemplary practices. The outcomes examined were (1) the number of people reached by the programs; (2) the improved coverage provided by clinics; (3) the level of integration into ongoing medical service delivery; and (4) the effect of leveraging resources for diabetes programs. Findings showed that DCPs have had a measurable effect on diabetes services, and the evaluation included recommendations for program expansion.

"Assessing Prevention Effectiveness: A Collaborative Effort With Selected Health Maintenance Organizations" is the first part of a two-phase study. A key component of this study includes the development of a framework and process to assess prevention effectiveness in health maintenance organizations (HMOs), types of services provided, and their potential to work with CDC. Data collected in Phase I provide the information required to move to Phase II of the study.

The purpose of the evaluation study by the Division of Cancer Prevention and Control is to collect data pertinent to program-related decisions. The evaluation focuses on the components of a comprehensive breast and cervical cancer early detection and control program (public and provider education, quality assurance, surveillance, screening, and followup) and on the combined effect of those components.

Evaluations in the final category developed methods to estimate direct medical costs for various diseases. One study estimated the direct medical costs of chronic hepatitis B, concentrating on acute care costs; cost data were collected from both Medicaid and private sector sources. Another study estimated the direct medical costs of congenital syphilis, using 1990 figures. The estimate included medical care costs for the first year, special education costs required by children with congenital syphilis, and lifetime custodial care costs, all of which were categorized by severity.
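
The study's actual cost model and 1990 figures are not reproduced here; the sketch below only illustrates the general shape of such an estimate, summing per-case cost components by severity category. The severity labels, case counts, and dollar amounts are hypothetical placeholders.

```python
# Illustrative tally of direct costs by severity, in the spirit of the
# congenital syphilis estimate described above (first-year medical care,
# special education, lifetime custodial care). All figures are placeholders.

cost_components = {  # per-case costs (USD) by severity category
    "mild":     {"first_year_medical": 5_000,  "special_education": 0,      "custodial_care": 0},
    "moderate": {"first_year_medical": 20_000, "special_education": 15_000, "custodial_care": 0},
    "severe":   {"first_year_medical": 60_000, "special_education": 40_000, "custodial_care": 250_000},
}
case_counts = {"mild": 100, "moderate": 40, "severe": 10}  # hypothetical case mix

grand_total = 0
for severity, components in cost_components.items():
    per_case = sum(components.values())
    total = per_case * case_counts[severity]
    grand_total += total
    print(f"{severity}: {case_counts[severity]} cases x ${per_case:,} per case = ${total:,}")

print(f"estimated total direct costs: ${grand_total:,}")
```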

CDC Evaluations in Progress

CDC has a total of 32 evaluations in progress. They fall into four general categories: surveillance/data collection studies; program evaluations; community/intervention effectiveness studies; and evaluation methodology studies. Performance improvement is a major focus of each of these studies.

Surveillance/data collection is the focus of the largest number of evaluations. For example, a study of the effectiveness of CDC surveillance for drug-resistant pneumococcal infections addresses drug resistance, which was identified as a major challenge in CDC's Emerging Infections Plan. This project will evaluate the validity of antimicrobial resistance data collected from sentinel hospitals, using CDC's sentinel hospital surveillance program. The hospital surveillance program, located in 13 hospitals in 12 States, is designed to determine the magnitude of drug-resistant pneumococcal disease and to provide clinicians with the ability to select optimal regimens of empiric therapy. By using population-based surveillance for invasive pneumococcal disease in two geographically distinct areas, this project will evaluate the extent to which CDC's sentinel surveillance program is capturing drug-resistant pneumococcal disease.
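
As a rough sketch of the capture comparison described above, the snippet below computes the share of drug-resistant cases found by population-based surveillance that also appear in the sentinel system. The case identifiers and counts are invented; the actual evaluation design is CDC's, not this code.

```python
# Hypothetical sketch: what fraction of drug-resistant invasive pneumococcal
# cases identified by population-based surveillance in one area were also
# captured by the sentinel hospital system? All case identifiers are invented.

population_based_cases = {"case-01", "case-02", "case-03", "case-04",
                          "case-05", "case-06", "case-07", "case-08"}
sentinel_cases = {"case-02", "case-03", "case-05", "case-08", "case-09"}

captured = population_based_cases & sentinel_cases
capture_rate = len(captured) / len(population_based_cases)

print(f"sentinel surveillance captured {len(captured)} of "
      f"{len(population_based_cases)} drug-resistant cases ({capture_rate:.0%})")
```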

Another study in the surveillance/data collection category evaluates STD surveillance and treatment practices in the United States. Its objectives are to (1) determine the accuracy of CDC's surveillance data on STDs by comparing them with independently collected data from a survey of providers and (2) determine, from the same provider survey, the extent of adherence to CDC's diagnostic and treatment guidelines for STDs and identify ways of increasing compliance with those guidelines.
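
The second objective lends itself to a simple summary measure; the sketch below computes hypothetical adherence rates from invented provider responses. The condition and regimen labels are placeholders, not CDC's actual guidelines.

```python
# Hypothetical adherence summary: for each condition, the share of surveyed
# providers whose reported first-line treatment matches the recommended
# regimen. Conditions, regimens, and responses are placeholders only.

recommended = {"condition X": "regimen A", "condition Y": "regimen B"}

survey_responses = [  # invented provider-reported first-line treatments
    {"condition": "condition X", "treatment": "regimen A"},
    {"condition": "condition X", "treatment": "regimen C"},
    {"condition": "condition Y", "treatment": "regimen B"},
    {"condition": "condition Y", "treatment": "regimen B"},
]

for condition, regimen in recommended.items():
    answers = [r for r in survey_responses if r["condition"] == condition]
    adherent = sum(1 for r in answers if r["treatment"] == regimen)
    print(f"{condition}: {adherent}/{len(answers)} providers reported the recommended regimen")
```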

Program evaluations, another important category of current work, are being undertaken for grant programs, including the Lead Poisoning Prevention Program and the Injury Prevention and Control Program. Other studies in this group involve evaluations of the National Laboratory Training Network, the San Juan Laboratory's Dengue Hemorrhagic Fever Program, and the Fatality Assessment and Control Program.

Community-based interventions are the subject of several other evaluations. Four studies address the prevention of violence. These include projects focused on youth violence prevention, suicide in Native American communities, domestic violence medical education programs, and support systems for battered women.

Evaluation methodology is the focus of several ongoing projects. For example, one project is developing a comprehensive evaluation strategy that can be incorporated into planning, budget, and legislation for the National Center for Chronic Disease Prevention and Health Promotion.

New Directions for CDC Evaluation

Evaluation studies focusing on program performance and effectiveness will continue to be of primary importance to CDC, and focused studies in this area will become even more important as CDC moves toward a comprehensive performance monitoring system. Evaluations will be conducted to provide data for decisionmaking about the need for broader program implementation. Similarly, as programs develop and implement performance indicators, CDC will initiate projects designed to provide data for performance measurement and to assess the effectiveness and efficacy of those indicators.
