Increasingly, private purchasers of health care, consumer groups, Federal and State agencies, and health care plans are searching for methods to compare clinical performance among providers to guide choices, ensure accountability, provide data for quality improvement, and track change. Unfortunately, currently used clinical performance measures at best fail to provide meaningful comparisons of clinical performance and at worst are actively misleading, because they are limited in scope, insufficiently detailed, methodologically flawed, or not standardized across providers. This project is a first step toward identifying gaps between the clinical performance measures that exist and those sought by potential users. Knowledge of these gaps permits prioritizing needs for clinical performance measurement and assessing the feasibility of addressing those needs through future research and development.
The objectives of this 6-month study were to (1) collect information on the range of clinical performance measures currently in use; (2) summarize the resulting information; (3) assess the feasibility of deriving various clinical performance measures from existing databases; (4) explore the cost of different strategies for clinical performance measurement; and (5) explore the sampling issues associated with the application of selected clinical performance measures that would be useful in measuring quality of health care plans.
Clinical performance measures are instruments that estimate the extent to which a health care provider delivers clinical services that are appropriate for each patient's condition; provides the services safely, competently, and in an appropriate timeframe; and achieves desired outcomes in terms of those aspects of patient health and patient satisfaction that can be affected by clinical services. Clinical performance measures concern the technical content of health care and assess health care in terms of individual patients. Clinical performance measurement requires aggregating data about the health care given to many patients to create a rate or score for average performance. Performance can be measured by identifying a representative sample of similar patients and collecting data about the care those patients received within a given time period. By applying criteria for quality of performance to the data for each patient, care of good and poor quality can be distinguished. The results are then aggregated to form a performance rate or score.
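The aggregation step described above can be sketched in a few lines of code. This is an illustrative example only: the patient record fields, the quality criterion, and the sample data are all hypothetical and are not drawn from any actual measure set.

```python
from dataclasses import dataclass

@dataclass
class PatientRecord:
    """Hypothetical per-patient data collected for one measure."""
    patient_id: str
    received_service: bool  # e.g., whether an indicated service was provided

def meets_criterion(record: PatientRecord) -> bool:
    """A hypothetical quality-of-care criterion applied to one patient."""
    return record.received_service

def performance_rate(sample: list[PatientRecord]) -> float:
    """Aggregate per-patient judgments into a performance rate (0 to 1)."""
    if not sample:
        raise ValueError("empty patient sample")
    passed = sum(meets_criterion(r) for r in sample)
    return passed / len(sample)

# A representative sample of four similar patients.
records = [
    PatientRecord("p1", True),
    PatientRecord("p2", False),
    PatientRecord("p3", True),
    PatientRecord("p4", True),
]
print(performance_rate(records))  # 0.75
```

In practice the criterion would encode detailed clinical logic and the sample would be drawn to be statistically representative, but the shape of the computation — apply a per-patient criterion, then aggregate to a rate — is the same.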
Current measurement techniques are plagued by a variety of flaws. A significant obstacle is that many types of data currently used as indicators of quality are not directly usable for comparisons of clinical performance. For example, utilization statistics (e.g., hospital admissions or rates of surgery) are not helpful unless interpreted in light of each individual patient's condition. Measures of health status or patient outcomes are not useful unless they allow for each patient's probability of a good outcome given good clinical care. Measures of patient satisfaction are flawed by their subjectivity, although patient surveys are useful when patients are asked about the facts of their care.
Although many existing indicators of quality provide inaccurate comparisons of clinical performance, they can serve as an intermediate step toward better measures. For instance, measures of patient outcomes that cannot currently be used to compare clinical performance may become useful for that purpose as methods are developed to allow for patient differences in the likelihood of achieving a good outcome. Similarly, users who currently lack data sources adequate to construct precise clinical performance measures may find it essential to use crude indicators while working to improve those data sources.
To be serviceable, measures must be useful within health care organizations and must have adequate levels of sensitivity, specificity, and predictive value. Measures must be reliable and valid for their intended purposes, as well as affordable. A reliable typology of performance measurements would allow potential users to select an approach (quality control, choice of health care plan, or accountability) that is appropriate for their own purpose.
A typology of measurements was created, as was a data set that included the relationship of measures to each other, the aspects of clinical performance that the measures addressed, the properties of measures that determine appropriateness for specific uses, and the data needed to create the measures.
Existing and evolving clinical performance measures were identified by two approaches: (1) a literature search conducted using the Medical Literature Analysis and Retrieval System (MEDLARS®) database of the National Library of Medicine; and (2) direct personal contact by phone with 112 individuals or agencies known to be involved with performance measurement research, use, or evaluation. The contacts yielded 40 sets of measures consisting of 1,287 clinical performance measures. Data concerning measure attributes were extracted, coded, and entered into six relational databases.
The project developed a classification scheme to assist users in identifying and evaluating clinical performance measures. It developed and defined key attributes of clinical performance measures and applied this framework to 40 measure sets used by public and private organizations to measure and improve clinical quality. The 40 measure sets were classified on seven dimensions, including:
- Rigor of development (e.g., detailed specification of measure, availability of reliability or validity tests).
- Organization type for which the set is used or developed (e.g., managed care, fee-for-service, government agency).
- Type of review for which the set is used or developed (e.g., internal quality management, outcome management, technology assessment, purchaser review).
- Extent of use (e.g., single system, multiple system, in test phase).
- Practicality (e.g., cost, implementation, or utility information available).
In addition to describing general characteristics of each measure set, the typology framework classified measures with respect to their structure (factors such as data requirements, sampling, time window, scoring, risk adjustment, and interpretation) as well as their clinical content (e.g., whether a measure addresses health promotion, early detection, or treatment of a disease, and whether it is a process or outcome measure).
The objective was to develop and test a prototype framework sufficiently flexible to encompass the structural and clinical characteristics of the wide variety of clinical performance measures currently used. The result was a series of interlinked databases containing information on measure sets, batches of measures (measures with similar structure or content), and clinical conditions and events that are associated with the measures.
The interrelated nature of the databases enables users to access data by measure or by clinical event. Frequencies and percentages of measures in the various categories were computed, and the results were presented in graphs. A broad range of performance measures was included in the database: process and outcome measures; measures spanning health care settings and patient demographics; data derived from clinician judgment and from patient perception; and both mental and physical health measures. Development of the prototype raised other issues (e.g., cost and sampling) to be considered when developing future databases for clinical performance measurement.
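The interlinked-database idea — measure sets, measures, and clinical conditions stored in related tables so that users can retrieve measures either by set or by clinical condition — can be illustrated with a small relational sketch. The table names, columns, and sample rows below are invented for illustration; they are not the schema of the actual typology databases.

```python
import sqlite3

# Three related tables: measure sets, clinical conditions, and measures
# that reference both. All names and rows are hypothetical.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE measure_set (set_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE condition   (cond_id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE measure (
    measure_id INTEGER PRIMARY KEY,
    set_id     INTEGER REFERENCES measure_set(set_id),
    cond_id    INTEGER REFERENCES condition(cond_id),
    kind       TEXT  -- 'process' or 'outcome'
);
""")
con.executemany("INSERT INTO measure_set VALUES (?, ?)",
                [(1, "Set A"), (2, "Set B")])
con.executemany("INSERT INTO condition VALUES (?, ?)",
                [(1, "diabetes"), (2, "hypertension")])
con.executemany("INSERT INTO measure VALUES (?, ?, ?, ?)",
                [(1, 1, 1, "process"),
                 (2, 1, 2, "outcome"),
                 (3, 2, 1, "outcome")])

# Access by clinical condition: every measure addressing diabetes,
# together with the set it belongs to.
rows = con.execute("""
    SELECT ms.name, m.kind
    FROM measure m
    JOIN measure_set ms ON ms.set_id = m.set_id
    JOIN condition  c   ON c.cond_id = m.cond_id
    WHERE c.name = 'diabetes'
    ORDER BY m.measure_id
""").fetchall()
print(rows)  # [('Set A', 'process'), ('Set B', 'outcome')]
```

The same tables answer the complementary question (all measures in a given set) with a different join filter, which is the practical benefit of linking the databases rather than keeping flat lists.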
Performance measures were constructed from administrative data (e.g., enrollee files, claims data files, disease registers, pharmacy records, medical records), and from special data collections (e.g., patient or provider surveys).
Use of Results
The typology is proposed as a starting point for a data system that would permit users to find out what measures are available for given conditions and associated clinical events, what data resources are required, and which measures are suitable for the users' specific purpose. The classification framework and its definitions form the basis of a common or uniform language to describe and compare the thousands of clinical performance measures under development and in use today. The framework also serves as a teaching tool to help those interested in learning about how to construct, compare, and evaluate the utility of measures.
The study also concluded that future work is needed to test the framework and prototype databases against the needs of users. To accomplish this objective, the Agency for Health Care Policy and Research (AHCPR) is using the typology framework as the basis for a follow-on project, CONQUEST 1.0, the COmputerized Needs-Oriented QUality Measurement Evaluation SysTem. This project builds on the typology framework in three ways. First, it evaluates the typology by verifying the content of the measure database with measure developers. The verification effort has resulted in enhancements to the typology structure and content. Second, CONQUEST 1.0 builds on the typology by creating a database of information on clinical conditions that can be used to steer the search for appropriate measures. The condition database contains information from AHCPR-supported clinical practice guidelines, clinical practice guidelines produced by other organizations, and medical effectiveness research findings. Third, CONQUEST 1.0 translates the typology into a more useful system by making it available on computer. The project develops a computerized system with a user-friendly interface to link measures to clinical information and guide the selection of measures.
Follow-on efforts are currently under way at AHCPR. One such related project involves evaluating this product by convening users to pilot test CONQUEST 1.0 and participate in focus groups about its usefulness. Also, AHCPR has issued a Request for Contract Proposals for a project called QM-Net, to use the typology and CONQUEST 1.0 as the basis for a national data source for information on clinical quality measures. Information on CONQUEST 1.0 can be obtained through AHCPR's web site (http://www.ahcpr.gov) or through the Agency's clearinghouse at (800) 358-9295.
Office of Planning and Evaluation
Irma Arispe, Ph.D.
PIC ID: 5630
NTIS Accession Number: PB 96-144639
Center for Health Policy Studies, Columbia, MD