Understanding Medicaid Home and Community Services: A Primer, 2010 Edition

Crafting Performance Measures


Most importantly, performance measures should be actionable. Because their purpose is to help the state evaluate its program and the health and welfare of program participants, the measures should be able to identify when a subassurance is not being met, so that actions can be taken to bring the program into compliance. Performance measures must also meet four criteria (see Box).

Criteria for Crafting Performance Measures

  • Be measurable and stated as a metric
  • Have face validity
  • Be based on the correct unit of analysis
  • Be representative

The first criterion is that the performance measure be measurable and stated as a metric. This means that the performance measure must be able to take on different values. States frequently make the mistake of describing a process for monitoring a subassurance rather than focusing on the outcome of the monitoring, reported in the form of a metric. Typically, assurance-based performance measures are stated as a percentage. The performance measure data must also lend themselves to aggregation across individual waiver participants, providers, or claims--depending on the unit of analysis (discussed below). Aggregating performance measure data allows the state to generate reports on a specific aspect (subassurance) of the program as a whole, assess the operation of the program on that aspect, and determine the level of compliance.

A Performance Measure Should Be a Metric

Acceptable: Percent of waiver participants whose service plans were reviewed and updated annually. (Outcome)

Unacceptable: The Division of Aging conducts record reviews to assess whether service plans of waiver participants are reviewed and updated annually. (Process)
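As an illustration of stating a performance measure as a metric, the following sketch computes the acceptable measure above by aggregating participant-level data into a percentage. All participant IDs, dates, and the 365-day rule are hypothetical; a state's actual record layout and review standard would differ.

```python
from datetime import date

# Hypothetical participant records: participant ID -> date the service
# plan was last reviewed/updated (illustrative data only).
last_review = {
    "P001": date(2010, 3, 1),
    "P002": date(2008, 6, 15),   # overdue
    "P003": date(2010, 1, 20),
    "P004": date(2009, 11, 5),
}

def percent_reviewed_annually(reviews, as_of):
    """Percent of participants whose plan was reviewed within the past 365 days."""
    timely = sum(1 for d in reviews.values() if (as_of - d).days <= 365)
    return 100.0 * timely / len(reviews)

# The measure takes on different values as the underlying data change,
# and it aggregates across individual participants.
print(percent_reviewed_annually(last_review, date(2010, 6, 30)))  # 75.0
```

Because the result is a number rather than a description of a review process, it can be tracked over time and compared against a compliance threshold.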

The second criterion is face validity: whether the performance measure will indeed measure what it has been designed to measure--in this case, a subassurance. To meet this criterion, state staff must ask the following: Does the performance measure truly capture and measure the essence of a specific subassurance? "On the face of it," does it track with the subassurance? A performance measure with face validity will enable a state to monitor its performance on a given subassurance, and CMS to judge the state's demonstration of compliance with the subassurance. If a performance measure lacks face validity, the state runs the risk of collecting useless information, wasting resources, losing the ability to monitor one aspect of its program, and failing to demonstrate compliance to CMS. States should be careful not to use performance measures that are stated as a metric but lack face validity vis-à-vis a given subassurance.

A Performance Measure Should Have Face Validity

Service Plan Subassurance: Service plans address all participants’ assessed needs (including health and safety risk factors) and personal goals, either by the provision of waiver services or through other means.

Unacceptable: Mean fall risk score for waiver participants.

  • Metric, but lacks face validity.

  • Does not tell you to what extent risks were addressed in the service plan.

  • May be a good assessment item, but is not a performance measure with face validity for the subassurance.

Acceptable: Percent of participants’ service plans that address their risk.

  • Metric.

  • Has face validity vis-à-vis the subassurance.

The third criterion is the correct unit of analysis. Choosing the correct unit of analysis for a performance measure is crucial. The unit of analysis is the group or entity to which the performance measure refers. Typically, the unit of analysis for Level of Care, Service Plan, and Health and Welfare subassurances is the waiver participant; for Provider Qualification subassurances it is providers; and for Financial Accountability it is claims. Data for generating performance measures can come from several sources (administrative data, claims data, reviews of participants' records, automated care coordination systems, critical incident databases, mortality reviews, etc.), and sometimes from a combination of data sources. Whatever the data source, it is key to make sure that the unit of analysis is appropriate to the subassurance that the performance measure will be used to monitor.

A Performance Measure (PM) Should Use the Correct Unit of Analysis

Service Plan Subassurance: Service plans are updated/revised at least annually.

Incorrect Unit of Analysis: Percent of Supports Coordination Agencies that updated/revised annual service plans on time.

  • PM focuses on Supports Coordination Agencies (provider) rather than waiver participants.

  • May be a more appropriate measure for a Provider Qualifications PM.

Correct Unit of Analysis: Percent of waiver participants who received an annual updated/revised service plan.
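The contrast in the box above can also be sketched numerically. In this hypothetical example (participant IDs, agency names, and a pass/fail rule for agencies are all invented for illustration), the same plan-update records yield different results depending on whether the participant or the provider is the unit of analysis:

```python
# Hypothetical records: (participant, supports coordination agency,
# service plan updated on time?).
records = [
    ("P1", "AgencyA", True),
    ("P2", "AgencyA", True),
    ("P3", "AgencyA", True),
    ("P4", "AgencyB", False),
]

# Participant as unit of analysis: 3 of 4 participants received a timely
# updated plan.
pct_participants = 100 * sum(ok for _, _, ok in records) / len(records)

# Provider as unit of analysis (assumed rule: an agency counts as timely
# only if all of its participants' plans were updated on time): AgencyA
# passes, AgencyB fails.
agencies = {a for _, a, _ in records}
pct_agencies = 100 * sum(
    all(ok for _, a2, ok in records if a2 == a) for a in agencies
) / len(agencies)

print(pct_participants, pct_agencies)  # 75.0 vs 50.0
```

The two figures answer different questions; only the participant-level figure speaks to the Service Plan subassurance, which is about participants.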

The fourth criterion is representativeness. When monitoring a waiver, CMS is interested in knowing how the waiver as a whole is performing on any given subassurance. If the performance measure data are not representative of the waiver population (or of providers or claims), neither the state nor CMS can be confident that the resulting measure accurately portrays the waiver's performance.

By definition, the data for generating a performance measure are representative if they derive from the entire population (e.g., service plans of ALL waiver participants, reviews of ALL providers, reports on ALL claims). However, collecting data from the entire population can be very costly--particularly for larger waiver programs. Thus, performance measures frequently are based on data taken from a sample (i.e., a subset of the population). The estimates derived from a sample can represent the entire population, as long as random selection is used in drawing the sample.
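A simple random sample of the kind described above can be drawn with standard tools. The roster size, record IDs, and sample size below are hypothetical; the point is that every record has an equal chance of selection, which is what lets sample-based estimates generalize to the full population.

```python
import random

# Hypothetical roster of 1,000 waiver participant record IDs.
population = [f"REC{i:04d}" for i in range(1000)]

rng = random.Random(42)               # fixed seed so the draw is reproducible
sample = rng.sample(population, 100)  # simple random sample, without replacement

# Each record had the same probability of selection, so a percentage
# computed on the sample estimates the percentage for the whole roster.
print(len(sample), len(set(sample)))  # 100 distinct records
```

Drawing the sample from the complete roster (not, say, from one region's records) is what preserves representativeness.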

CMS expects performance measure data to be representative because if not, then the state cannot assert with confidence that the data represent the waiver as a whole, and CMS cannot conclude compliance due to insufficient evidence.

CMS elicits information about the state's plan for generating representative data for its performance measures in the quality sections of the waiver application labeled "Sampling Approach." These sections require the state to specify whether a performance measure will be based on population data or whether the state will use a sampling approach. If the state opts for sampling, CMS does not require the state to specify the size of the sample, but rather asks the state to specify the sampling parameters that will be used to determine sample size. CMS has certain expectations about the values these parameters must take on for the resultant sample to be considered representative (see Box).
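To show how sampling parameters determine sample size, the sketch below uses Cochran's standard sample-size formula for a proportion with a finite population correction. This is a common statistical approach, not CMS's prescribed method, and the parameter values shown (95 percent confidence, 5 percent margin of error) are illustrative rather than CMS's stated expectations.

```python
import math

def sample_size(population_n, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula for estimating a proportion.

    z=1.96 corresponds to 95% confidence; p=0.5 is the conservative
    (worst-case) assumption about the underlying proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population size
    n = n0 / (1 + (n0 - 1) / population_n)      # finite population correction
    return math.ceil(n)

# Records to review for a hypothetical 1,000-participant waiver.
print(sample_size(1000))  # 278
```

Note that the required sample grows much more slowly than the population: a very large waiver still needs only a few hundred records at these parameter values, which is why sampling is so much cheaper than reviewing every record.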
