Medicaid and CHIP Risk-Based Managed Care in 20 States: Experiences Over the Past Decade and Lessons for the Future

Quality Standards

07/01/2012

The BBA requires that states develop quality standards for their Medicaid health plans and monitor compliance with those standards. States are required to have an ongoing quality strategy and must also arrange for annual external quality reviews. They must validate performance measures and performance improvement programs. Although the BBA ensures that quality monitoring occurs, it does not specify or define how quality of care should be measured. The result is a wide range of approaches across states, though with many common features.

Through the passage of CHIPRA and the ACA, the federal government is increasingly engaged in efforts to improve quality measurement in risk-based managed care. Both laws call for the development of standard measures of quality for adults (the ACA) and children (CHIPRA). While this effort is already influencing what states collect and what they require of their plans, during the study period for this report (2001–2010) those new metrics had not yet been disseminated. Consequently, the information in this report generally predates those efforts.12

In the absence of a strong federal role in mandating how states monitor quality during the study period, the development of quality monitoring standards and methods was greatly facilitated by the National Committee for Quality Assurance (NCQA), which has undertaken special initiatives to develop approaches that state Medicaid agencies can use in their quality monitoring programs. Since 2007, the Centers for Medicare and Medicaid Services has participated in NCQA committees to provide input on quality measurement approaches. NCQA not only publishes standards, measurement metrics, and methods, but also operates an accreditation program for health plans. Achieving NCQA accreditation means that a plan meets a uniform set of nationwide quality monitoring standards applied to all MCOs, regardless of the populations they serve (commercial or public). The accreditation process also examines plans' utilization management, provider credentialing, and member communication processes. Accreditation is voluntary on the part of plans in most of the study states. However, in 2010 six states (New Mexico, Ohio, Rhode Island, Tennessee, Texas, and Virginia) required all Medicaid MCOs to achieve NCQA accreditation. Two (Florida and Michigan) required plans to achieve accreditation from one of three bodies: NCQA, the Joint Commission on Accreditation of Healthcare Organizations, or URAC; both the Joint Commission and URAC perform functions similar to NCQA's in accrediting health plans. Of the remaining states, six (Arizona, California, Massachusetts, Pennsylvania, Washington, and Wisconsin) do not require accreditation but may use accreditation results in selecting and monitoring plans (NCQA, 2010).

The role of NCQA was described by one state representative in an interview:

[Before the BBA] we used NCQA standards to guide both our contract specifications as well as our monitoring activities. So we would go out on an annual basis and monitor each of our health plans and produce a report based on NCQA standards. In 2003, the BBA went into effect, and BBA mirrored in many ways, in my opinion, NCQA standards, but it didn’t include any kind of instructions about how you monitor against those standards. (State Official)

One of the most commonly used tools developed by NCQA for quality monitoring is the Healthcare Effectiveness Data and Information Set (HEDIS), which, according to NCQA, is used by more than 90 percent of health plans to measure performance. HEDIS is a uniform set of quality measures, accompanied by very detailed instructions for analyzing data in a uniform way to compute the measures. Every health plan that is NCQA accredited must collect currently recommended HEDIS measures according to NCQA specifications. Most other plans (including those that are not accredited) also collect and report on some HEDIS measures, and all study states currently require periodic reporting of some HEDIS measures to the state. Most have required HEDIS reporting throughout the study period. However, this apparent uniformity in reporting masks tremendous variation across states in which measures they require, how the measures are computed, and how they are reported (discussed further below).

An additional tool, the Consumer Assessment of Healthcare Providers and Systems (CAHPS), is also used by all study states to measure beneficiary satisfaction with care provided by health plans and the providers in their networks. This tool was developed by the Agency for Healthcare Research and Quality (AHRQ), which plays a role similar to NCQA's in developing and promulgating the CAHPS measures and methods. In contrast to HEDIS measures, which are generally computed from either claims/encounter or medical record data, CAHPS measures come from periodic surveys of beneficiaries. There are separate surveys for adults and for parents (reporting on their child's care). For children's care, six states (Arizona, California, Florida, New York, Pennsylvania, and Texas) report HEDIS data separately for their CHIP programs, and four (California, Florida, Texas, and Washington) report CAHPS separately. As with HEDIS, there is tremendous variation in what states require, as well as in how often the surveys are conducted and who administers them. All study states but Arizona13 currently require some CAHPS reporting, but (as with HEDIS) measures differ across states and over time. CAHPS reporting usually began later than HEDIS reporting.

There is considerable variation in who collects the HEDIS and CAHPS data. Fifteen of the study states require plans to collect and report HEDIS data, while five states oversee the collection directly (for example, through an NCQA-certified vendor contracted by the state). There is more diversity in who collects the CAHPS data than for HEDIS. Plans are required to conduct the survey in only five states, with the state taking responsibility through various types of vendors in the others. The arrangement may also differ for the adult and child surveys.

Plans do the adult CAHPS, and we do the child CAHPS. NCQA only requires the adult CAHPS, but we wanted the child CAHPS so we pay for it. (State Official)

NCQA plays an important role in CAHPS reporting through its certification of the vendors that collect the data. However, NCQA and AHRQ do not mandate certain critical elements, such as minimum survey response rates, which vary across states and from vendor to vendor. One state reported that a vendor hired by one plan had a 28 percent response rate, while another vendor achieved a 45 percent response rate. In general, CAHPS response rates are relatively low (under 50 percent) for most plans.

Variations in Reporting CAHPS and HEDIS. We requested HEDIS and CAHPS data for the full study period from the 10 largest study states (in terms of Medicaid risk-based managed care enrollment) and one recent year of HEDIS and CAHPS data from the remaining 10 states. This process, along with information gathered in interviews, revealed tremendous variation in data collection and reporting for these quality measures. Remarkably, our review of HEDIS/CAHPS data for the first four states that submitted data to us revealed 202 distinct HEDIS-type measures whose definitions differed in some way over the study period. The types of variation include:

  • Wide variations in definitions for HEDIS quality measures; for example, across the study states we found 10 different childhood immunization measures (e.g., different combinations, periodicity, or age groups) in use at some point during the study period.
  • Variations in who collects the data and in which approach is used to compute HEDIS measures. Some states use only administrative (claims/encounter) data to compute measures, some use a hybrid of administrative and medical record data, and some leave the choice of method up to their plans (see Table 7). Because the hybrid method is known to produce higher rates than administrative data alone (see the illustrative sketch after this list), measures are not comparable across plans, across states, or over time, depending on the data collection method used for a particular HEDIS measure.
Table 7: Number of States Using Alternative Methods for Constructing Medicaid HEDIS Measures in Study States, 2010

Measure                        Administrative Data Only   Hybrid Method   Plan Choice   Not Collected   Total
Timeliness of Prenatal Care               4                     10              5              1          20
Well-Child Visits                         4                     10              5              1          20
Childhood Immunizations                   0                     11              4              5          20
Breast Cancer Screening                  18                      0              0              2          20
Cervical Cancer Screening                 4                      8              5              3          20
HbA1c Screening                           2                     11              4              3          20

Source: Review of state documents and interviews with state officials.
  • For CAHPS, although the questionnaires and sampling methods are recommended by AHRQ, they are not mandated, and there are differences in data collection, as well as in how data are analyzed and reported.
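
To make the comparability problem concrete, the following is a minimal, hypothetical sketch of how the same underlying care can yield different reported rates for a single HEDIS-style measure depending on the data collection method. The member identifiers and counts are illustrative only (they are not drawn from the report), and the sketch ignores the sampling of the eligible population used in the actual hybrid specification; it simply shows why chart review, which can confirm services that never generate a claim or encounter record, tends to raise the reported rate.

    # Hypothetical illustration: administrative-only vs. hybrid HEDIS-style rates.
    # Chart review can confirm services that never appear in claims/encounter
    # data, so the numerator grows while the eligible population stays the same.

    eligible_members = {"A", "B", "C", "D", "E", "F", "G", "H", "I", "J"}

    # Members whose immunizations are documented in claims/encounter data.
    found_in_claims = {"A", "B", "C", "D", "E", "F"}

    # Members whose immunizations are documented only in the medical record,
    # discovered during hybrid-method chart review.
    found_only_in_chart_review = {"G", "H"}

    admin_only_rate = len(found_in_claims) / len(eligible_members)
    hybrid_rate = len(found_in_claims | found_only_in_chart_review) / len(eligible_members)

    print(f"Administrative-only rate: {admin_only_rate:.0%}")  # 60%
    print(f"Hybrid-method rate:       {hybrid_rate:.0%}")      # 80%

In this toy example, the same plan would report a rate 20 percentage points higher under the hybrid method, which is why mixing methods across plans, states, or years undermines comparability.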

An additional issue leading to a lack of uniformity in reporting is one of scale. Small plans may not have a sufficient sample size to meet HEDIS reporting specifications, especially for measures that apply to smaller groups of enrollees, such as diabetics. Many of these and other issues with the variability and quality of reported data are documented in a recent report from NCQA (2010).

External Quality Review Organizations (EQROs). Under the BBA, as required in 2003 regulations, states must use an EQRO to assist them in independently monitoring the quality of care provided by MCOs. More recently, under CHIPRA states are also required to use an EQRO to assist in monitoring CHIP quality of care. The functions of the EQROs vary from state to state. For example, some EQROs monitor the aggregate HEDIS and CAHPS data that plans submit for quality, some receive encounter data and compute HEDIS measures from those data, some do the CAHPS survey and process the data, and some perform other quality functions such as annual site visits.

We have an EQRO assessment of our plans’ operations. Every plan goes through that. It’s an exhaustive process. (State Official)

CHIP-related EQRO functions are still evolving for the states that manage their Medicaid and CHIP quality monitoring programs separately. These states generally (with the exception of Florida) have hired the same EQRO for Medicaid and CHIP. The Centers for Medicare and Medicaid Services has recently released protocols for how EQROs should monitor access and quality under both programs. This is bringing the quality monitoring processes for Medicaid and CHIP MCOs closer together in most of the states where they had previously been separate (see Appendix B, Table 1).

The EQRO plays a more important role in overseeing quality monitoring in states that do not require NCQA plan accreditation. In such states, the EQRO often takes on the function of periodically auditing HEDIS data to ensure that the data are correctly extracted and computed according to state specifications. In contrast, when a plan is NCQA-accredited, this oversight function is performed by auditors hired by NCQA. Still, NCQA oversight does not completely eliminate the need for state and EQRO oversight; for example, states often design their own measures or other quality processes, which must be monitored separately.
