Previous studies have provided mixed evidence regarding the effectiveness of nursing home quality improvement programs similar to the TA programs that we studied. A CMS study (1998) evaluated two nursing home quality improvement programs that were accompanied by reasonably strong evaluation designs. One program, an extremely labor-intensive intervention to reduce incontinence, reduced incontinence rates, but these gains were not sustained after the external research staff stopped providing feedback to the participating nursing homes. The study found evidence that the other intervention, the Ohio Pressure Ulcer Prevention Initiative, was not effective. A Commonwealth Fund evaluation of the Wellspring quality improvement model27 found several positive outcomes (e.g., improvement on federal survey and lower staff turnover), but there was no clear evidence of improvements in clinical outcomes based on Minimum Data Set (MDS) quality indicators. These results suggest that it may be difficult to change the organizational and care practices within nursing homes that affect resident outcomes.
However, it is not possible to tell whether the mixed results of these previous evaluations reflect an actual inability of the programs to improve quality or an inability of the available data to detect changes that may have occurred. A major challenge in measuring the effectiveness of any nursing home intervention is the difficulty of constructing valid quality measures. Absent primary data collection, the two data sources available for measuring program effectiveness are the MDS and survey deficiency data. Both of these data sources have significant limitations for measuring quality of care, making it nearly impossible to draw definitive conclusions about the impact of specific interventions. These data limitations also limit the ability to compare the relative impact of nursing home programs with a quality improvement focus vs. those that focus on the survey and certification process.
The MDS has two potentially significant types of limitations:
The MDS may not contain the items that would be required to measure quality adequately because it is not a comprehensive clinical documentation system. Harris et al. (2003) note that the construction of quality indicators and quality measures is constrained by the availability of data within the MDS, which in turn is constrained by the MDS's limited clinical content.
The MDS data may not be accurate. Several studies have identified serious accuracy problems with MDS data. Abt Associates (2001) reported that MDS error rates average 11.6 percent for all MDS items. Similarly, a study conducted by the Office of the Inspector General (OIG) (2001) found errors on 17 percent of the MDS data elements.
As noted by Walshe (2001), differences in deficiency rates across states (or regions within states) and changes in deficiency rates across time may reflect real differences in quality of care.28 But they also may be the result of differences in the stringency, scope, or implementation of the survey process.29 It is not possible to disentangle these two effects. According to an OIG report (1999), inconsistency in the survey process results from unclear guidelines that may contribute to different interpretations by surveyors when citing deficiencies, differences in the level of supervisory review for survey reports, and high turnover among surveyors.
Due to these data limitations, little is known about the effectiveness of either TA programs or the survey and certification process, or about whether quality is improved more by investments in quality improvement or enforcement programs.30
27. The Wellspring quality improvement model is very labor intensive and incorporates just about every intervention that plausibly could impact quality. It has two primary goals: (1) to make the nursing home a better place for residents to live by improving the clinical care provided to residents and (2) to create a better working environment by giving employees the skills that they need to do their jobs. (See http://www.cmwf.org/programs/elders/stone_wellspringevaluation_550.pdf.)
28. Nationally, the average deficiency rate for nursing homes surveyed in 2001 was 6.2 per nursing home; this ranged from 2.9 deficiencies per nursing home in Vermont to 11.2 deficiencies in California (Source: OIG, 2003).
29. An OIG review of 310 survey reports revealed that different deficiency tags are being used to cite the same problem. In five of the six standard surveys observed, the OIG found inconsistency across surveyors in how deficiencies were cited, as well as differences across states in how many deficiencies are cited for a single problem of non-compliance.
30. While the CMS study found clear evidence that the changes to the survey and certification process introduced as part of OBRA 87 produced some important improvements in nursing home quality, this finding is not relevant for assessing whether the marginal impact of additional resources on quality is higher for enforcement-oriented programs or for quality improvement (TA) programs.