1. Generation of Initial Set of Quality Indicators
To generate the initial sets of post-acute care quality indicators for each of the four conditions, we conducted an extensive review of the medical literature, which was supplemented by clinical experience. Because our ultimate aim was to compile a comprehensive list of quality indicators relevant to Medicare post-acute care, we sought to be as inclusive as possible in selecting these initial sets of indicators. Specific determinations as to the degree to which indicators and measures were relevant to a geriatric post-acute population would be made at a later time (both through the expert panels' review and literature review). Similarly, we were less concerned at this stage about the extent to which the indicators we selected would be measurable or feasible to collect; such determinations would be made following the final panel ratings. These lists of quality indicators included both process and outcome indicators. They also included a combination of global quality indicators (those applicable to all four conditions) and condition/disease-specific quality indicators.
Next, we organized the quality indicators by domain to provide a basic structure for reviewing the indicators. These domains included: physical function outcomes, mental health outcomes, quality of life outcomes, utilization outcomes, physiology outcomes, satisfaction outcomes, and process of care. In order to facilitate review of the indicator lists and to avoid redundancy, each indicator was assigned to only one domain. Though one might reasonably argue that some indicators fit more than one domain, assigning an indicator to multiple domains would have defeated the purpose of the domain structure -- to simplify review. To clarify the types of measures denoted by an indicator and to obtain some review of measures, we also included an illustrative set of potential quality measures corresponding to many of the quality indicators.
2. Selection of First Expert Panel
Participants on the expert panels were selected according to the following criteria: (1) representative of the three major post-acute settings (SNFs, HHAs, and rehabilitation hospitals); (2) representative of both managed care and fee-for-service settings; (3) national in scope; (4) inclusive of both generalists and specialists; and (5) representative of multiple disciplines (medical doctors, therapists, and nurses). Our expert panels ultimately comprised a geriatrician, physiatrist, psychiatrist, SNF nurse, HHA nurse, rehabilitation nurse, speech therapist (who participated in the stroke panel meeting only), physical therapist, and a nationally recognized specialist for each of the four clinical conditions. The list of panelists is included in Appendix B.
3. Panel Ratings
The expert panel members were mailed background information about the project along with the initial quality indicator rating forms (see sample rating form, Appendix C). The panel members were asked to review the indicators and individually rate them with respect to the importance of each for assessing quality of care for the specified condition. They were asked to review each indicator separately for each condition using the four condition-specific indicator lists. The indicator ratings were based on the following scale:
0 = of negligible value for assessing quality of care for that condition;
1 = of definite value for assessing quality of care;
2 = extremely important for assessing quality of care.
Reviewers were asked to consider the following criteria to determine the value of each quality indicator: (1) the likelihood that a significant portion of individuals with that condition will experience some change for the outcome indicator; and (2) the sensitivity of the indicator to differences in the quality of care received by individuals between sites. For each condition, reviewers were instructed to assign a rating of "2" to only 20 indicators, yielding each reviewer's "Top 20" quality indicators for that condition. Panel members were also asked to add indicators to the list and to rate these added indicators as well.
In addition to rating the indicators, panel members were asked to rate the quality measures with which they had experience. The measure ratings were based on the following scale:
R = recommended;
NR = not recommended;
Blank = unfamiliar.
Panel members were also asked to add measures to the list and to rate these added measures as well.
4. Consolidation of Ratings
After rating the quality indicators and measures according to the above criteria, the panel members returned their completed rating forms. We calculated an average rating for each indicator and then ranked the indicators in descending order by their average ratings. In preparation for the meetings, we presented this information in two ways for each condition. First, we revised the previously distributed quality indicator rating forms that were organized by domain to include the average rating for each indicator. On this form, we indicated those quality measures that were frequently recommended by panel members. We also included all of the quality indicators and measures that were added by panel members. On a second form, we presented the ranked list of quality indicators in descending order of average rating.
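The consolidation step described above amounts to a simple aggregation. As an illustration only (the indicator names and scores below are hypothetical, not taken from the panel data), the averaging and descending ranking could be sketched as:

```python
from statistics import mean

# Hypothetical ratings: indicator -> one 0/1/2 score per panel member.
ratings = {
    "ambulation recovery": [2, 2, 1, 2, 1],
    "rehospitalization": [2, 1, 1, 1, 0],
    "patient satisfaction": [1, 0, 1, 0, 1],
}

# Average each indicator's ratings across panel members.
averages = {name: mean(scores) for name, scores in ratings.items()}

# Rank indicators in descending order of average rating.
ranked = sorted(averages.items(), key=lambda item: item[1], reverse=True)

for name, avg in ranked:
    print(f"{name}: {avg:.2f}")
```

The same ranked list could then be presented alongside the domain-organized form, as the text describes.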
5. Panel Meetings
The panel meetings took place over a period of two days; one half-day was devoted to discussing each of the four conditions. At the meetings, expert panel members were given the quality indicator rating results and a blank rating form pertaining to the condition under discussion. Panel members were, therefore, aware of the ratings each of the indicators received through the initial panel review. The focus of the panel meetings was on the indicators for which there was the least consensus among the panel members. We selected these more "controversial" indicators for discussion prior to the panel meetings by targeting those with the highest variance in ratings.
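Selecting the "controversial" indicators by rating variance can be sketched as follows. The data, the use of population variance, and the cutoff are all illustrative assumptions; the report does not specify a variance formula or threshold:

```python
from statistics import pvariance

# Hypothetical ratings: indicator -> one 0/1/2 score per panel member.
ratings = {
    "ambulation recovery": [2, 2, 2, 2, 2],   # strong consensus
    "rehospitalization": [2, 0, 2, 0, 1],     # little consensus
    "patient satisfaction": [1, 1, 0, 1, 1],  # moderate consensus
}

# Order indicators from highest to lowest rating variance;
# the highest-variance indicators are flagged for panel discussion.
by_variance = sorted(ratings, key=lambda name: pvariance(ratings[name]), reverse=True)
controversial = by_variance[:1]  # cutoff chosen here for illustration
print(controversial)
```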
During the discussions, panel members often decided to redefine or modify some of the specific indicators for each condition; these modifications are delineated in the expert panel meeting notes (see Appendix D). Within each condition's panel meeting discussion, time was also set aside to discuss the additional indicators previously suggested by panel members, as well as some of the recommended quality measures for various indicators. After the discussion for each condition, panel members were asked to individually rate the quality indicators a second time, this time by selecting the 25 most important quality indicators for the condition. The 25 selected indicators could include any combination of global and disease-specific indicators from any of the seven domains.
6. Final Ratings
We compiled the panels' final ratings, enumerating the indicators for each condition in descending order based on the number of panel members who selected each (see Appendix E). We further refined this list by eliminating the following: (1) indicators that were not direct measures of post-acute quality (e.g., cost measures or risk factors); (2) indicators that were incorporated in or similar to other indicators (e.g., single elements of larger scales, or redundant constructs); and (3) indicators that were not feasible to collect or not measurable (e.g., requiring hard-to-access data sources, or dates that are not readily available). The dropped indicators, and our reasons for dropping each, are included in Appendix E.
Appendix F includes the final list of quality indicators that were selected by at least four panel members; it is organized by global indicators (indicators that were selected by four or more panel members for three or four of the conditions) and disease-specific indicators (indicators that were selected by four or more panel members for only one condition). We used this final list of indicators to develop our post-acute care quality assessment instruments for the four conditions.
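The classification rule described above (selected by at least four panel members; "global" if this holds for three or four conditions, "disease-specific" if for only one) can be sketched as a tally over per-condition selections. The conditions and vote counts below are hypothetical placeholders:

```python
# Hypothetical selections: condition -> {indicator: panel members selecting it}.
selections = {
    "stroke": {"ambulation recovery": 7, "swallowing function": 6, "rehospitalization": 5},
    "hip fracture": {"ambulation recovery": 8, "rehospitalization": 4},
    "CHF": {"ambulation recovery": 5, "rehospitalization": 6},
    "COPD": {"ambulation recovery": 4, "dyspnea control": 5},
}

THRESHOLD = 4  # selected by at least four panel members

# Count, for each indicator, the number of conditions meeting the threshold.
condition_counts = {}
for per_condition in selections.values():
    for indicator, votes in per_condition.items():
        if votes >= THRESHOLD:
            condition_counts[indicator] = condition_counts.get(indicator, 0) + 1

# Global: qualifies for three or four conditions; disease-specific: exactly one.
global_indicators = sorted(i for i, n in condition_counts.items() if n >= 3)
disease_specific = sorted(i for i, n in condition_counts.items() if n == 1)
print(global_indicators, disease_specific)
```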