The experimental design of the channeling evaluation was chosen to ensure that the experience of the control group would provide a reliable estimate of what would have occurred to treatment group members in the absence of the demonstration. However, as noted above, attrition from the carefully drawn channeling sample could thwart these intentions if the sample available for analysis after attrition were not comparable for the two groups. Regression models were used in the evaluation to control for observable differences between the treatment and control groups that could arise because of attrition, but estimates may still be biased if the two groups differ on unobservable characteristics. Bias occurs if (1) those sample members for whom data are available differ on unobservable characteristics from those for whom data are not available, (2) those unobservable factors also affect the outcomes of interest, and (3) rates or patterns of attrition differ for the treatment and control groups.
For each of the major areas of analysis in the evaluation, an analysis sample was defined which included those observations in the research sample for which the data necessary for analysis were available. Thus, the following analysis samples were defined:
- 6, 12, and 18 month Medicare samples (hospital outcomes)
- 6, 12, and 18 month nursing home samples (nursing home outcomes)
- 6, 12, and 18 month followup samples (well-being outcomes)
- 6, 12, and 18 month in-community samples (formal and informal care outcomes)
As shown in Table IV.1, the percent of the full sample included in most of these analysis samples was somewhat greater (about 6 to 14 percentage points) for treatments than for controls, especially in the financial control model. Thus, one of the conditions that, in combination with the other two, could lead to bias was present. These differences were due primarily to treatment/control differences in response rates at the baseline interview. However, despite this difference in rates of attrition, the analysis samples exhibited only minor treatment/control differences on initial screen characteristics.
TABLE IV.1. Percent of Full Sample Included in Analysis Samples

| | Basic Model | | | Financial Control Model | | | Full Sample | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | Treatments | Controls | All | Treatments | Controls | All | Treatments | Controls | All |
| Number of Observations in Full Sample | 1,779 | 1,345 | 3,124 | 1,923 | 1,279 | 3,202 | 3,702 | 2,624 | 6,326 |
| 6 Month Outcomes | | | | | | | | | |
| Nursing home sample (percent of full sample) | 72.0 | 67.1 | 69.9 | 80.5 | 67.3 | 75.2 | 76.4 | 67.2 | 72.6 |
| 12 Month Outcomes | | | | | | | | | |
| Nursing home sample (percent of full sample) | 76.4 | 69.5 | 73.4 | 82.0 | 68.9 | 76.8 | 79.3 | 69.2 | 75.1 |
| 18 Month Outcomes | | | | | | | | | |
| Number of Observations in 18-month Cohort | 922 | 697 | 1,619 | 926 | 620 | 1,546 | 1,848 | 1,317 | 3,165 |
| Nursing home sample (percent of cohort) | 69.8 | 68.1 | 69.1 | 78.8 | 64.4 | 73.0 | 74.4 | 66.4 | 71.0 |
To investigate whether impact estimates based on these analysis samples were likely to be biased because of attrition, two types of analyses were performed during the evaluation and reported in Brown et al. (1986): a heuristic approach and a statistical modeling approach. Under the heuristic approach, Medicare data, which were available for virtually the entire research sample, were used to construct several variables measuring the amount of Medicare-covered services used, including hospital days and expenditures, nursing home days and expenditures, and several types of formal community-based and physician services. Channeling impacts on these Medicare-only variables were then estimated on the full sample, and again on the various analysis samples. These two sets of estimates were then compared to determine whether limiting the analysis to observations in the analysis samples produced estimates different from those for the full sample.
For the variables examined, the impact estimates obtained on the analysis samples rarely differed substantively from those for the full sample. This was especially true for the Medicare sample. Since over 98 percent of all hospital use by sample members was covered by Medicare, it was clear that attrition led to no bias in estimated impacts on hospital outcomes. For other outcomes and samples, however, this type of comparison was less compelling: although there were few instances of noteworthy differences between the full and analysis samples on the Medicare-covered variables examined, the Medicare data covered only a fraction of the total use of nursing homes and formal services and contained no information at all on other key outcomes, including well-being and informal care. Thus, it was possible that estimated impacts on these other outcomes would be biased by attrition, even though the estimates on Medicare-covered outcomes were not. Alternative procedures were required to determine whether attrition bias for these outcomes was present.
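The heuristic comparison amounts to estimating the same treatment impact twice, once on the full sample and once on the post-attrition analysis sample, and checking that the two estimates agree. A minimal sketch on simulated data follows; every variable name and parameter value is an illustrative assumption, not the evaluation's actual data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000

# Simulated full research sample: treatment assignment and a Medicare-covered
# outcome (e.g., hospital days) observed for virtually everyone
t = rng.integers(0, 2, n)
y_medicare = 2.0 - 0.4 * t + rng.normal(size=n)

# Attrition unrelated to the outcome: a random subset is retained
retained = rng.random(n) < 0.7

def impact(treat, outcome):
    """Treatment/control difference in mean outcomes."""
    return outcome[treat == 1].mean() - outcome[treat == 0].mean()

full_est = impact(t, y_medicare)
analysis_est = impact(t[retained], y_medicare[retained])

# When attrition is not systematic, the two estimates agree up to sampling error
print(full_est, analysis_est)
```

If attrition were instead driven by factors correlated with the outcome, the two estimates would diverge, which is the signal the evaluation's comparison looked for.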
A statistical model developed by Heckman (1979) to control for the nonrandom selection of an analysis sample was used for this purpose. For each analysis sample, a model was estimated to predict which of the full sample observations were retained in the analysis, as a function of personal characteristics measured on the screening interview. Each estimated "sample inclusion" model was then used to construct for each member of the corresponding analysis sample a new variable that, when included as an additional explanatory variable in the regression equation used to estimate channeling impacts, controls for the effects of attrition. The coefficient on the constructed attrition bias term was then tested for statistical significance to determine whether the condition necessary for regression estimates to be biased by sample attrition was met.
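The two-step logic of this kind of selection correction can be sketched as follows on simulated data. The hand-rolled Newton probit, variable names, and parameter values are illustrative assumptions only, not the evaluation's actual specification (in practice a packaged probit routine would be used, and identification here rests on the nonlinearity of the inverse Mills ratio since no exclusion restriction is imposed):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

# Hypothetical screen characteristic and treatment assignment
x = rng.normal(size=n)
t = rng.integers(0, 2, n)

# Correlated disturbances: u drives sample inclusion, e drives the outcome
rho = 0.5
u = rng.normal(size=n)
e = rho * u + np.sqrt(1 - rho**2) * rng.normal(size=n)

# Retention in the analysis sample follows a probit on the screen characteristic
retained = 0.5 + 0.8 * x + u > 0

# Outcome with a true treatment impact of 0.3, observed only for the retained
y = 1.0 + 0.3 * t + 0.5 * x + e

# Step 1: probit "sample inclusion" model fit on the FULL sample
# (simple Newton/Fisher-scoring iterations)
Z = np.column_stack([np.ones(n), x])
g = np.zeros(2)
for _ in range(25):
    p = norm.cdf(Z @ g).clip(1e-9, 1 - 1e-9)
    d = norm.pdf(Z @ g)
    score = Z.T @ (d * (retained - p) / (p * (1 - p)))
    info = (Z * (d**2 / (p * (1 - p)))[:, None]).T @ Z
    g = g + np.linalg.solve(info, score)

# Step 2: inverse Mills ratio as an extra regressor for the retained sample;
# its coefficient is the attrition-bias term tested for significance
mills = norm.pdf(Z @ g) / norm.cdf(Z @ g)
X = np.column_stack([np.ones(retained.sum()), t[retained],
                     x[retained], mills[retained]])
beta, *_ = np.linalg.lstsq(X, y[retained], rcond=None)
print(beta)  # beta[1]: corrected treatment impact; beta[3]: attrition-bias term
```

A statistically significant coefficient on the constructed term (beta[3]) would indicate that the condition for attrition bias is met; an insignificant one, as the evaluation generally found, would not.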
This procedure was implemented for the 6-, 12-, and 18-month measures of the following key outcomes:
- Nursing home outcomes (nursing home samples)
- whether admitted
- number of days in nursing homes
- nursing home expenditures
- Well-being outcomes (followup samples)
- number of unmet needs
- number of impairments in activities of daily living
- whether dissatisfied with life
- Formal and informal care outcomes (in-community samples)
- whether received care from visiting formal caregivers
- hours of formal in-home care received
- number of visits from formal caregivers
- whether received care from visiting informal caregivers
- hours of care received from visiting informal caregivers
- number of visits from informal caregivers
In general, this procedure yielded very little evidence of attrition bias. The estimated correlations between unobserved factors affecting attrition and those affecting a given outcome variable were typically small and rarely significantly different from zero. Impact estimates obtained from the regressions which included the control variable for the effects of attrition were very similar to the impact estimates obtained without this correction term.
Finally, to ensure that the results obtained from the statistical correction procedure were not distorted by overly restrictive assumptions, Brown et al. (1986) developed a somewhat more general model that would take into account two possible differences between treatments and controls and between models: differences in the relationship between observed (screen) characteristics and attrition, and differences in the covariance between unobserved factors affecting attrition and those affecting the outcome variable under examination. Use of this more general procedure showed (1) that the attrition models were not very different for treatments and controls or for the basic and financial control models, and (2) that although there were some substantive differences between the four treatment/model groups in the correlations between unobserved factors, controlling for them separately yielded no convincing evidence that the unadjusted estimates were biased by attrition.
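The first of these checks, whether the observed attrition process differs by group, can be illustrated with a deliberately simplified linear probability model on simulated data (all names and values are hypothetical; the evaluation itself used probit-type selection models, not this shortcut):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6000

# Hypothetical treatment assignment and screen characteristic
t = rng.integers(0, 2, n)
x = rng.normal(size=n)

# Attrition process identical for treatments and controls (the null of interest)
included = (0.4 + 0.7 * x + rng.normal(size=n) > 0).astype(float)

# Regress inclusion on treatment, the screen characteristic, and their
# interaction: near-zero treatment and interaction coefficients indicate
# that the observed attrition pattern does not differ between groups
X = np.column_stack([np.ones(n), t, x, t * x])
beta, *_ = np.linalg.lstsq(X, included, rcond=None)
print(beta)
```

Substantial treatment or interaction coefficients would instead signal group-specific attrition, the situation the more general model was designed to accommodate.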
Although both the heuristic and statistical approaches led us ultimately to conclude that attrition bias was not a major problem, there were a number of isolated results that, if viewed alone, would have caused greater concern about attrition. To further ensure that no important evidence of attrition bias was being overlooked, the results from the heuristic Medicare data analysis were compared to those obtained from the statistical analyses for each outcome area to see if the alternative approaches both indicated that attrition bias might be a problem for any given set of outcomes. The specific patterns of attrition implied by the two approaches were also compared for consistency.
Estimates of impacts on hospital outcomes were shown conclusively to be unaffected by attrition, based on Medicare data alone. For nursing home outcomes, the Medicare comparison showed no evidence of bias in the estimates, and the only evidence to the contrary from the statistical procedure was two cases in which impact estimates changed in statistical significance. However, in both of these instances, the impact estimates changed only marginally after controlling for the effects of attrition, going from slightly below the critical value for statistical significance to slightly above it (and vice versa). Furthermore, the results that ostensibly controlled for the effects of attrition had the implausible implication that the bias was in one direction at 6 months and in the opposite direction at 12 months, and occurred only in the basic model. Finally, some sensitivity tests were performed which showed that estimates of channeling impacts changed only slightly under a variety of different assumptions about the use of nursing homes by those with missing data. Thus, it seemed clear that estimates of impacts on nursing home outcomes were not biased by attrition, and it was virtually certain that conclusions about the lack of channeling impacts on nursing home use would not change even if some bias did exist.
For well-being outcomes, the Medicare data could provide no direct evidence concerning attrition bias, but comparison of the full and followup sample estimates of impacts on Medicare-covered services suggested that bias was potentially a problem only for the basic model, and only at six months. However, the results from the statistical procedure to measure attrition bias implied that there was no bias in any of the well-being outcome measures examined in any time period for either model.
For formal care outcomes, the in-community sample estimates of impacts on use of Medicare-covered services were very similar to the estimates obtained on the full sample in all three time periods for the financial control model, and at 12 and 18 months in the basic model. However, at 6 months in the basic model, estimated impacts on skilled nursing visits and reimbursements were statistically significant for the analysis sample but not for the full sample. This suggested that the in-community sample estimates of impacts on use of formal care might be overstated in this time period for the basic model because of attrition. However, the statistical significance of the impact estimates did not differ between the two samples for several other outcomes even in this period, nor was the magnitude of the difference that great even for skilled nursing (13 percent of the control group mean for the full sample estimate compared to about 24 percent of the control group mean for the analysis sample estimate). The lack of evidence of bias at 12 months and in the other model led us to doubt further that attrition bias was a major problem for the estimates of impacts of formal care. This conclusion was further supported by the results from the statistical analyses, which indicated an absence of the conditions necessary for attrition bias and strong similarity between impact estimates obtained using the procedure to control for the possible effects of attrition and estimates obtained without such control.
For informal care outcomes the evidence was less clear cut. The above comparison of estimated impacts on Medicare-covered services for the full and in-community samples suggested that attrition from the in-community sample used in the informal care analysis was not systematic. However, because the Medicare claims lack data on informal care outcomes, this analysis provided only weak evidence that no bias occurred in estimates of impacts on informal care. The results from the initial statistical procedure showed no evidence of bias, but the other, less restrictive statistical approach of controlling for attrition effects led to results that implied serious bias in the estimates for both channeling models. Whereas the unadjusted results implied no effect of channeling on informal care in the basic model, and (at most) modest reductions in the financial control model, the latter adjusted estimates showed large, statistically significant reductions in informal care in the basic model and no reductions in the financial control model. Also, both the Medicare and more general statistical approaches implied similar patterns of attrition, i.e., that the systematic attrition occurred mainly for the treatment group in the basic model. However, a number of factors were identified by Brown et al. which suggested that this result was a statistical anomaly rather than credible evidence of severe attrition bias. Hence, we concluded that informal care impact estimates were probably not biased by attrition either.
The two approaches used in this analysis of attrition each have their flaws. The heuristic approach of seeing how estimated impacts on some variables changed when the analysis was restricted to a subset of the full sample is appealing because it provides a direct measure of attrition bias, albeit for variables other than those in which we are most interested. Reliance on these results as proof that there is no attrition bias in the estimated impacts on those outcomes in which we are interested requires belief that any unobserved factors affecting both attrition and the outcomes of interest also affected the Medicare outcomes. Although this assumption may be plausible, it obviously cannot be verified.
The statistical approach is also appealing, but for different reasons: it pertains to precisely the outcome variables of interest, provides a direct test of whether there is bias in the estimates obtained on the analysis sample, and also offers a way to obtain unbiased estimates of impacts on any outcome. The more general model developed and used in Brown et al. adds to the attractiveness of this approach by making the results sensitive to potentially different observed and unobserved patterns of attrition for treatment and control groups. However, in either statistical model the estimates may be quite sensitive to the assumptions of the model (bivariate normal disturbance terms in the outcome and sample inclusion equations), may reflect other nonlinear relationships between the outcome and control variables that have nothing to do with attrition, and are sensitive to collinearity between the correction term and the control variables in the outcome equations.
Despite these flaws, the analyses that were conducted on attrition from the channeling sample greatly exceed what is normally done, or is possible to do, to examine attrition bias, because the data available from the screen and Medicare claims on nonrespondents greatly exceed what is usually available on sample dropouts. By definition, it is never possible to know with certainty what results would have been obtained had no sample attrition occurred. The heuristic and statistical approaches were the best methods available to assess the effects of attrition on our impact estimates, and both approaches provided convincing evidence that the inferences drawn from the analysis samples about the existence and magnitude of channeling impacts were no different from what would have been drawn had the full sample been available for analysis.