Randall S. Brown, Peter A. Mossel, Jennifer Schore, Nancy Holden and Judy Roberts
Mathematica Policy Research, Inc.
January 13, 1986
The paper was written as part of contract #HHS-100-80-0157 between the U.S. Department of Health and Human Services (HHS), Office of Social Services Policy (now the Office of Disability, Aging and Long-Term Care Policy (DALTCP)) and Mathematica Policy Research, Inc., and contract #HHS-100-80-0133 between DALTCP and Temple University. Additional funding was provided by the Administration on Aging and Health Care Financing Administration (now the Centers for Medicare and Medicaid Services). For additional information about this subject, you can visit the DALTCP home page at http://aspe.hhs.gov/_/office_specific/daltcp.cfm or contact the office at HHS/ASPE/DALTCP, Room 424E, H.H. Humphrey Building, 200 Independence Avenue, S.W., Washington, D.C. 20201. The e-mail address is: webmaster.DALTCP@hhs.gov. The Project Officer was Robert Clark.
This report was prepared for the Department of Health and Human Services under Contract Number HHS-100-80-0157. The DHHS project officer is Ms. Mary Harahan, Office of the Secretary, Department of Health and Human Services, Room 447F, Hubert H. Humphrey Building, Washington, D.C. 20201. The opinions and views expressed in this report are those of the authors. They do not necessarily reflect the views of the Department of Health and Human Services, the contractor or any other funding organization.
The National Long Term Care Demonstration was established by the U.S. Department of Health and Human Services to evaluate community-based approaches to long term care for the elderly. The channeling demonstration was designed to determine the impact of providing community-based services on costs, utilization of services, informal caregivers, and client wellbeing.
In designing the evaluation of the demonstration, great care was taken to ensure that the results would not be called into serious doubt because of methodological shortcomings. Thus, an experimental design was used, under which eligible channeling applicants in each of the 10 sites were randomly assigned either to the treatment group, which was offered channeling services, or to the control group, which was not. Because of the random assignment, the control group should be very similar to the treatment group on both observable and unobservable characteristics, and its experience should therefore provide the best possible estimate of what would have happened to the treatment group had the demonstration not existed.
One aspect of the evaluation that could, however, raise questions about the accuracy of the estimates of channeling impacts is the fact that impacts can be estimated only for those sample members for whom followup data on outcomes are available. The loss of sample members from the analysis samples entails, in addition to a reduction in sample sizes, the risk that the sample members remaining in the treatment and control groups might differ on observed and unobserved characteristics, leading to biased estimates of channeling impacts.
In order to eliminate effects that attrition might have on the comparability of the treatment and control groups, regression models were used throughout the channeling evaluation to estimate program impacts. This statistical procedure controls for any observed initial differences between the two groups of observations remaining after attrition. However, the use of regression does not ensure that the estimates are free of attrition bias, because it controls only for observed differences between the two groups. Two conditions are required for regression estimates of channeling impacts on a particular outcome variable to be biased as a result of attrition: (1) the presence of unobserved factors that affect both the likelihood of response at followup and the value of the outcome variable at followup, and (2) a different rate or pattern of attrition for the treatment and control groups.
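The regression adjustment described above can be illustrated with a minimal sketch on simulated data. All variable names and numbers below are hypothetical, not those used in the evaluation; the point is only that including baseline covariates alongside the treatment indicator adjusts the impact estimate for observed initial differences.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical baseline covariate (e.g., an impairment score) and
# a randomly assigned treatment indicator.
impairment = rng.normal(0.0, 1.0, n)
treatment = rng.integers(0, 2, n)

# Simulated outcome with a true treatment impact of -1.5 plus a
# covariate effect and noise.
outcome = 10.0 - 1.5 * treatment + 2.0 * impairment + rng.normal(0.0, 1.0, n)

# OLS of the outcome on a constant, the treatment indicator, and the
# baseline covariate; the treatment coefficient is the adjusted impact.
X = np.column_stack([np.ones(n), treatment, impairment])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"adjusted impact estimate: {beta[1]:.2f}")
```

Because this adjustment uses only observed covariates, it cannot by itself remove bias from unobserved factors, which is exactly the limitation the paragraph above notes.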
Because of the differing data needs and sources of data for the various outcomes of interest in the evaluation, many different analysis samples were used. All of the analyses, however, relied to some degree on completed interviews at baseline and/or at the followup interview covering a given six-month interval (ending 6, 12, and 18 months after randomization). The proportion of the full sample included in the various analysis samples was nearly always substantially lower for the control group than for the treatment group in all three time periods, especially in the financial control model. These differences arose primarily because of the large treatment/control difference in response rates at baseline. Thus, one of the conditions required for attrition bias was present. However, despite this difference in rates of attrition, the analysis samples exhibited only minor treatment/control differences on initial screen characteristics.
To investigate whether the primary source of bias in impact estimates, unobserved factors affecting both response and the outcome being examined, was present, two approaches were taken: a heuristic approach and a statistical modeling approach. The heuristic approach made use of the Medicare claims data available for virtually the entire sample on Medicare-covered use of, and reimbursements for, hospitals, nursing homes, and formal community-based services. To gauge the likelihood that there were large differences on unobserved characteristics between the sample observations available for analysis and those that were not, channeling impacts on these Medicare-covered services were estimated for the full sample and then again for the various analysis samples. Estimates of channeling impacts on this partial set of service use measures were generally very similar for the analysis and full samples, which led to the following conclusions: (1) estimated impacts on hospital outcomes were definitely not biased by attrition; (2) estimated impacts on total nursing home and total formal service use (not just that paid for by Medicare) were not likely to be biased; and (3) estimated impacts on other (well-being and informal care) outcomes probably were not biased.
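The logic of the heuristic check can be sketched as follows: estimate the treatment/control difference on a claims-based outcome for the full sample, re-estimate it restricted to interview responders, and compare. The data below are simulated and the variable names hypothetical; in the simulation, response depends on treatment status (as it did in the evaluation) but not on unobservables tied to the outcome, so the two estimates should agree.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

treatment = rng.integers(0, 2, n)
# A claims-based outcome observed for virtually everyone,
# with a true impact of about -1 day.
hospital_days = rng.poisson(5 - 1 * treatment)
# Interview response: controls respond less often than treatments.
responded = rng.random(n) < np.where(treatment == 1, 0.85, 0.70)

def impact(y, t):
    # Simple treatment/control difference in means.
    return y[t == 1].mean() - y[t == 0].mean()

full = impact(hospital_days, treatment)
analysis = impact(hospital_days[responded], treatment[responded])
print(f"full-sample impact: {full:.2f}")
print(f"analysis-sample impact: {analysis:.2f}")
```

Similar estimates across the two samples, as found for the Medicare-covered services, suggest that attrition is not distorting the comparison; a large discrepancy would have signaled selection on unobservables.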
The statistical modeling approach was then used to provide additional evidence on the existence and magnitude of attrition bias. The procedure required estimating a model to predict whether a sample member was in the analysis sample (using all of the observations), and then using the estimated model to construct a new variable for each member of the analysis sample. This new variable, when included as an additional control variable in the statistical (regression) model used to estimate channeling impacts, accounts for the effects of attrition on these estimates.
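The standard version of this two-step procedure is the selection correction in which a probit model of response is estimated first, and the resulting inverse Mills ratio serves as the constructed control variable in the impact regression. The sketch below implements that version on simulated data; the exact specification used in the evaluation is not spelled out here, so the response model, the exclusion variable `excl`, and all parameter values are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
n = 5000

def Phi(x):  # standard normal CDF
    return 0.5 * (1.0 + np.vectorize(math.erf)(x / math.sqrt(2.0)))

def phi(x):  # standard normal density
    return np.exp(-0.5 * np.asarray(x) ** 2) / math.sqrt(2.0 * math.pi)

# Simulated data in which response and the outcome share an unobserved
# factor u, which is exactly the condition under which attrition bias
# can arise. The true treatment impact is 2.0.
treatment = rng.integers(0, 2, n).astype(float)
excl = rng.normal(0.0, 1.0, n)  # affects response but not the outcome
u = rng.normal(0.0, 1.0, n)     # unobserved common factor
responded = (0.3 + 0.8 * treatment + excl + 2.0 * u
             + rng.normal(0.0, 1.0, n)) > 0
outcome = 3.0 + 2.0 * treatment + u + rng.normal(0.0, 1.0, n)

# Step 1: probit model of response, fit by Fisher scoring on all
# observations.
Z = np.column_stack([np.ones(n), treatment, excl])
gamma = np.zeros(3)
for _ in range(25):
    zb = Z @ gamma
    p, d = Phi(zb), phi(zb)
    w = d ** 2 / (p * (1.0 - p))
    score = Z.T @ (d * (responded - p) / (p * (1.0 - p)))
    gamma += np.linalg.solve(Z.T @ (Z * w[:, None]), score)

# Step 2: the inverse Mills ratio, constructed from the fitted response
# model, enters the impact regression as the attrition-control variable.
mills = phi(Z @ gamma) / Phi(Z @ gamma)
r = responded
X_corr = np.column_stack([np.ones(n), treatment, mills])[r]
beta_corr, *_ = np.linalg.lstsq(X_corr, outcome[r], rcond=None)
X_naive = np.column_stack([np.ones(n), treatment])[r]
beta_naive, *_ = np.linalg.lstsq(X_naive, outcome[r], rcond=None)
print(f"impact without attrition term: {beta_naive[1]:.2f}")
print(f"impact with attrition term:    {beta_corr[1]:.2f}")
```

Comparing the two treatment coefficients mirrors the evaluation's comparison of estimates obtained with and without the attrition-control term: if the two are close, attrition bias is small.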
Comparison of the estimates of channeling impacts obtained with and without inclusion of the term to control for attrition showed no major differences in the estimates, for any of the key outcomes examined. A somewhat more general model also yielded results that implied that attrition bias was small or nonexistent.
Finally, the statistical modeling approach and the exploitation of the Medicare data were supplemented by additional specialized analyses of the effects of attrition on estimates of channeling impacts on nursing home use and mortality. A variety of imputation procedures for cases without nursing home use data showed that estimates of nursing home impacts did not appear to be biased by sample attrition. Similar sensitivity tests for mortality estimates led to the same conclusion. This was further supported by the finding that the vast majority of individuals for whom no definite information on death was available (from death records or interview attempts) were in fact alive: they either had Medicare claims for services after the dates on which mortality was measured, or were not found to be deceased in an examination of updated Medicare status files.
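A sensitivity test of this kind can be sketched as re-estimating the impact under several deliberately different imputations for the missing cases and checking whether the conclusion moves. The sketch below uses simulated data and three simple fill-in rules; the actual imputation procedures in the evaluation were more elaborate, and all names and values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4000

treatment = rng.integers(0, 2, n)
# Simulated nursing home days with a true impact of about -2 days.
nursing_days = rng.poisson(np.where(treatment == 1, 6, 8)).astype(float)
# The outcome is missing for a nonrandom subset (controls more often).
missing = rng.random(n) < np.where(treatment == 1, 0.10, 0.20)

def impact(y, t):
    return y[t == 1].mean() - y[t == 0].mean()

estimates = {}
for label, fill in [("zero", 0.0),
                    ("group mean", None),
                    ("maximum observed", nursing_days[~missing].max())]:
    y = nursing_days.copy()
    if fill is None:
        for g in (0, 1):  # impute each group's own observed mean
            y[missing & (treatment == g)] = y[~missing & (treatment == g)].mean()
    else:
        y[missing] = fill
    estimates[label] = impact(y, treatment)

for label, est in estimates.items():
    print(f"{label:>16}: impact = {est:+.2f}")
```

If the estimates agree in sign and rough magnitude across the imputations, as the evaluation found for nursing home use, the conclusion is robust to the missing data.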
The results from these various approaches lead us to conclude that, in spite of the observed treatment/control differences in attrition rates, there is very little evidence that attrition resulted in biased estimates of channeling impacts. The occasional bits of evidence to the contrary were scattered and inconsistent across time periods, models, and outcome variables. Although each of the approaches employed has its flaws, the (rare) availability of substantial information on attriters both before and during the followup period, and the fact that all of the approaches point to the same basic conclusion, provide a high degree of confidence in the inference that attrition has not led to distorted estimates of channeling impacts.