
See Carcagno et al. (1986, Chapter VI) for a detailed description of the eligibility criteria.

See Phillips et al. (1986, pp. 39-46) for a detailed description of the randomization procedure used.

See Carcagno et al. (1986, Chapter VIII) for some statistics on the proportion of treatment group members who were terminated from channeling, the reasons for termination, and the points at which termination occurred.

See Phillips et al. (1986) for complete documentation of interview data collection procedures.

In addition to these surveys of sample members, there were also surveys of the primary caregivers of a subset of the sample members. Data from these surveys were used primarily in the evaluation of the effects of channeling on caregivers. Methodological issues related to this sample are examined in Christianson (1986).

How this difference affected the comparability of the baseline data for the two groups is summarized in Section B of Chapter IV in this report.

See Phillips et al. (1986) for a discussion of the 18-month cohort and interview.

See Phillips et al. (1986) for a detailed description of the provider records data.

We also estimated channeling impacts on the "survivor" samples, the subset of the nursing home sample consisting of sample members who were alive at the beginning of the period being analyzed.

Restricting the sample used for analysis to those living in the community during the reference week yields more meaningful estimates of service use, since sample members who were dead or in a hospital or nursing home during the reference week were by definition receiving no formal or informal community care. However, this restriction would yield biased estimates of program impacts on formal and informal care if the program affected the mortality or the hospital or nursing home use of sample members. Given the lack of channeling impacts on these other outcomes, we were able to use the in-community sample.

The original sampling plan called for a research sample with an equal number of observations from each of the ten sites, with observations in each site split equally between treatment and control groups: about 300 treatments and 300 controls in each site, with about 240 of each group expected to be available for analysis (assuming 80 percent response rates). Our estimate was that in all but the smallest sites the supply of eligible applicants during the 12-month intake period over which the sample was to be drawn would exceed the number of observations required. Thus, in the smallest sites, applicants were assigned to treatment or control status on an equal basis, but in medium sites three-fifths were assigned to treatment status and two-fifths to control status, with only two-thirds of the treatment group to be included in the research sample. In the largest sites, one-third of the applicants were assigned to control status and two-thirds to treatment status, with only half of the latter group to be included in the research sample. However, caseload buildup in several sites was slower than projected during the first five months of intake, so those treatment group members not originally intended for the research sample were in fact included in the analysis in order to achieve the desired total sample size. As a result, sample sizes differed across sites and the treatment group was larger than the control group in the medium and large sites. See Kemper et al. (1982, pp. 37-39) for a more detailed explanation.

It can be shown that if a regression of outcomes on a treatment/control variable and a set of binary site variables (with no control variables) is estimated, the coefficient on the treatment variable will be a weighted average of the treatment/control differences in the individual sites, with the weight for any site depending only on the proportion of all observations coming from that site and the ratio of treatments to controls in that site.
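This identity can be verified with a small simulation on synthetic data (all numbers below are illustrative inventions, not channeling data). With sample treatment proportion p_s in site s, the weight works out to be proportional to n_s * p_s * (1 - p_s):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: three "sites" with unequal sizes and unequal
# treatment/control ratios, mimicking the channeling design
n_per_site = [60, 80, 100]
p_treat = [0.5, 0.6, 0.4]
site = np.repeat([0, 1, 2], n_per_site)
t = (rng.random(site.size) < np.array(p_treat)[site]).astype(float)
y = 1.0 + 0.5 * site + 2.0 * t + rng.normal(size=site.size)

# Regression of outcomes on the treatment dummy plus binary site
# variables (no other controls)
X = np.column_stack([t] + [(site == s).astype(float) for s in range(3)])
b_treat = np.linalg.lstsq(X, y, rcond=None)[0][0]

# Weighted average of per-site treatment/control mean differences,
# weighting site s by n_s * p_s * (1 - p_s)
diffs, w = [], []
for s in range(3):
    m = site == s
    diffs.append(y[m & (t == 1)].mean() - y[m & (t == 0)].mean())
    p_s = t[m].mean()
    w.append(m.sum() * p_s * (1 - p_s))
w = np.array(w) / np.sum(w)
assert abs(b_treat - float(w @ diffs)) < 1e-10
```

The agreement is exact (up to floating-point error) because it is an algebraic identity, not an approximation.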

Another procedure used by some analysts is multiple classification analysis (MCA), which is simply a regression model in which all of the control variables are categorical or qualitative (i.e., discrete). Because a few of our control variables are continuous, we use regression instead.

A number of alternative specifications could be used, the most general of which would be separate regression equations for the treatment group and the control group in each site. See section D of Chapter V for a comparison of estimates obtained from the above model to those obtained from more general models.

There are econometric procedures to estimate such models. However, to obtain such estimates some screen/baseline explanatory variables must be excluded from some equations but not from others. If the analyst excludes from a given equation explanatory variables that truly belong, the coefficient estimates will be biased. Thus, unbiased estimates are obtained from these procedures only if the analyst correctly specifies the interrelationships among all of the endogenous variables and which exogenous variables affect which outcomes directly.

Inclusion of too many explanatory variables can result in a high degree of collinearity among them, which typically reduces precision and can produce anomalous estimates. However, this is unlikely to create much of a problem for the coefficients on the treatment variables, since random assignment ensures that treatment status will not be highly correlated with any of the regressors. Thus, the precision of the estimated treatment/control differences will not be seriously diminished by multicollinearity.

These variables indicate whether sample members had a recent hospital or nursing home stay and serve as a proxy for serious health problems.

The results obtained using binary site variables are exactly equal to what would be obtained if all variables (dependent and independent) were transformed into deviations around their site-specific means. Obviously, this nets out the effects of any site-specific factors on outcomes.
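This equivalence (the Frisch-Waugh-Lovell result) can be checked numerically on made-up data; the variables and coefficients below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(1)
site = np.repeat(np.arange(3), 50)
t = rng.integers(0, 2, site.size).astype(float)   # treatment dummy
x = rng.normal(size=site.size)                    # a control variable
y = 0.5 * site + 2.0 * t + x + rng.normal(size=site.size)

def within(v):
    """Deviation of v from its site-specific mean."""
    out = v.astype(float).copy()
    for s in np.unique(site):
        out[site == s] -= v[site == s].mean()
    return out

# (a) regression including binary site variables
Xa = np.column_stack([t, x] + [(site == s).astype(float) for s in range(3)])
ba = np.linalg.lstsq(Xa, y, rcond=None)[0][:2]

# (b) all variables transformed into deviations around site means
Xb = np.column_stack([within(t), within(x)])
bb = np.linalg.lstsq(Xb, within(y), rcond=None)[0]

assert np.allclose(ba, bb)   # identical treatment and control coefficients
```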

For convenience of interpretation, we actually include a binary variable indicating whether the sample member resided in a basic or financial control model site and 8 binary site variables (4 for each model). This is exactly equivalent to a specification with 9 binary site variables and no "model" variable but renormalizes the coefficients on the site variables. With the binary model variable included, the coefficient on a given site variable represents the regression-adjusted difference in mean outcomes between that site and the excluded site from the same model. With 9 site binaries, the coefficient on any one is interpreted as the difference in mean outcomes between that site and the only excluded site.

In preliminary analyses separate variables were used for income and Medicaid eligibility. However, because of the high correlation between the two variables it was difficult to interpret the coefficients. Hence, a composite variable was defined.

Four sets of mean values were computed and used for imputations, one each for treatments and controls in each model. Thus, the value imputed for any given observation depended on the treatment group and model to which the observation belonged. In addition, for variables such as hours of formal and informal care the value imputed depended upon whether the individual was known to have received some care. The conditional mean hours per recipient were imputed to those known to have received some care.

This problem could have been averted had each local channeling program implemented both models, with eligible applicants randomly assigned to the basic model, the financial model, or the control group. However, implementing both models in every channeling program would have led to serious problems as clients assigned to the basic model observed the much greater services provided to the clients who were under the financial model. Furthermore, making the complicated interagency arrangements necessary to set up the funds pool for the financial model in twice as many sites would have created a financial burden for the demonstration.

This tradeoff between type I and type II errors is essentially reversed for informal care outcomes. That is, whereas the "conservative" approach for other outcomes is to ensure a low probability of erroneously concluding that there were channeling impacts where none exist (i.e., a low probability of type I errors), the conservative approach for informal care is to avoid concluding that channeling had no effect on informal care when in fact it did result in reductions in such care. This difference arises because, unlike other expected effects of channeling, the hypothesized reductions in informal care are generally regarded as adverse effects, because they imply that informal caregivers were substituting expensive formal care for their own time. Therefore, estimated reductions in informal care that were large, even if not statistically significant at the .05 level, were discussed in the report on informal impacts (Christianson, 1986).

In order to estimate this model, binary variables for one of the categories of each of the classifying variables (X_{1}) were excluded from the model. (The results were unaffected by the choice of which category is dropped.) In some cases, data were missing on one or two classifying characteristics. For each of the 4 characteristics for which such missing data occurred, a separate binary variable indicating whether the necessary information on that characteristic was missing was included in X_{1}. Estimated coefficients on these indicator variables were ignored; the procedure was intended solely to retain those observations with a small amount of missing information without assuming (perhaps erroneously) a value for the missing characteristic.
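The dummy-plus-missing-indicator construction can be sketched as follows; the variable `age_group` and its categories are hypothetical, standing in for the classifying characteristics in X_1:

```python
import pandas as pd

# Hypothetical classifying variable with one missing value
df = pd.DataFrame({"age_group": ["65-74", "75-84", None, "85+", "75-84"]})

# Binary variables for all but one (excluded) category; a row with a
# missing category gets zeros on every dummy
dummies = pd.get_dummies(df["age_group"], prefix="age", drop_first=True)

# A separate indicator flags the missing observation, so it is retained
# in the estimation sample without assuming a value for the characteristic
df["age_missing"] = df["age_group"].isna().astype(int)

X = pd.concat([dummies.astype(int), df["age_missing"]], axis=1)
assert X.shape == (5, 3)              # 2 category dummies + 1 missing flag
assert X.loc[2].tolist() == [0, 0, 1]  # missing row: no category, flag on
```

The coefficient on the missing flag is then simply ignored, as in the note.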

An alternative way to define subgroups would involve using combinations of these 8 (or other) characteristics, e.g., individuals who live alone and are on Medicaid. Impacts for a number of multidimensional subgroups are examined in Grannemann et al. (1986).

Another type of pooling that was considered was pooling the control groups from the two models. However, this would undo much of the benefit of randomization in that the control group would be obtained from 10 sites and the treatment group from only 5 of these sites for each model. Actual program effects would be confounded with differences among the sites in the estimates of channeling impacts. Coefficients on the binary site indicators in the regressions were nearly always large and statistically significant; hence, formal tests of whether control groups could be pooled would have failed for virtually every outcome measure examined.

The case management measure used was not available at 18 months.

Two measures of whether informal care was received were examined: receipt of any informal care, and receipt of care from a visiting informal caregiver.

Impact estimates at the model level were obtained from the separate, unpooled equations by first using the latter to compute predicted outcomes (at the sample mean of the client characteristics used in the regression) for treatments and for controls in each site, then taking a weighted average of the treatment/control difference in expected outcomes at the five sites comprising the model.
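A numerical sketch of this two-step procedure, on synthetic data for five sites. The note does not specify the weights used in the final averaging; the sketch assumes site sample-size weights purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

site_effects = []   # treatment/control difference in predicted outcomes
site_n = []

for s in range(5):  # the five sites comprising one model
    n = 100 + 20 * s
    t = rng.integers(0, 2, n).astype(float)
    x = rng.normal(size=n)                 # a client characteristic
    y = 1.0 + (1.5 + 0.1 * s) * t + 0.8 * x + rng.normal(size=n)

    # separate (unpooled) regression for this site: y ~ 1 + t + x
    X = np.column_stack([np.ones(n), t, x])
    b = np.linalg.lstsq(X, y, rcond=None)[0]

    # predicted outcomes at the sample mean of x, for t = 1 and t = 0;
    # in a linear model their difference is just the t coefficient
    xbar = x.mean()
    diff = float(b @ [1.0, 1.0, xbar] - b @ [1.0, 0.0, xbar])
    site_effects.append(diff)
    site_n.append(n)

# weighted average across the five sites (weights assumed, see lead-in)
w = np.array(site_n) / sum(site_n)
model_impact = float(w @ site_effects)
```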

The 530 impact estimates arise from examining 18 outcome variables at 6 and 12 months and 17 variables at 18 months, with impacts computed for each of the 10 sites. These 18 variables are the same as the 14 examined above, except that "living arrangements" (in the community, a hospital, a nursing home, or deceased on the follow-up reference date) is treated here as 4 separate variables rather than as one for testing purposes, and the life satisfaction variable is treated as two variables (whether very satisfied, whether somewhat satisfied) rather than as one.

Before comparing the impact estimates for the early and late cohorts, we compared the two groups on baseline and screen characteristics to determine whether they differed in composition prior to entering the sample. We found that the early and late cohorts differed very little for the control group, but somewhat more for the treatment group.

Some of the regression estimates presented in this section differ somewhat from those presented in final channeling reports because of various changes in samples or variables between the early analysis performed to address methodological issues and the final analysis.

Probit and logit estimates of the effects of a given explanatory variable on the dependent variable are virtually indistinguishable from each other in most applications. We have used probit in this comparison because it was somewhat easier to obtain the desired test statistics from our probit program than our logit program.

The disturbance term is subtracted from rather than added to the equation to facilitate the interpretation of e as a threshold (see text). Obviously, the sign on e and its interpretation could both be changed with no effect on the results.

This report is available free of charge from the Office of the Assistant Secretary for Planning and Evaluation, Division of Disability, Aging and Long-Term Care Policy, Department of Health and Human Services, HHH Building, 200 Independence Avenue, S.W., Washington, D.C. 20201.

These instruments were used for both research and clinical purposes. After research sample intake was completed, a clinical version of the instrument was developed by Temple University's Institute on Aging and is available from the Institute.