U.S. Department of Health and Human Services
This report was prepared under contract #HHS-100-80-0157 between the U.S. Department of Health and Human Services (HHS), Office of Social Services Policy (now the Office of Disability, Aging and Long-Term Care Policy) and Mathematica Policy Research, Inc. For additional information about the study, you may visit the DALTCP home page at http://aspe.hhs.gov/daltcp/home.shtml or contact the office at HHS/ASPE/DALTCP, Room 424E, H.H. Humphrey Building, 200 Independence Avenue, SW, Washington, DC 20201. The e-mail address is: webmaster.DALTCP@hhs.gov. The DALTCP Project Officer was Robert Clark.
There has been rapid growth of public and private expenditures for long term care--a term which encompasses any of a variety of services (including personal and medical care and assistance with household management) usually provided in the home and designed to help the recipient continue to live in the community. The expectation that these expenditures would rise even more as a result of both demographic trends and increases in health care costs has focused attention on ways to control long term care expenditures while still providing adequate care to those in need. In 1980, the United States Department of Health and Human Services (DHHS) funded the National Long Term Care Demonstration to conduct a rigorous test of one such approach--the provision of case managed community-based services to meet the long term care needs of the functionally limited elderly--called Channeling.
This report documents the design and implementation of the surveys which collected data for evaluation of the Channeling demonstration. In doing so, it introduces a number of topics and results which have implications for the evaluation. Discussion of these implications is outside the scope of this report; separate technical reports in which such implications are discussed in detail will be completed as part of the evaluation.
This Executive Summary begins with an overview of the demonstration. It then presents an overview of the research issues and design, discusses the implications of the design for data collection, summarizes the data collection activities and instruments, describes the links among the data collection efforts, and reviews the chapters of the full report entitled: The Evaluation of the National Long Term Care Demonstration Survey Data Collection Design and Procedures.
The intended target population of the Channeling demonstration was the elderly with severe functional limitations who required long term care services for an extended period of time and who, in the absence of Channeling, were at high risk of being institutionalized. For this group of people, the goal of Channeling was to bring about more effective and efficient provision of community-based long term care services. The basic hypothesis to be tested was that the primary problems of the long term care system were lack of information, uncoordinated services, and perverse financial incentives--rather than the lack of direct services themselves. With respect to direct service use, the objective of Channeling, through case management, was to facilitate substitution of services provided in the community--both formal services and the informal care provided by family and friends--for institutional care, wherever community care was appropriate. This substitution was intended, in turn, to reduce costs and to improve the quality of life of its clients.
To accomplish these ends, the Channeling demonstration tested two variants of a managed system of long term care in 10 communities around the country. The two variants, called the basic case management model and the financial control model, were each tested in five sites.1 A set of core functions was common to both models, with additional features (that differed by model) intended to enhance Channeling's ability to intervene in the existing long term care system.
Core Channeling Functions. The major impacts of Channeling on clients, caregivers, and costs were to be effected principally by managing individual clients' service use, particularly by reducing the use of institutions. To achieve this objective, the designers of Channeling prescribed seven essential functions (Gottesman 1981):
Outreach to identify and attract the target population;
Screening to determine whether an applicant was part of the target population;
Comprehensive needs assessment to determine individual problems, resources, and service needs, using a structured assessment instrument;
Care planning to specify the types and amounts of care necessary to meet the identified needs of each client;
Service arrangement to implement the care plan through both formal and informal providers;
Monitoring to assure that services were provided as called for in the care plan or modified as necessary; and
Reassessment to adjust care plans to changing needs. Reassessment was to be done three months after program entry and every six months thereafter (or more frequently if a client's status changed).
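The prescribed reassessment cycle amounts to a simple date calculation, sketched below (illustrative only: months are approximated as 30 days, and actual reassessments could also be triggered by a change in a client's status):

```python
from datetime import date, timedelta

def reassessment_schedule(entry_date, horizon_days=540):
    """Return the scheduled reassessment dates for a client: the first
    3 months after program entry, then every 6 months thereafter."""
    schedule = []
    next_due = entry_date + timedelta(days=90)    # 3 months after entry
    while (next_due - entry_date).days <= horizon_days:
        schedule.append(next_due)
        next_due += timedelta(days=180)           # every 6 months thereafter
    return schedule
```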
These constituted the minimum set of functions deemed necessary to affect service use and ultimately client outcomes. In the Channeling demonstration, as noted, this minimum intervention was enhanced by additional features, some designed to increase Channeling's ability to affect service use, others to limit the resources used.
Additional Model Features. The basic case management model included one additional feature designed to improve access to existing services. Each basic case management project had limited discretionary funds that case managers could use to purchase community-based direct services in order to complete a care plan by overcoming existing service or program gaps.2
The financial control model included six additional features, which together constituted not only a substantially more powerful influence on service access and use but also a cost controlling mechanism. These were:
expanded service coverage;
pooling of government funds (Funds Pool)3 from categorical programs for use by Channeling irrespective of categorical eligibility;
authorization power over community-based services;4
cap on average expenditures;
limits on costs of individual care plans; and
cost sharing by clients.5
Implementation of the Demonstration. State agencies, with overall responsibility for demonstration activities in their respective states, were the prime contractors to DHHS. They in turn made arrangements (typically by subcontract) with local government or nonprofit agencies at the demonstration sites, which acted as host agencies for the Channeling project in their site. The sites and host agencies selected for testing the Channeling interventions are shown in Table 1. The major milestones in the planning and implementation of the demonstration are shown in Table 2.
It is important to note that the demonstration was tested at a judgmentally selected sample of sites (in that they applied to participate in the demonstration and were selected by DHHS based in part on their ability to implement the demonstration), not at a random selection of sites. The sites do represent a broad range of demographic and service environments. However, it should be kept in mind that the particular nature of the selection process may have led to greater sophistication or innovativeness about community care or to a richer community service environment in the demonstration sites than in the nation as a whole.6
This section briefly reviews the research questions that were the objectives of the evaluation effort and the experimental approach that was the cornerstone of the research design.
Given that the intended impact of Channeling was to achieve less costly, more efficient use of services, and improved client and caregiver outcomes through improved matching of client needs and services, the research was designed to address the following impact questions:
What was Channeling's impact on formal and informal service use?
Did Channeling reduce the use of nursing home care?
Did Channeling reduce the use of hospital care?
Did Channeling increase the use of formal community-based health and long term care services (including case management)?
Did Channeling increase or decrease the amount of informal care provided by family and friends?
What was Channeling's impact on the costs of long term care?
Did Channeling increase or decrease public costs?
Did Channeling increase or decrease private costs?
What was Channeling's impact on individuals?
Did Channeling reduce mortality rates?
Did Channeling improve social and psychological well-being?
Did Channeling reduce unmet needs and increase satisfaction with services provided?
Did Channeling reduce the rate of deterioration of functioning?
What was Channeling's impact on informal caregivers?
Did Channeling increase or decrease caregiver stress and well-being?
Did Channeling increase or decrease caregivers' satisfaction with the care received by the elderly individual to whom they gave assistance?
Did Channeling increase or decrease the income and employment of caregivers?
Did Channeling increase or decrease the financial support provided by family and friends?
In addition to addressing impact issues, the research was designed to address implementation and cost-effectiveness issues:
How was the project implemented in each site?
What were the characteristics of the environments in which Channeling was implemented?
What were the characteristics of Channeling clients?
What were the costs of Channeling?
What approaches would be most effective for implementing future programs like Channeling?
Was Channeling a cost-effective long term care policy intervention?
To determine the impacts of the demonstration, the evaluation used an experimental methodology to establish how what was observed under Channeling differed from what would have happened under the existing long term care system in Channeling's absence. To achieve this objective, the Channeling evaluation incorporated random assignment of eligible applicants to the treatment group or to the control group in each of the 10 demonstration sites. Differences between the outcomes of the two groups provide estimates of the impacts attributable to Channeling.
This experimental methodology was implemented in the Channeling demonstration as follows. To enter the project, individuals who were referred to or applied to Channeling were screened to determine whether they were eligible and interested in participating in Channeling. If so, they were randomly assigned to either a treatment group (and thus had the opportunity to participate in Channeling), or a control group (the members of which continued to rely on the existing long term care system). Such an experimental design is a powerful methodology for isolating program impacts for two reasons. First, the control group establishes what would have happened in the absence of Channeling in that its experiences embody any changes that occurred because of trends over time (such as improved training of health care professionals) or policy changes (such as changes in reimbursement policies under Medicare and Medicaid). Second, random assignment ensures that the treatment and control groups are similar with respect to not only measured characteristics but also unmeasured ones that may affect outcomes.
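The logic of random assignment and the treatment-control comparison can be sketched in a few lines (a purely illustrative sketch with hypothetical identifiers and outcome values; the demonstration's actual assignment and analysis procedures were far more elaborate):

```python
import random

def randomize(applicant_ids, seed=0):
    """Assign each screened, eligible applicant to treatment or control
    at random (a 50/50 split is assumed here for illustration)."""
    rng = random.Random(seed)
    return {a: rng.choice(["treatment", "control"]) for a in applicant_ids}

def impact_estimate(outcomes, assignment):
    """Estimate program impact as the difference in mean outcomes
    between the treatment and control groups."""
    treat = [outcomes[a] for a, g in assignment.items() if g == "treatment"]
    ctrl = [outcomes[a] for a, g in assignment.items() if g == "control"]
    return sum(treat) / len(treat) - sum(ctrl) / len(ctrl)
```

Because assignment is random, measured and unmeasured characteristics are balanced across the two groups in expectation, so the difference in mean outcomes estimates the program's impact rather than preexisting differences between the groups.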
The research design, specified in DHHS' request for proposals for the evaluation and further developed during the design phase of the project, presented a number of critical issues for the data collection effort.7 To identify the eligible target population from among those elderly persons who were referred to the demonstration, a screening approach was required that was operationally feasible, sufficiently sensitive, and able to provide a standardized set of data for the research. Randomization of the sample raised sensitive political and operational issues which had to be resolved in close coordination with the demonstration projects, because their support of the experimental methodology and cooperation in the random assignment process were critical to its success. Screening and randomization are discussed in greater detail in Chapter II of the full report.
The original survey data collection plan for the elderly sample, specified in the request for proposals, was that all data (baseline and followup) on the treatment group would be collected by Channeling project staff and all data on the control group would be collected by research staff. However, data collection by two groups raised a number of serious analytical and practical problems for the research, involving the potential noncomparability of treatment and control group data collected under this design and the differing needs of practitioners and researchers. Although the evaluation contractor had identified these problems in its proposal and had proposed a solution, it became clear in the early months of the design phase that the proposed solution was inadequate, and the issue was raised again with DHHS in January 1981.
The cornerstone of the evaluation design for the demonstration, as noted, was random assignment of individuals to treatment or control groups, so that the control group could provide the basis for comparing the intervention to the status quo. This requires that data for treatment and control group members be comparable. To prevent potential noncomparability, the evaluation contractor proposed that research interviewers administer periodic research interviews to all sample members. After extensive discussion, this recommendation was accepted for the followup interviews, with agreement that case managers would periodically reassess client needs as a separate activity. This was an important decision because data comparability was even more critical for followup than for baseline data, as the followup interviews collected data on the outcome variables. The recommendation was not accepted for the baseline, however, because having practitioners develop a care plan for Channeling clients from a baseline completed by research interviewers would have violated the assessment procedures of normal clinical practice.
Several alternatives were considered. One was to have both a research baseline and an initial clinical comprehensive needs assessment completed for all treatment group members; however, the cost and respondent burden were prohibitive. Another was to have clinicians administer the research baseline to control group members, but such a solution ran the substantial risk that in doing so clinicians would provide control group members with some case management services that they would not otherwise have received. If this happened, the control group would no longer represent the status quo, violating the principles of the randomized design that were critical to the success of the evaluation. The decision finally reached was that clinicians would conduct baseline assessment interviews with all Channeling clients (that is, the treatment group), while research interviewers would administer the baseline to control group members and to treatment group members who were terminated from the Channeling projects prior to administration of the baseline assessment instrument.
Special efforts were made, therefore, to standardize training on the baseline instruments and procedures between the two sets of data collectors and a small subsample of the treatment group was selected for reinterview by research staff to examine data comparability. Baseline design and data collection procedures addressing these issues are discussed in Chapter III of the full report. The results of the reinterview study and an assessment of the comparability of baseline data for treatment and control groups are reported in Brown and Mossel (1984).
One feature of the planned intervention was use of a standardized, comprehensive assessment instrument for care planning. The data collection design outlined above further required that this instrument meet both research and clinical information needs at entry into the demonstration (baseline). Further, it had to be suitable for use in both community and institutional settings. These requirements called for a lengthy period of instrument development with input from both the research and clinical perspectives. This process is also described in Chapter III of the full report.
There were six major research components in the evaluation of the Channeling demonstration, corresponding to the major research questions outlined above. Associated with each research component was a set of sources providing data for the analysis. As can be seen from Table 3, no single source or methodology provided all the data necessary to address the issues in any research component of the evaluation, requiring close coordination among a variety of data collection activities. For example, the cost analysis drew upon data from personal interviews with the elderly sample members, from coding of individual service records maintained by local service agencies (other than the Channeling projects), and from computerized claims records maintained by state and federal agencies. In addition, in the special case of followup interviews with the caregivers of deceased sample members, community and institutional providers identified as having served them were also asked to provide data for the service use and cost analyses.
The need for linked data across sources and methodologies presented one of the most significant challenges in the design of the data collection instruments and procedures for the evaluation. The subsequent chapters of the full report discuss the specific issues faced in the development and implementation of each data collection effort involving individual-level data.8 The remainder of this section provides an overview of the procedures necessary for linking the separate survey efforts and the survey organization managing the integration of the activities connected with each.
Figure I.1 shows in schematic form the linkages among the various data sources and data collection activities. The data collection schedule is shown in Table 4.
After screening for eligibility and random assignment, elderly persons in both treatment and control groups (called the elderly sample) received a baseline assessment. As noted, Channeling project staff administered the assessment interview (with some additional clinical information) to the treatment group, for which it served the dual functions of an initial needs assessment for case management and baseline information for the research; research staff administered the same assessment instrument (without the additional clinical information) to the control group. When necessary, proxy respondents were asked to provide certain information not available from the elderly sample member. All elderly sample members for whom a baseline assessment interview had been completed were eligible for followup interviews by research staff 6 and 12 months after random assignment. The first half of the sample to be enrolled also received a followup interview 18 months after the initial random assignment. The purpose of the followup interviews was to obtain outcome measures of service use and quality of life. They also provided the basis for obtaining Medicare, Medicaid, and provider billing records to obtain complete service use and cost data and for identifying the primary informal caregivers of a subsample of both treatment and control groups.
Linking Elderly Sample Survey with Service and Cost Data. In their 6- and 12-month followup interviews elderly sample members were asked to name providers from whom they had received services. Caregivers at their 6- and 12-month followup also provided the names of providers used by sample members who were deceased. Provider cost and service use data were collected for a 20 percent subsample of elderly sample members who named (or had named for them) community-based providers, for all those indicating use of supportive housing, and for all those with hospital and nursing home stays not covered by Medicare and Medicaid.
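The subsampling rules just described can be expressed as a simple selection function (a sketch only; the field names and the flat 20 percent sampling applied here are simplifying assumptions, and the actual subsample design is documented in the full report):

```python
import random

def select_for_provider_collection(members, rate=0.20, seed=0):
    """Select elderly sample members for provider data collection:
    keep everyone reporting supportive housing or a hospital/nursing
    home stay not covered by Medicare or Medicaid, plus a random 20
    percent of those who named community-based providers."""
    rng = random.Random(seed)
    selected = []
    for m in members:
        if m.get("supportive_housing") or m.get("uncovered_stay"):
            selected.append(m["id"])              # always collected
        elif m.get("named_community_provider") and rng.random() < rate:
            selected.append(m["id"])              # 20 percent subsample
    return selected
```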
Linking Elderly Sample Survey with Informal Caregiver Survey. A subset of the elderly sample members enrolled in the demonstration were assigned to the informal caregiver subsample. (This subset is described further in Chapter VI of the full report.) Caregiver interviews were administered by research interviewers to the informal caregivers identified by these sample members as the family member or friend who helped the most. The caregiver interviews were administered after the sample member had completed his or her corresponding interview.
Linking Elderly Sample Survey with Death Records Data. After all followup interviews had been completed or had reached some other final status, death records searches were made for all sample members who had not completed their most recently assigned interview (including those who did not complete their baselines).
Due to budgetary constraints, the evaluation contractor had no site offices. Rather, data collection operations were managed from the main office and the field interviewers used their homes as their site base of operations. This structure necessitated intensive use of telephone and written communication between evaluation staff in the central office and the projects and the research interviewers. When issues arose that affected data collection policies and procedures, policy decisions were made and documented in operations memoranda distributed to evaluation staff (and to project staff as appropriate).
Figure I.2 shows the overall survey management structure for the evaluation. The survey director, who reported to the project director, oversaw all survey activities, for which there were four main tasks: randomization, field operations for the elderly sample member and caregiver interviews, data reduction, and provider data collection. During the entire data collection effort, there was frequent communication among the managers of these subgroups (as indicated by dashed lines in Figure I.2) to ensure that operations were coordinated and that interdependent activities were completed in timely and accurate fashion. In addition, the managers reported on progress, productivity, and budget monitoring to the survey director.
Management of Randomization. The survey manager for randomization was responsible for training and supervising the randomization clerks. Their duties included receiving requests for random assignment of applicants from the Channeling projects, assigning research identification numbers and research statuses, maintaining randomization logs and other documentation, transmitting information on randomized sample members to the field coordinators, and logging all incoming mail as well as documents sent for data entry.
Management of Elderly Sample Member and Informal Caregiver Surveys. The survey manager for field operations trained and supervised the field coordinators and research interviewers. She monitored interviewer progress, providing regular reports to the survey director, and handled all interviewer-related issues or problems raised by project staff and sample members. The field coordinators assisted in research interviewer training and supervision throughout the data collection period. They assigned work to the interviewers, provided interviewer progress reports to the field operations survey manager, prepared interviewer productivity reports, and processed interviewer time sheets.
The survey manager for data reduction trained and supervised the quality control (QC) supervisor and staff. She oversaw the production of followup contact sheets for sample members and their caregivers and for the provider data collection, supervised all data cleaning activities, and was responsible for reconciling all discrepancies in final status and ensuring that all relevant documents had been received from the projects and research interviewers.
The QC supervisor assisted in interviewer and QC staff training, communicated with project staff regarding missing documents, and was responsible for ensuring that documents were subjected to quality control in a timely fashion. She and her QC staff also were responsible for editing and coding documents for data entry and for data cleaning.
Management of Provider Records Data Collection.9 The director of provider data collection was responsible for developing the survey instruments for the provider records data collection, hiring and training special site interviewers, monitoring interviewer progress, and providing regular reports to the survey director. Site managers were responsible for assigning work to and supervising the interviewers, generating productivity reports, and processing interviewer time sheets. Most provider interviewers used their homes as a base of operations. Two interviewers were based in Princeton; they collected data in sites with too few providers to merit a locally based interviewer and assisted interviewers in sites with above average work loads.
Management of Death Records Data Collection. The assistant survey director managed the death records data collection. In all but two sites, staff at the State Vital Statistics offices searched for death records from a computer generated list provided to them by evaluation staff. In the remaining two sites, evaluation staff undertook the death records searches themselves.
The purpose of Chapters II through X in the survey report is to provide detailed information on the design, procedures, and results of the data collection effort. Chapter II discusses the selection and screening procedures used to identify eligible applicants and the randomization procedures used to assign eligible applicants to treatment or control status. It also describes the client tracking system. Chapter III discusses the objectives and issues involved in the design and instrument development for the elderly sample member baseline assessment and followup interviews. Chapter IV documents in detail the field operations and survey results for the elderly sample member baseline data collection. Chapter V describes the field operations and survey results associated with the elderly sample followup survey and death records searches. Chapter VI discusses design and instrument development for the informal caregiver baseline and followup surveys. Chapter VII documents in detail the data collection operations and survey results for the informal caregiver baseline and followup surveys. Chapter VIII discusses design and instrument development for the provider data collection instruments. Chapter IX documents in detail the data collection operations and results for the provider data collection effort. Finally, Chapter X describes the automated tracking system and the quality control and data entry procedures.
1. In the initial design phase, four different models of Channeling were considered, to be tested in 23 sites. Budget constraints and the elimination of a planned second round of procurement ultimately resulted in the selection of 10 sites for the demonstration and compressed the design into two models, the maximum that feasibly could be tested in this limited set of sites. (See Baxter et al. 1983 and Carcagno et al. 1985 for additional details.)
2. Under the agreement with DHHS, each of the basic case management projects had $250,000 over the approximately three-year life of the project for this purpose. Several projects were able to supplement these funds from state or other sources.
3. Because this model of Channeling was funded partially through waivers under Medicare, applicants had to be covered by Medicare Part A as a condition of participation in the demonstration. This was a minor limitation, as virtually all those over 65 are covered.
4. Authorization power applied to community-based services, and only so long as the individual remained a client. It did not apply to hospital, nursing home, and physician care.
5. There was some variation among projects in the way the cost sharing component was implemented, with some having more restrictive cost sharing requirements than others.
6. Detailed discussion of the representativeness of the sites and the characteristics of their long term care environments can be found in Carcagno et al. (1985, Part V).
7. These issues and their resolution within the context of the research design are discussed in detail in Kemper et al. (1982).
8. Implementation, process, and Medicare and Medicaid records data collection activities are described in other reports.
TABLE 1: CHANNELING SITES AND HOST AGENCIES AT THE START OF THE DEMONSTRATION

| Site | Host Agency |
| --- | --- |
| Basic Case Management Model | |
| Eight rural counties in Eastern Kentucky^a | Department of Social Services, State Department of Human Resources^b |
| York and Cumberland counties in Southern Maine | Southern Maine Senior Citizens, Inc. |
| Baltimore, Maryland | City of Baltimore, Council on Aging and Retirement Education/Area Agency on Aging |
| Middlesex County, New Jersey | County Department of Human Services |
| Houston, Texas^c | Texas Research Institute for Mental Sciences (TRIMS) |
| Financial Control Model | |
| Miami, Florida^d | Miami Jewish Home and Hospital for the Aged |
| Greater Lynn, Massachusetts^e | Greater Lynn Senior Services, Inc. |
| Rensselaer County, New York | Rensselaer County Department on Aging |
| Cuyahoga County, Ohio (Cleveland) | Western Reserve Area Agency on Aging |
| Philadelphia, Pennsylvania | Philadelphia Corporation on Aging |
TABLE 2: MILESTONES IN THE PLANNING AND IMPLEMENTATION OF THE CHANNELING PROJECTS

| Date | Milestone |
| --- | --- |
| December 1979 | Department of Health and Human Services (DHHS) published a notice of intent in the Federal Register to develop a coordinated long term care Channeling demonstration. |
| April 1980 | DHHS issued the request for proposals for Channeling states. |
| May 1980 | DHHS issued requests for proposals for both the national technical assistance contractor and the national evaluation contractor. |
| September 1980 | DHHS selected 12 Channeling demonstration states and the national technical assistance and evaluation contractors. Start of the planning phase. |
| November 1980 | Demonstration states submitted site proposals. |
| January 1981 | DHHS selected 12 Channeling project sites. |
| June 1981 | DHHS issued guidelines for Channeling states wishing to implement the financial control model. |
| August 1981 | DHHS reduced the number of national research states and sites from 12 to 10. |
| September 1981 | DHHS designated 5 projects as financial control projects and the other 5 as basic case management projects. |
| December 1981 | Channeling projects submitted detailed operational plans to DHHS. |
| February 1982 | First of the basic case management projects began operations after hiring staff; completing screening, assessment, and case management training; negotiating referral agreements with priority referral sources; and implementing internal management information and recordkeeping systems. |
| May 1982 | First of the financial control projects began operations after completing the same tasks as the basic case management projects, as well as negotiating provider contracts, implementing the financial control system, and completing funds pool arrangements. |
| June 1982 | All projects were operational. |
| May 1983 | First project reached its research sample target. |
| June 1983 | All projects achieved adjusted research sample targets. Randomization ended. |
| July-September 1983 | Sites continued to increase their caseloads in order to achieve their target sizes. |
| Steady State Phase | Projects maintained their caseloads at the levels agreed to with DHHS. |
| Demonstration Closeout Phase | Projects carried out their plans to end federally supported operations. Some transferred clients to other care arrangements while others prepared to continue under different auspices. Fiscal support staff in financial control model projects continued to process final provider billings until June 1985. |
TABLE 3: EVALUATION OF THE NATIONAL LONG TERM CARE DEMONSTRATION PRINCIPAL DATA SOURCES

| Research Component | Principal Data Sources |
| --- | --- |
| Use of Services | |
| Implementation and Process^b | |
| Cost-Effectiveness | Based on findings of other research components. |
TABLE 4: DATA COLLECTION SCHEDULE

| Activity | Dates |
| --- | --- |
| Screening and Randomization | March 1982-June 1983 |
| Elderly Sample Member Baselines | March 1982-July 1983 |
| Client Tracking Forms | March 1982-September 1984 |
| Elderly Sample Member Followups | September 1982-July 1984 |
| Caregiver Baselines | November 1982-May 1983 |
| Caregiver Followups | July 1983-July 1984 |
| Provider Data Collection | February 1984-November 1984 |
| Survey of Privately Contracted Individuals | December 1982-February 1984 |
| Death Records Search | September 1984-December 1984 |