Opportunities to Improve Survey Measures of Late-Life Disability: Part II - Workshop Summary

CONSIDERATIONS FROM THE FIELD

09/27/2006

The final session involved discussion from the perspective of major federal and nonfederal national surveys on aging, disability, and long-term care. The discussion was organized around the following questions:

  • What innovations in measurement are you exploring or have you explored already on your surveys?
  • Which of the concepts discussed today seem relevant to your survey/national surveys in general? Feasible to include? Completely unworkable?
  • What additional pilot/measurement work would you like to see done so new approaches can be readily adopted?
  • Which constraints are binding in surveys and need to be considered? Time? Subject matter? Format (performance measures, mode of interview, etc.)?
  • What would it take administratively to modify an existing instrument? How much lead time is generally needed to make modifications? Can you give ballpark costs for different types of changes (e.g., add a module; modify existing questions; add to existing questions)?

Dr. Kenneth Manton provided an overview of the National Long Term Care Survey (NLTCS). He explained that since its inception in 1982 the survey has emphasized stability of its health condition and disability measures. One of the main purposes of the survey is to examine change in human capital and disability in the elderly. The focus is purposefully demographic in order to provide policy makers with accurate estimates of Medicare and Medicaid use and of the need for long-term care services.

Dr. Manton provided a reaction to the use of vignettes. He wondered how they would fare during the Office of Management and Budget’s (OMB) clearance process because of the potential respondent burden. He also questioned whether the approach could be used validly with individuals who have cognitive impairment. On a methodological note, he questioned whether results were sensitive to the choice of standardization and whether the uncertainty of the parameter estimates was taken into account when setting the thresholds.

In terms of adding innovations to the NLTCS, Dr. Manton described three important features. First, the survey has maintained a consistent set of measures over time but can accommodate supplemental questions, so that new measurement approaches can be explored. Second, it has oversampled the population aged 95 and older so that robust estimates can be made for this group. Third, it has been linked to Medicare data so that transitions between survey years can be inferred.

D.E.B. Potter then provided an overview of the Medical Expenditure Panel Survey (MEPS). She explained that everyone in the household, not just older respondents, is interviewed. The main purpose of the MEPS is to provide calendar year estimates of medical expenditures and use. As such, the survey includes a variety of measures of health but not detailed disability measures (e.g., no questions about individual tasks or activities).

In the coming year MEPS will be converting its CAPI (computer-assisted personal interviewing) instrument from a DOS environment to a Windows environment. They are going into the field with a split panel to pretest the new environment. They are also developing new questions to measure caregiving in the household and to quantify the costs of caregiving.

Ms. Potter commented on some of the innovations presented earlier in the day. She echoed some of the concerns raised by Dr. Manton about whether the OMB clearance process might view vignettes as burdensome. She also suggested that Internet pilot testing might be of concern in older age groups because the mode might be sensitive to cognitive ability.

She suggested that in the future it would be important to investigate the influence of question order. For example, if physical functioning measures are included in a survey alongside self-reported items, how does their order influence performance and/or self-reports? She also suggested that items tapping unmet need are important, as is a balance of physical and cognitive disability items.

Ms. Potter also reviewed a list of potential constraints on adding innovations to existing surveys. Budgetary constraints are always a concern; for some agencies, Congressional priorities drive how resources are allocated, and funds are needed not only for data collection but for development work as well. For the MEPS, a steering group must agree to survey additions and modifications, and currently, if items are added, other items must be dropped. A lead time of approximately two years is needed to introduce items at the beginning of a calendar year.

Dr. Julie Weeks elaborated on some of the constraints raised by other panelists. She explained that many agencies (with potentially competing interests) may be involved in a single survey, so it can be challenging to maintain consistency in items over time. In her experience with surveys of older adults, she has found respondent fatigue to be a major constraint that may preclude efforts to cross-walk multiple measurement approaches. Time constraints, funding constraints, and limits on what proxies can report about older respondents with disabilities were also identified as issues.

Dr. Weeks also provided a brief overview of three review processes that govern federal surveys: the Institutional Review Board (IRB), which ensures the protection of human subjects; OMB clearance, which ensures that respondent burden is not excessive; and the Disclosure Review Board, which ensures that public use data are not identifiable. These are not constraints in and of themselves, but they may require additional time and resources.

Dr. David Weir provided an overview of disability measures in the Health and Retirement Study (HRS). Core measures in telephone interviews every two years include functional limitations, ADLs, IADLs, work limitations, and housework and other limitations. He also described several innovations. In 2004, for example, the Social Security Administration (SSA) funded an in-person interview to increase consent for record linkages, and the National Institute on Aging provided funds to include physical performance measures (grip strength, puff test, and timed walk) and drop-off questionnaires that included work limitation vignettes. The latter had an 80% response rate, and the data will be released in summer 2005.

In the near future, the HRS plans to include vignettes in its Internet sample. In addition, one of the experimental modules in 2006, designed by Dr. Vicki Freedman and Dr. Emily Agree, will be devoted to measuring modifications to the built environment. Also proposed for 2006 through 2010 are plans for one-third of the HRS sample to be interviewed in person and for performance measures and biomarkers to be collected. The performance measures collected in 2004 had good response rates (74.8%) even though individuals were given several opportunities to refuse; preliminary analysis, however, suggests that there may be some selection according to disability status.

A number of additional points were raised by audience members:

  • There seem to be important differences in the ease with which innovations can be added to government and nongovernment surveys; the processes and constraints appear to differ. However, some of the limitations raised by the panel may be overstated. For example, NHANES is able to make changes to its CAPI instrument and release data every two years.
  • Survey content is driven in part by the audience for the survey. Policy-relevant surveys serving Congressional needs will have different measures than surveys designed to meet basic and social science research needs.
  • The question was raised of whether innovations are being crowded out by the need for consistency over time. Audience and panel members suggested that this was not the case. One audience member suggested that surveys can be modified if the sample is large enough to support a split sample design, in which the old and new questions are both asked for several years. Another audience member suggested that for measurement innovations to be accepted, it would be important to demonstrate that the new items are more valid and reliable than the old. In making such changes, it was suggested that the old questions appear first and the newer questions be placed at the end so that responses to the old questions are not contaminated. Other creative ways to introduce and test new questions quickly include the experimental module (in which a random subset of respondents is administered the questions at the end of the main survey) and an Internet panel (in which respondents answer Internet-based questionnaires).
