
Development of an Assistive Technology and Environmental Assessment Instrument for National Surveys: Final Report

U.S. Department of Health and Human Services

Part I. Recommended Modules and Instrument Development Process

Vicki A. Freedman, Ph.D., Polisher Research Institute

Emily M. Agree, Ph.D., Johns Hopkins Bloomberg School of Public Health

Jennifer C. Cornman, Ph.D., University of Medicine and Dentistry of New Jersey

December 2005

PDF Version (63 PDF pages)


This report was prepared under contract #HHS-100-03-0011 between the U.S. Department of Health and Human Services (HHS), Office of Disability, Aging and Long-Term Care Policy (DALTCP) and the Polisher Research Institute. Additional funding was provided by HHS’s National Institute on Aging. For additional information about this subject, you can visit the DALTCP home page at http://aspe.hhs.gov/_/office_specific/daltcp.cfm or contact the ASPE Project Officers, William Marton and Hakan Aykan, at HHS/ASPE/DALTCP, Room 424E, H.H. Humphrey Building, 200 Independence Avenue, S.W., Washington, D.C. 20201. Their e-mail addresses are: William.Marton@hhs.gov and Hakan.Aykan@hhs.gov.

This report was funded by the Department of Health and Human Services’ Office of the Assistant Secretary for Planning and Evaluation in cooperation with the National Institute on Aging (R01-14346) and the National Center for Health Statistics. The views expressed are those of the authors alone and do not represent those of the authors’ affiliations or funding agencies.


TABLE OF CONTENTS

FOREWORD

I. PURPOSE

II. RECOMMENDED MODULES

III. INSTRUMENT DEVELOPMENT PROCESS

Development of Conceptual Framework

Review of Existing Measures

Input from Policy Makers, Survey Designers, and Expert Panel

Cognitive Testing

Pilot Testing and Feedback

Finalization of Recommended Modules

REFERENCES

NOTES

MODULES (separate PDF files)

MODULE A: Survey Modules to Measure Assistive Technology and the Home Environment: Recommended 8-10 Minute Modules [PDF file]

MODULE B: Survey Modules to Measure Assistive Technology and the Home Environment: Recommended 2-3 Minute Module [PDF file]

[NOTE: These Modules are in separate Portable Document Format (PDF) files. You will need a copy of the Acrobat Reader in order to view them.]

LIST OF TABLES

TABLE 1: Content and Timing of Recommended 8-10 Minute Modules

TABLE 2: Content and Timing of Recommended 2-3 Minute Module

FOREWORD

This instrument was developed with assistance from many individuals. We are grateful to our colleague Barbara Altman of the National Center for Health Statistics (NCHS) for her efforts in overseeing the cognitive testing and pilot testing of the instrument. Barbara Wilson and Karen Whitaker of NCHS’s Questionnaire Design Research Laboratory provided valuable expertise in the cognitive testing of early versions of the instrument. At Westat, Holly Schiffrin, Jim Bethel, and Donna Smith conducted the pilot study and provided important insights to improve the instrument. We also thank our colleagues at Polisher Research Institute: Lisa Landsberg, who served as project manager in preparation for and during the pilot phase of the study and who provided analytic data support to the project, and Morton Kleban, who contributed to statistical analysis of the pilot data. Carol Rayside of the University of Medicine and Dentistry of New Jersey provided helpful administrative support in preparing the final reports.

The project also benefited from the expertise of its Technical Advisory Group. Members included Susan M. Allen, Brown University; Laura Branden, Westat; Dawn Carlson, National Institute on Disability and Rehabilitation Research and University of North Carolina, Chapel Hill; Sara J. Czaja, University of Miami; Alexandra Enders, University of Montana; Laura Gitlin, Thomas Jefferson University; Jeffrey W. Jutai, University of Western Ontario; James A. Lenker, University at Buffalo; Sandra J. Newman, Johns Hopkins University; Mary Beth Ofstedal, University of Michigan; Brenda Spillman, Urban Institute; Margaret G. Stineman, University of Pennsylvania. We are grateful for their direction and guidance.

Finally, we thank the many individuals who volunteered their time to participate in the cognitive and pilot testing of the instrument. Their contributions were invaluable.

The project was funded by the Department of Health and Human Services’ Office of the Assistant Secretary for Planning and Evaluation in cooperation with the National Institute on Aging (R01-14346) and the National Center for Health Statistics. Address correspondence to: Vicki A. Freedman, Ph.D., Professor, Department of Health Systems and Policy, University of Medicine and Dentistry of New Jersey, School of Public Health, 335 George Street, Suite 2200, New Brunswick, NJ 08903, vfreedman@umdnj.edu.

I. PURPOSE

The purpose of this project was to develop, pilot, and disseminate a set of instruments for national surveys to measure the use of assistive technologies and the environments in which they are used. The project focused on older adults (ages 50 and older) living in the community. The instruments have been designed as a series of modules that can be incorporated into a computer-assisted telephone interview (CATI). The full instrument, consisting of five modules, takes approximately 8-10 minutes to administer. We also include a brief (2-3 minute) module.

Although national surveys are limited in the amount and complexity of information that can be collected, they provide rich socioeconomic, demographic, and administrative data that allow generalizable statements about the population. By incorporating more detailed items on assistive technology and the environment, national surveys can provide better insight into a number of important policy issues related to disability and aging. Relevant questions of interest include:

  • What is the potential for Americans to age “in place” in their own homes and what role do environmental features and modifications play in housing decisions in later life?
  • What role do assistive technologies and home modifications play in the lives of older Americans? How extensively are they used?
  • How effective is assistive technology in increasing older Americans’ well-being, social engagement, and participation in valued activities?
  • How have mainstream technologies (computers, telephones) affected older adults’ ability to manage their daily activities?
  • What is the contribution of technology and the home environment to changes in the prevalence of disability among older Americans?

This report documents steps taken in designing and piloting items to measure assistive technology and the home environment of older adults. The instrument development process involved five steps: development of a conceptual framework; review of existing measures; input from policy makers, survey designers, and an expert panel; cognitive testing with individuals who used assistive devices; and pilot testing with a sample of 360 people ages 50 and over.

In Chapter II we present the recommended 8-10 minute instrument and a brief 2-3 minute module, along with an overview of content areas. A final chapter provides more detailed background on the steps taken to test and revise the instruments. Frequencies and other results from the pilot test are included in a companion report (available upon request from the corresponding author).

II. RECOMMENDED MODULES

The full (8-10 minute) recommended instrument is divided into five modules: Home Environment, Mobility and Other Devices, Effectiveness/Participation, Information/Communication Technology, and Residual Difficulty. Each module consists of one or more sections, described in more detail below.

  • Home Environment. This module distinguishes among the presence, addition, and use of features in the home that are intended to make daily tasks easier or safer, or to allow an older adult to do a task independently. The questions are designed to work in a range of residential settings, from detached single-family housing to multi-unit apartments and assisted living facilities. Items focus on three key areas of the home: the entrance used most often, vertical and horizontal circulation inside the home, and the bathroom. Finally, we include a set of questions about the cost of all mentioned modifications, using an unfolding technique to minimize non-response (illustrated in the sketch following this list).

  • Mobility and Other Devices. This module collects information from all respondents about whether they used (during a 30-day reference period) each of the four most common mobility devices (cane, walker, wheelchair, and scooter). For those who answer yes to a global screen, we then ask about 30-day use of each device for the mobility-related activities of daily living (ADLs); this screener routing is also illustrated in the sketch following this list. Thirty-day use is also assessed for a variety of other commonly used devices. Two additional sections assess the use of adaptive transportation and the cost of mobility and other devices.

  • Effectiveness/Participation. In this module, respondents who report using one or more devices or modifications are asked three questions about how the use of these items has affected their quality of life. The items address the dimensions of safety, control, and participation in valued activities.

  • Information and Communication Technology. This module focuses on the use of computer and telephone adaptations, as well as use of computers and the telephone to facilitate common activities such as shopping, ordering prescriptions, and managing money.

  • Residual ADL/IADL Difficulty. The final module asks respondents individualized questions about the difficulty that they have with five ADLs and four instrumental activities of daily living (IADLs). The items differ from those commonly found in national surveys in two ways. First, these items focus on the level of difficulty with activities when using assistive devices and without help from another person. Second, the items are tailored to mention the specific list of devices and features reported by each individual.
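To make the two routing patterns above concrete, the following minimal Python sketch shows a global screener gating detailed follow-ups (as in the Mobility and Other Devices module) and an unfolding-bracket cost question (as in the Home Environment module). It is illustrative only: the question IDs, wording, and dollar thresholds are hypothetical, not taken from the recommended modules (see the Module A PDF for the actual specifications).

    # Minimal sketch (Python) of two routing patterns described above.
    # Question IDs, wording, and dollar brackets are hypothetical.

    DEVICES = ["cane", "walker", "wheelchair", "scooter"]

    def ask(prompt):
        """Stand-in for a CATI prompt; returns the interviewer-entered answer."""
        return input(prompt + " ").strip().lower()

    def mobility_module(responses):
        # Global screener: one yes/no item covering all four devices.
        screen = ask("In the last 30 days, have you used a cane, walker, "
                     "wheelchair, or scooter? (yes/no)")
        responses["MO-screen"] = screen
        if screen != "yes":
            return  # the screener saves time: skip all per-device items
        for device in DEVICES:
            responses[f"MO-use-{device}"] = ask(
                f"In the last 30 days, did you use a {device}? (yes/no)")

    def unfolding_cost(responses):
        # Unfolding brackets: if no exact amount is given, walk through
        # successive thresholds so a range is recovered rather than a
        # nonresponse.
        answer = ask("About how much did these modifications cost, "
                     "in dollars? (amount or 'dk')")
        if answer != "dk":
            responses["cost"] = float(answer)
            return
        low, high = 0, None
        for threshold in (500, 1000, 5000):  # hypothetical brackets
            if ask(f"Was it more than ${threshold}? (yes/no)") == "yes":
                low = threshold
            else:
                high = threshold
                break
        responses["cost_bracket"] = (low, high)  # e.g., (1000, 5000)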

Although the instrument was purposely designed to be modular, there are some interdependencies across sections (noted in Table 1) that must be taken into account if modules or sections are omitted. On average, the entire instrument takes eight minutes to administer to a representative sample of persons ages 50 and older and ten minutes for a sample of persons ages 65 and older. Average section-specific times (in seconds) are noted in the final column of Table 1.[1]

TABLE 1: Content and Timing of Recommended 8-10 Minute Modules

  Module / Section (number of items)             Question Numbers       Module or Items   Avg. timing (sec)
                                                                        Required          50+    65+
  HOME ENVIRONMENT (HE)
    Home (2)                                     HE-1 to HE-2           --                18     19
    Entrance and Inside Building (10)            HE-3 to HE-8           --                 7     10
    Entrance to Home (7)                         HE-9 to HE-10.3b       --                28     30
    Inside Home (17)                             HE-11 to HE-12.6b      --                43     47
    Bathroom Features (14)                       HE-13 to HE-14.2a      --                65     77
    Cost of Modifications (5)                    HE-15-INTRO to HE-19   --                12     16
  MOBILITY AND OTHER DEVICES (MO)
    Indoor and Outdoor Mobility (18)             MO-1 to MO-2.5         HE-1              16     19
    Other Devices (10)                           MO-3.1 to MO-3.10      --                60     72
    Transportation (4)                           MO-4.1 to MO-4.2       HE-1              27     30
    Cost of Devices (5)                          MO-5-INTRO to MO-9     --                13     18
  EFFECTIVENESS/PARTICIPATION (EF)
    Effectiveness/Participation (3)              EF-1-INTRO to EF-3     HE, MO            28     39
  COMMUNICATION TECHNOLOGY (CO)
    Computers (10)                               CO-1 to CO-3.5         --                53     60
    Telephones (11)                              CO-4 to CO-7.4         --                39     49
  RESIDUAL DIFFICULTY (RD)
    Activities of Daily Living (5)               RD-1 to RD-5           HE, MO            40     45
    Instrumental Activities of Daily Living (8)  RD-6.1a to RD-6.4b     HE, MO, CO        53     60

  Total Average Timing (in minutes)                                                       8.4    9.9

For researchers interested in a shorter questionnaire, we have also recommended a module that takes approximately 2-3 minutes to administer. The module focuses exclusively on the presence, addition, and use of select features inside the home and bathroom and the use of mobility and other common devices. The following table summarizes the content and timing for this condensed module.

TABLE 2: Content and Timing of Recommended 2-3 Minute Module

  Module / Section                     Question Numbers            Avg. timing (sec)
                                                                   50+    65+
  HOME ENVIRONMENT (HE)
    Inside Home                        HE-1, HE-11 to HE-12.6b*    50     53
    Bathroom Features                  HE-13 to HE-14.2a**         55     65
  MOBILITY AND OTHER DEVICES (MO)
    Indoor and Outdoor Mobility        MO-1 to MO-2.5              16     19
    Other Devices                      MO-3.1 to MO-3.4            24     29

  Total Average Timing (in minutes)                                2.4    2.8

  * Excluding HE-12.2, HE-12.2a, HE-12.2b.
  ** Excluding HE-13.1b, HE-13.4.

We have included in both instruments a brief set of instructions to assist in interviewer training. These recommendations are intended to supplement thorough interviewer training for the survey in which these items are embedded.

III. INSTRUMENT DEVELOPMENT PROCESS

The instrument development process involved five steps: development of a conceptual framework; review of existing measures; input from policy makers, survey designers, and an expert panel; cognitive testing with individuals who used assistive devices; and pilot testing with a sample of 360 people ages 50 and over.

Development of Conceptual Framework

To guide the instrument development, we constructed a synthesized framework that links concepts of disability with the environment and assistive technology (see Figure 1). The framework draws on concepts from two well-established models of disability (the Institute of Medicine’s disablement process (Pope and Tarlov, 1991) and the World Health Organization’s International Classification of Functioning, Disability and Health (ICF)), but makes explicit the role of the physical environment and assistive technology use.[2]

In measuring an individual’s capacity to perform everyday tasks, we first distinguish between “person capabilities” and underlying or “unaccommodated” disability. Capability is a measure of the movements or actions an individual can perform independent of the environment; underlying disability represents the difficulty the person would experience, in the specific environments and activities of everyday life, if no accommodations were made. We depict underlying disability as a latent construct, as it is often a hypothetical condition (e.g., if you did not use help or assistive devices, how much difficulty would you have?).

We also distinguish between underlying disability and the abilities of the individual once they make use of one or more types of accommodations. That is, assistive technology, personal care, and behavioral changes can reduce the gap between personal capabilities and the demands of the physical and social environment. Effectiveness can be measured in terms of activity-specific competence (residual/accommodated disability); participation in society and social groups; and measures of quality of life.

In the framework, the main pathway is influenced by:

  • Physical Environment. The accessibility and adaptability of the physical environment are key factors that influence (along with the demands of a particular activity) whether physical, cognitive, and sensory abilities result in intrinsic disability. In the proposed module we measure the home environment and distinguish environmental features that exist a priori from those features and technologies that are put into place as accommodations.

  • Activity Demands. The framework recognizes that in addition to basic activities (e.g., self-care activities, communication, mobility) there exist valued activities that may vary from individual to individual. The demands of these activities may vary greatly. In the proposed module we emphasize technologies for basic activities including inside and outside mobility, transferring, bathing, toileting, banking, preparing meals, shopping, managing money, communication, and transportation.

  • Accommodation Process. The accommodation process influences whether an underlying disability affects the competence with which an activity is carried out. We envision technology not as an extension of the environment but as an extension of an individual’s capacity (Agree, 1999). The use of assistive technology is one of several accommodations an individual can adopt to bridge the gap between underlying physical capabilities and the performance of an activity in a specific environment. In the proposed module we measure the use of fixed and portable technologies and the intensity of that use.

Review of Existing Measures

To identify existing measures of the concepts identified in Figure 1, we reviewed existing national surveys, clinical tools, and several additional instruments designed to measure quality of life. Based on a review of existing questions on 13 national surveys,[3] we identified several important gaps in content. Here we summarize findings with respect to measures of the environment, assistive technology use, and effectiveness of technology.

  • Environment. Relatively few questions exist on national surveys to measure the environment; those that do focus on the home and do not explicitly distinguish questions about the existence of environmental features, use of (modified) features, and subjective assessment of difficulty with those features. We also examined a variety of environmental assessment tools. We found that these tools are often long checklists administered in person by trained professionals[4] and thus have only limited translatability to national surveys.

  • Assistive Technology Use. There is little consistency across surveys with respect to terms used to identify assistive technology. Terms currently in use include “aids,” “special equipment,” “adaptive devices,” and “medical devices or supplies” (Cornman, Freedman, and Agree, 2005). Many of the existing instruments condition questions on the use of assistive technology on the identification of a problem or difficulty with a specific task. Others use a predetermined list of typically medical devices. There is variation in the use of a time frame but few surveys evaluate the amount (or “intensity”) of use.

  • Effectiveness of Technology. Only a few questions have been used in national surveys to measure the effectiveness of technology. The few items that focus on effectiveness are embedded in questions about ADLs and sensory tasks and refer to the level of difficulty residual to help and/or equipment use. Virtually no items focus on the effect of technologies in facilitating participation or in enhancing quality of life. We also reviewed a number of quality of life instruments.[5] However, with the exception of PIADS (the Psychosocial Impact of Assistive Devices Scale), the quality of life instruments we reviewed did not explicitly relate to technology.

Input from Policy Makers, Survey Designers, and Expert Panel

To begin the process, the project team met with policy makers, survey designers, and an expert panel to gather input on key areas for question development.

In January 2003, the project team held a meeting at the Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE) with representatives from various federal agencies to discuss policy and program issues related to disability, assistive technology, and the environment.[6] There was consensus that agencies would like more information in four principal areas: (1) the effectiveness of programs in making potential users aware of available assistive devices and services; (2) the effectiveness of programs in meeting the needs of individuals with disabilities, with a broader definition of need and effectiveness than is currently employed; (3) the underlying demand for assistive devices and services, or the number of potential users; and (4) the extent to which assistive devices and the environment enable or prevent full participation in life by people with disabilities.

In January and February 2003, the project team spoke with contacts from nine national survey efforts.[7] The purpose of these conversations was to understand each effort’s preferences for collecting information on the proposed topics. Taken together, these contacts suggested that topics related to use, effectiveness, and the environment were of interest; that the shorter the modules (less than ten minutes, preferably five), the better the chance they had of being adopted; and that it would be helpful if the modules were designed flexibly so that surveys could pick and choose items.

In February 2003, we asked Technical Advisory Group (TAG) members to rank salient concepts embedded in many of the existing national survey questions. Specifically, we asked TAG members to rank concepts related to the use of assistive technology, the acquisition process, the effectiveness of assistive technology, and the environment. The ranking process provided some guidance on the priorities for measurement in a pilot study with limited resources. Measuring the use of assistive technology and the home environment was considered critical; measurement of the acquisition of technology and its effectiveness was rated by TAG members as a lower priority.

Based on this input and a review of existing instruments and tools, we drafted a 30-minute telephone instrument focused on three primary areas:

  • objective assessments of the physical environment, specifically the home, which is most important for performance of ADLs;
  • the use of assistive technology as a part of the accommodation process; and
  • the impact of assistive technology on activity-specific competence, participation, and perceived quality of life.

Cognitive Testing

Cognitive testing of the instrument took place during July and August 2003 at the Questionnaire Design Research Laboratory (QDRL) at NCHS. Cognitive testing was conducted in three rounds, with revisions made to the instrument after each round. Participants were recruited through newspaper advertisements, flyers, and word of mouth. A total of 28 participants were tested (Round 1, n=8; Round 2, n=8; Round 3, n=12). Subjects ranged in age from 28 to 86 years (mean=62 years). All participants reported using one or more assistive technology devices, the most commonly reported being canes (n=10, 36%), walkers (n=10, 36%), and wheelchairs (n=7, 25%). The sample included a mix of genders and ethnic backgrounds.[8] Most of the interviews were conducted by telephone in a closed office at the QDRL with a closed-circuit television connected to an observation room. Some of the interviews were conducted in participants’ homes. The interviews took approximately 90 minutes, and all were videotaped. Participants received $50 for participating.

After each round of interviewing, the project team viewed the tapes and the QDRL provided feedback regarding the effectiveness of the questions in eliciting the appropriate responses. Based on this feedback, the project team refined the instrument for the next round of interviews.

Several important lessons emerged from the cognitive testing (Wilson et al., 2004) including:

  • Need for a reference time period for collecting information about the use of devices. The cognitive testing suggested that the lack of a reference period, or the use of a “typical month” as the reference, led participants to describe the many different ways they had accommodated and to mention items that they had owned years prior. This problem was resolved in the third round of testing with the addition of “the last 30 days” as the reference period for questions about use. No recall problems were noted with this relatively recent time frame.

  • Need to assess frequency of use in relation to specific activities. The cognitive testing demonstrated the difficulty of providing response options that capture frequency of use in relation to specific activities. Use of “all the time” caused confusion, particularly for questions about use of devices in the shower (e.g., did the individual use the grab bar the entire time he/she was in the shower, or at some point during each shower?). For the pilot test, the question was constructed as “When you (name activity), how often do you use (device)?” and the response set was changed to “every time, most times, sometimes, rarely, or never.”

  • Superior performance of uni-polar vs. bi-polar scales to assess effectiveness. The cognitive testing suggested that the use of bi-polar five-point scales to assess the effectiveness of assistive technology led to inadequate responses. This problem was resolved in the third round of testing when the items were simplified to a positive uni-polar response set (no more, a little more, a lot more) assessing the extent to which the devices improved various dimensions of quality of life.

  • Insight into improving terminology throughout the survey. The cognitive testing allowed the project team to fine-tune the use of positive, everyday language that could be readily understood by respondents, whether or not they had prior experience with disability and associated terminology. Instead of using the terms “assistive technology” or “special equipment,” for example, the instrument refers to “items/features to make your daily activities easier, safer, or so you can do them on your own.” This language seemed readily understandable to participants. Moreover, many of these concepts (ease of use, safety, independence) were explicitly mentioned by participants in a series of open-ended items about the importance of technology to their lives. For the pilot test, words whose meanings were unclear or difficult to understand were eliminated from the questionnaire, and a list of simple definitions was crafted for the CATI instrument.

There also were several sections that were eliminated after cognitive testing and prior to the pilot. For example, we cognitively tested questions about whether training was received, abandonment of devices, and transportation services. We concluded that these areas of inquiry, while important, deserve further qualitative work before useful questions can be constructed.

Pilot Testing and Feedback

Upon completion of the cognitive testing, we finalized a 25-minute instrument for pilot testing (see Part II of this report for the complete instrument). The pilot instrument consisted of nine sections: global items, neighborhoods and transportation, the home environment and modifications, use of technology for mobility and daily activities, cost of technology, effectiveness of technology, use of information/communication technology, functional limitations and disability, and demographic items.

NCHS oversaw the implementation of the pilot test. In the spring of 2004, NCHS submitted materials for the pilot test to the Office of Management and Budget (OMB) and the Ethics Review Board (ERB). OMB approval was received in July 2004, and final ERB approval was received in the fall of 2004.

NCHS and ASPE then contracted with Westat, a social science research firm in Rockville, Maryland, to conduct the pilot testing. Between November 2004 and February 2005, Westat converted the instrument to CATI, trained interviewers, and conducted fieldwork, which included a final total of 360 interviews with a racially diverse sample of adults ages 50 or older living in the community. The national sample, drawn from a marketing list, over-sampled individuals in older age groups: 50-64 (n=124); 65-79 (n=124); and 80+ (n=112). Individuals ages 50-64 living in households with an individual reporting a disability were also over-sampled (n=78). The sample included individuals living in assisted living facilities (n=21), African Americans (n=50), and individuals living in rural areas (n=81). No refusal conversion was attempted; the response rate (completed interviews/age eligibles) was 20%, and the cooperation rate (completed interviews/(completed interviews+refusals)) was 39%. Interview length varied from ten minutes to one hour, with the average interview lasting 22 minutes.
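The two outcome-rate definitions above are simple to state in code. The following minimal Python sketch uses hypothetical denominators (the report gives only the resulting rates; actual counts are in Schiffrin et al., 2005), chosen here merely so the printed rates match the reported 20% and 39%.

    # Outcome-rate definitions from the text. The denominators below are
    # hypothetical placeholders (only the 360 completed interviews is a
    # figure from the report), chosen so the printed rates match the
    # reported 20% and 39%.

    def response_rate(completed, age_eligibles):
        # completed interviews / age-eligible sample members
        return completed / age_eligibles

    def cooperation_rate(completed, refusals):
        # completed interviews / (completed interviews + refusals)
        return completed / (completed + refusals)

    completed = 360
    age_eligibles, refusals = 1800, 563  # hypothetical counts
    print(f"response rate:    {response_rate(completed, age_eligibles):.0%}")  # 20%
    print(f"cooperation rate: {cooperation_rate(completed, refusals):.0%}")    # 39%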

Several approaches were used to gather information to evaluate the questionnaire. Interviewer comments entered into CATI during data collection were reviewed, along with rates of don’t know/refused responses. Based on interviewer comments, a minority of questions (n=29) needed further clarification. In the final instrument we dropped 16 of these potentially problematic items and clarified another ten through changes in wording, clarification of definitions, or training suggestions. Westat reported that the percentage of don’t know and refused responses was very low in this study (averaging 1.18% and 0.34%, respectively). In the final instrument we eliminated all items (n=11) with a significantly higher than average rate of don’t know or refused responses.
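As an illustration of this screening rule, here is a minimal Python sketch using pandas. The data layout, column names, and the particular "significantly higher than average" test are assumptions, not the project's actual analysis.

    # Minimal sketch of the don't-know/refused screening rule, assuming
    # item-level pilot responses in a pandas DataFrame with one row per
    # (respondent, item). Column names and the flagging test are assumptions.
    import pandas as pd

    def flag_high_nonresponse(df, z=1.96):
        # Share of don't-know/refused answers for each questionnaire item.
        rates = df.groupby("item")["answer"].apply(
            lambda a: a.isin(["dont_know", "refused"]).mean())
        # Flag items whose rate sits well above the study-wide average.
        cutoff = rates.mean() + z * rates.std()
        return sorted(rates[rates > cutoff].index)

    # Usage: flag_high_nonresponse(pilot_df) -> list of items to drop or reword.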

Project team members also participated in an interviewer debriefing to assess problem areas in the questionnaire. Interviewer feedback from the debriefing was positive overall. Interviewers reported that the interview was easy to administer and worked well for respondents of all ages. The few issues they raised were addressed in the final instrument, for example, by removing purposive duplication between global and more detailed items and by introducing additional skip patterns for individuals who had not gone outside in the last 30 days.

Westat also tape recorded 150 interviews and coded each item in each interview for key respondent and interviewer behaviors (e.g., reading questions other than verbatim, probing and providing definitions, providing qualified or inadequate answers, requesting clarification or definitions, interrupting the question). Behavior coding is a standardized method of identifying potential problems with the validity and reliability of survey items (Fowler & Cannell, 1996). Behavior coding of the pilot study suggested few problematic items, with a high percentage of responses coded as adequate and a low percentage involving wording changes by interviewers, requests by respondents for clarification or definitions, or interruption of questions (Smith and Schiffrin, 2005).[9] Although probing was not uncommon, interviewers reported that probing did not appear to be more frequent in this study than in other studies of similar populations.
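To show the shape of such a tabulation, here is a minimal Python sketch that computes, for each item, the share of administrations receiving each behavior code. The code labels are assumptions; the pilot's actual coding scheme is documented in Smith and Schiffrin (2005).

    # Minimal sketch of a behavior-coding tabulation. The code labels
    # ("exact_reading", "probe", "clarification_request", ...) are
    # assumptions, not the pilot's actual scheme.
    from collections import Counter, defaultdict

    def behavior_profile(coded_administrations):
        """coded_administrations: iterable of (item_id, behavior_code) pairs,
        one per question administration across the taped interviews."""
        by_item = defaultdict(Counter)
        for item_id, code in coded_administrations:
            by_item[item_id][code] += 1
        # Convert counts to shares, e.g., the fraction of administrations
        # of a given item in which the interviewer probed.
        return {item: {code: n / sum(counts.values())
                       for code, n in counts.items()}
                for item, counts in by_item.items()}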

Based on each of these sources of information, Westat made a set of recommendations for questionnaire revision (Schiffrin et al., 2005). All recommendations made by Westat were evaluated by the project team. Nearly all suggestions were addressed in the final instrument either through elimination of potentially problematic items, modification of question language or definition, or the introduction of additional interviewer training material.

Finalization of Recommended Modules

A further consideration in finalizing the recommended modules was instrument length. We aimed to cut administration time from approximately 22 minutes to less than ten minutes on average.

We first eliminated all sections of the questionnaire that are routinely found in national surveys and were included solely for the purpose of evaluating the pilot data (e.g., demographic items, assisted living services, functional limitations, help with ADLs and IADLs). Next, we evaluated the quality of the global items, the effectiveness and residual difficulty items, and the open-ended questions (Freedman and Agree, 2005). Based on these analyses, we recommend the following:

  • Limit Use of Global Items. The pilot study allowed us to investigate whether items that asked simultaneously about multiple devices or home features identified users with the same accuracy as the full set of questions about specific items. We found that these global items did not do so consistently across types of devices. A global item for mobility-related technology (“In the last 30 days have you used a cane, walker, wheelchair, or scooter?”) had high sensitivity and specificity compared to the individual items (0.94 and 0.99, respectively; the calculation is sketched after this list). We therefore recommended adding this item as a screener for the more detailed questions. This approach saves time and replaces the more common approach in existing surveys of imposing a screen based on reports of difficulty. However, global items designed to assess the presence of features in the home (e.g., a stair glide, chair lift, or support rails in the hallway; a bath or shower seat, raised toilet seat, or grab bars in the bathroom) had much lower sensitivity (that is, they are likely to miss people who would be identified as having a feature through more detailed questions) and were therefore eliminated.

  • Omit Open-Ended Questions. Although intended as “catch-all” items, open-ended questions did not yield additional useful information on assistive devices or environmental modifications. In every section where items or features were being queried, we asked respondents to name additional items that they used to make tasks easier, safer, or to do them on their own. Few responses were obtained in these open-ended questions and the items mentioned often did not fit within a standard definition of assistive technology. Therefore we dropped all open-ended items from the final questionnaire.

  • Retain a Subset of Effectiveness Items. Of a total of seven original items asked in this section, only three were retained for the final instrument. Unlike the four items that were eliminated, the remaining three yielded adequate variability and low levels of “does not apply.” Moreover, preliminary structural equation models suggest that these three items scale well and correlate with the intensity of assistive technology use and the extent of functional limitations, but not with the amount of personal help. Pilot data analyses suggest that these items work well (i.e., well distributed with few answering don’t know, refused, or does not apply) for a broad range of assistive devices and environmental modifications. We did, however, narrow the set of assistive devices to which these questions refer (eliminating commonly used items such as glasses and handheld showerheads) so that a smaller number of respondents would be routed through these questions.

  • Retain Individualized Residual Difficulty Items. The residual difficulty items we tested (Using your ___, how much difficulty do you have ___ by yourself?) allowed respondents to volunteer that they never do the activity without help. Those who responded this way were then asked “Using your ___, could you ___ by yourself?” We found very few problem behaviors with the lead-in questions. In fact, these items required fewer clarifications and less probing than the standard functional limitation questions that we included, and they correlated as well as standard questions about help with ADLs (Cronbach’s alpha=.8). However, almost all respondents answering that they “never do the activity without help” responded in the follow-up question that they “could do” the activity. We therefore eliminated the follow-up items and instead recommend that analysts code the few who never do the activity without help as having a lot of difficulty (see the recode in the sketch below).
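The following minimal Python sketch illustrates two of the analytic steps above: validating a global screener against the detailed items, and the recommended residual-difficulty recode. Variable names and response labels are assumptions, not the pilot dataset's.

    # Minimal sketch of (1) screener validation and (2) the recommended
    # residual-difficulty recode. Variable names and response labels are
    # assumptions.

    def sensitivity_specificity(global_yes, detailed_yes):
        """Each argument: list of booleans, one per respondent. detailed_yes
        is the 'gold standard' built from the full set of specific items."""
        pairs = list(zip(global_yes, detailed_yes))
        tp = sum(g and d for g, d in pairs)      # screener catches a user
        fn = sum(not g and d for g, d in pairs)  # screener misses a user
        tn = sum(not g and not d for g, d in pairs)
        fp = sum(g and not d for g, d in pairs)
        return tp / (tp + fn), tn / (tn + fp)
    # For the mobility screener the pilot found roughly (0.94, 0.99);
    # the home-feature global items scored much lower on sensitivity.

    def recode_residual_difficulty(answer):
        # Analysts' recode: treat "never do the activity without help"
        # as "a lot of difficulty", since the follow-up added little.
        return "a lot" if answer == "never without help" else answer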

Finally, we eliminated sections of the questionnaire that we thought needed further testing. This included several items on the neighborhood environment (e.g., the presence of curb cuts and sidewalks, the steepness of the grade, and the quality of the sidewalks); items specific to wheelchair use (e.g., the presence of widened hallways, whether their bathroom had enough room to turn in a wheelchair, and whether a wheelchair could fit in the car they drive in most often); items to identify mobility devices that were owned but not used and reasons for abandonment; the importance of home features; and the effectiveness of computer and phone adaptations. These areas are ripe for further cognitive and pilot testing.

The final recommended modules are estimated to take 8-10 minutes to administer and are described more fully in Chapter II. A briefer, 2-3 minute module was also created. Detailed pilot data describing the frequencies and performance of the recommended items are provided in Part II of this report.

REFERENCES

Agree EM. 1999. The Influence of Personal Care and Assistive Technology on the Measurement of Disability. Social Science & Medicine, 48(4): 427-443.

Cornman JC, VA Freedman, & EM Agree. 2005. Measurement of Assistive Device Use: Implications for Estimates of Device Use and Disability in Late Life. The Gerontologist, 45: 347-358.

Day H, J Jutai, & KA Campbell. 2002. Development of a scale to measure the psychosocial impact of assistive devices: lessons learned and the road ahead. Disability & Rehabilitation, 24(1-3): 31-7.

Demers L, R Weiss-Lambrou & B Ska. 1996. Development of the Quebec User Evaluation of Satisfaction with Assistive Technology (QUEST). Assistive Technology, 8(1): 3-13.

Fänge A & S Iwarsson. 1999. Physical housing environment: Development of a self-assessment instrument. Canadian Journal of Occupational Therapy, 66: 250-260.

Fowler FJ & CF Cannell. 1996. Using behavioral coding to identify cognitive problems with survey questions. In N Schwarz & S Sudman (Eds.), Answering Questions, pp. 15-36. San Francisco: Jossey-Bass.

Freedman VA & EM Agree. 2005. Linking measures of assistive technology and disability. Paper presented at Workshop on Improving Survey Measures of Late-Life Disability, May 17, 2005. Washington, DC: The Urban Institute.

Granger CV. 1998. The emerging science of functional assessment: our tool for outcomes analysis. Archives of Physical Medicine and Rehabilitation, 79(3): 235-240.

Iwarsson S. 1999. The Housing Enabler: An objective tool for assessing accessibility. British Journal of Occupational Therapy, 62(11): 491-497.

Iwarsson S & A Isacsson. 1996. Development of a novel instrument for occupational therapy assessment of the physical environment in the home--A methodologic study on "The Enabler". Occupational Therapy Journal of Research, 16(4): 227-244.

Lansley P, S Flanagan, K Goodacre, A Turner-Smith, & D Cowan. 2005. Assessing the adaptability of the existing homes of older people. Building and Environment, 40(7): 949-963.

Lawton MP. 1972. The dimensions of morale. In D Kent, R Kastenbaum, & S Sherwood (Eds.), Research, planning, and action for the elderly. New York, NY: Behavioral Publications.

Lawton MP. 1975. The Philadelphia Geriatric Center Morale Scale: a revision. Journal of Gerontology, 30: 85-89.

Mann WC, D Hurren, M Tomita & B Charvat. 1995. The Relationship of Functional Independence to Assistive Device Use of Elderly Persons Living at Home. Journal of Applied Gerontology, 14: 225-247.

Mann WC, J Karuza, D Hurren & M Tomita. 1993. Needs of home-based older persons for assistive devices: The University at Buffalo Rehabilitation Engineering Center on Aging CAS. Technology and Disability, 2(1): 1-11.

Mann WC, D Hurren, M Tomita & B Charvat. 1997. Comparison of the UB-RERC Aging Consumer Assessment Study with the 1986 NHIS and the 1987 NMES. Topics in Geriatric Rehabilitation, 13: 32-41.

Neugarten BL, RJ Havighurst & SS Tobin. 1961. The measurement of life satisfaction. Journal of Gerontology, 16: 134-43.

Ryff CD. 1995. Psychological well-being in adult life. Current Directions in Psychological Science, 4: 99-104.

Scherer M & LA Cushman. 2001. Measuring subjective quality of life following spinal cord injury: a validation study of the assistive technology device predisposition assessment. Disability and Rehabilitation, 23(9): 387-93.

Schiffrin H, J Bethel & D Smith. 2005. Piloting a Technology and Aging Survey Instrument. Task 10: Final Report. Submitted to the Department of Health and Human Services, March 2005.

Smith D & H Schiffrin. 2005. Piloting a Technology and Aging Survey Instrument: Task 7: Behavior Coding Report. Submitted to the Department of Health and Human Services, March 2005.

Steinfeld E & GS Danforth. 1997. Environment as a mediating factor in functional assessment. In S Dittmar & G Gresham (Eds), Functional Assessment and Outcome Measures for the Rehabilitation Health Professional. Gaithersburg, MD: Aspen, pp. 37-56.

Steinfeld E, S Schroeder, J Duncan, R Faste, D Chollet & M Bishop. 1979. Access to the built environment. A review of the literature. Washington, DC: Government Printing Office.

Weich S, E Burton, M Blanchard, M Prince, K Sproston & B Erens. 2001. Measuring the built environment: validity of a site survey instrument for use in urban settings. Health Place, 7(4): 283-92.

Whiteneck GG, CL Harrison-Felix, DC Mellick, CA Brooks, SB Charlifue & KA Gerhart. 2004. Quantifying environmental factors: a measure of physical, attitudinal, service, productivity, and policy barriers. Archives of Physical Medicine and Rehabilitation 85: 1324-35.

Wilson B, B Altman, K Whitaker, VA Freedman, J Cornman & E Agree. 2004. Improving Person-Item Fit: Cognitive Testing Questions about Assistive Technology and the Home Environment with Older Adults. Presented at the annual meeting of the American Association of Public Opinion Research, May 15, 2004, Phoenix, AZ.

NOTES

  1. We calculated the average time in two steps. First, we calculated the average time per question in each section or subsection of the piloted questionnaire for which we had a time stamp. In making these calculations we used sampling weights that realigned our sample to match national distributions of sex, age group, education, and functioning found in the 2003 National Health Interview Survey (NHIS). Then, for each section or subsection, we multiplied the average weighted time per question by the final number of questions in the section or subsection.
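A minimal Python sketch of this two-step calculation, assuming section-level time stamps, item counts, and post-stratification weights in a pandas DataFrame; the column names are assumptions.

    # Minimal sketch of the two-step timing estimate, assuming one row per
    # (respondent, section) with the section's elapsed seconds, the number
    # of items administered, and the respondent's post-stratification
    # weight. Column names are assumptions.
    import pandas as pd

    def estimate_section_timings(df, final_item_counts):
        # Step 1: weighted average time per question within each section.
        def weighted_sec_per_item(g):
            per_item = g["section_seconds"] / g["items_administered"]
            return (per_item * g["weight"]).sum() / g["weight"].sum()
        avg = df.groupby("section").apply(weighted_sec_per_item)
        # Step 2: scale by the number of questions kept in the final module.
        return {sec: avg[sec] * n for sec, n in final_item_counts.items()}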

  2. The main pathway of the proposed conceptual framework resembles the Institute of Medicine’s concepts of functional limitations and disability. From the ICF’s model of interrelationships we adopt a definition of effectiveness that encompasses both activity-specific competence and a broader set of outcomes measured at the level of the person (e.g., participation and quality of life).

  3. Surveys reviewed included the NHIS (2002), the Disability Supplement to the NHIS (1994/95), the National Health and Nutrition Examination Survey (NHANES) (1998), the National Long Term Care Survey (NLTCS) (1999), the Medical Expenditure Panel Survey (MEPS) (1997, 2001), the Medicare Current Beneficiary Survey (MCBS) (2000), the Health and Retirement Study (2002), the Self Care and Aging Survey (1994), the National Home and Hospice Care Survey (2000), the AT/IT Survey (2001), the Women’s Health and Aging Survey I (1995-97), the American Housing Survey (AHS) (1995), the Census (2000), the Survey of Income and Program Participation (2001), and the Panel Study of Income Dynamics (2001).

  4. Instruments we reviewed included QUEST (Demers et al., 1996), the Craig Hospital Inventory of Environmental Factors (CHIEF) (Whiteneck, 2004), MPT (Scherer and Cushman, 2001), OT Fact, Enviro-FIM (Steinfeld & Danforth, 1997), Mann’s Consumer Assessment Survey (Mann et al., 1993, 1995), the Housing Enabler (Iwarsson, 1999), the Built Environment Site Survey Checklist (Weich, 2001), and the housing audit tools from the REKI project (Lansley, 2005).

  5. Quality of life instruments we reviewed included PIADS (Day et al., 2002), Ryff’s measures of well-being (Ryff, 1995), Neugarten’s life satisfaction scale (1961), Lawton’s PGC Morale Scale (Lawton, 1972, 1975), and several measures of health-related quality of life.

  6. Agencies participating in the meeting included the Centers for Medicare and Medicaid Services, HHS, the Agency for Healthcare Research and Quality, the Department of Housing and Urban Development, the Social Security Administration, the National Institute on Disability and Rehabilitation Research, and the National Center for Health Statistics (NCHS).

  7. We spoke with contacts from the Health and Retirement Study; MEPS; MCBS; NLTCS; Study of Midlife in the US; Wisconsin Longitudinal Study; AHS; NHIS; and NHANES.

  8. Seventeen of the 28 participants were female (61%); 22 (79%) were White, five participants were Black (18%), and one participant was American Indian (3%).

  9. Detailed behavior codes are provided for each item in the recommended modules in Part II of this report, available from the corresponding author upon request.

MODULE A: Survey Modules to Measure Assistive Technology and the Home Environment: Recommended 8-10 Minute Modules

This Module is currently available only as a separate PDF file (http://aspe.hhs.gov/daltcp/reports/ATEAdevI-A.pdf) or as part of the PDF version of Part I (http://aspe.hhs.gov/daltcp/reports/ATEAdevI.pdf).

You will need a copy of the Acrobat Reader in order to view it.

MODULE B: Survey Modules to Measure Assistive Technology and the Home Environment: Recommended 2-3 Minute Module

This Module is currently available only as a separate PDF file (http://aspe.hhs.gov/daltcp/reports/ATEAdevI-B.pdf) or as part of the PDF version of Part I (http://aspe.hhs.gov/daltcp/reports/ATEAdevI.pdf).

You will need a copy of the Acrobat Reader in order to view it.

Development of an Assistive Technology and Environmental Assessment Instrument for National Surveys: Final Report

Part I: Recommended Modules and Instrument Development Process

Also available as separate PDF files:

Module A. Survey Modules to Measure Assistive Technology and the Home Environment: Recommended 8-10 Minute Modules

Module B. Survey Modules to Measure Assistive Technology and the Home Environment: Recommended 2-3 Minute Module

Part II: Pilot Study Results for Recommended Items

Also available as separate PDF files:

Module A. Home Environment Module

Module B. Mobility and Other Devices Module

Module C. Effectiveness/Participation Module

Module D. Communication Technology Module

Module E. Residual ADL and IADL Difficulty Module

Appendix I. Crosswalk of Question Numbers from Pilot Test and Final Recommended Modules

Appendix II. Technology and Aging Pilot Survey: Instrument for the Pilot Study

[You will need a copy of the Acrobat Reader in order to view the Portable Document Format (PDF) files.]