An Evaluation of AoA's Program to Prevent Elder Abuse: Final Report. II. EVALUATION DESIGN AND METHODS

08/01/2016

A. Overview

This evaluation of the five state cooperative agreements awarded by AoA's Elder Abuse Prevention Interventions program is an important element in building the evidence base on effective approaches to prevent elder abuse and enhancing existing data collection systems. Each grant covered at least a three-year period during which grantees finalized partnerships with APS and related institutions, developed and implemented the proposed intervention, collected and delivered program data to AoA and the national evaluator (for a minimum of 18 months), and reported semi-annually on achievements, barriers, and strategies to overcome those barriers. The research questions of interest to ASPE and AoA were:

  1. What is the infrastructure within which the interventions rest and the structure of elder abuse prevention interventions?

  2. What are the facilitators of and barriers to implementation of the interventions and how are barriers addressed?

  3. What are the characteristics of victims and perpetrators of elder abuse in the grantees' communities?

  4. What are the characteristics of the interventions and how do victims and perpetrators of elder abuse participate in the grantees' interventions?

  5. What data are available at the state, local, and national levels to measure the outcomes associated with those interventions?

To address these questions, the evaluation assessed the implementation and outcomes of individual grantee prevention interventions.

B. Guiding Framework

As an orienting framework, and in an effort to place the findings of the process evaluation of the prevention interventions in the larger discourse of dissemination and implementation science, we used the Framework for Enhancing the Value of Research for Dissemination and Implementation (Neta et al. 2015) as a way to structure this report. The Framework emphasizes the importance of transparent reporting on key elements across various phases of the intervention and research process--planning, delivery, results and reporting, and long-term outcomes--in addition to addressing cross-cutting issues that interact with each phase, such as multi-level contexts (including history, policy climate, and incentives), multiple stakeholder perspectives, and societal costs. It was developed in an effort to move research to practice and enhance the value of evaluation for researchers, practitioners, and policy-makers.

Originally, the Developing and Conducting an Evaluation of AoA's Program to Prevent Elder Abuse project called for two independent reports, one that synthesized the qualitative infrastructure and implementation findings and another that presented key findings related to service utilization and outcomes data. With approval from ASPE, we adopted the format suggested by the Framework to link the reports to present the findings of the prevention interventions in a holistic manner, and to do so in a way that would facilitate examination of factors that influence or inhibit change. We note, however, that the evaluation questions and constructs were limited and do not encompass all of the domains presented in the Framework.

  • Planning: Clinical, health system, or public health intervention (evidence base, program logic, mechanism of change); Context/Setting characteristics (resources, organizational climate and culture, capacity and readiness); Implementation strategy (evaluability, scalability); Partnership; Dissemination and implementation study design.

  • Delivery: Reach; Adoption; Evolution of intervention and implementation strategy to fit conditions; Implementation; Implementation costs and resources expended.

  • Evaluation Results/Reporting: Effectiveness; Primary outcomes; Broader consequences (e.g., other benefits and harms); External validity of findings including explicit description of setting and setting change; Robustness.

  • Long-Term Outcomes: Sustainability; Evolvability; Transportability; Replication and uptake; Conditions under which findings hold; Economic evaluation (e.g., cost-benefit/effectiveness, budget impact, replication/implementation cost).

  • Goals: Improvement in population health, health equity, social well-being, and health system efficiency.

C. Data Sources and Data Collection Procedures

We used a mixed-methods approach to conduct the evaluation of the elder abuse prevention interventions. To address the first two research questions that focus on examining the implementation and infrastructure of the prevention interventions, we conducted site visits with each of the five grantees and met with grantee staff, their partners, and providers that implemented the elder abuse prevention interventions. We developed a Discussion Guide to structure the visits, which addressed: (1) Theoretical/Clinical Basis of the Prevention Intervention; (2) Elements of the Intervention Model and Implementation; (3) Partnerships; (4) Implementation Context, including facilitators and barriers; (5) Service Utilization; (6) State and Local Data Collection Systems; and (7) Project Replicability and Lessons Learned.

The Discussion Guides were informed by the description of the program model and operations provided in each grant application, initial and subsequent conference calls with the grantees, evaluability assessments, and program documents provided by the grantees and AoA. Given the diversity of the grantees, the Guides were tailored to each site and respondent type. NORC conducted site visits and interviews with the five grantees and their partners in late 2014 and early 2015. Following the site visits, we prepared summaries that were shared with the grantees, ASPE, and AoA. We used this information as the basis for a series of Research Briefs for each grantee that were disseminated during the White House Conference on Aging in 2015. We also periodically reviewed grantee progress reports provided by AoA.

To address research questions 3-5 that focus on describing the characteristics of participants, the interventions themselves and available data, we developed a Cross-Grantee Data Analysis Plan that called for the collection of core data elements on client characteristics, program activities and outcome measures across grantees. Given the heterogeneity in scope and program features of the grantee initiatives, this unified approach allowed for comparison of client and service utilization characteristics and outcomes from the diverse interventions. The systematic collection of core data elements enabled the preparation of risk factor profiles on victims/care recipients and perpetrators/caregivers served by each intervention.

The core set of data elements describes demographic, psychological/physical health, and social well-being indicators that are risk factors for elder abuse. Other elements pertain to referral source, type(s) of abuse, service utilization, and outcomes. Identification of the common data elements for inclusion in the cross-site framework was guided by a balance between any additional burden placed on the grantees and the increased scientific rigor achieved from collecting identical information that could be compared across sites. We note, however, that given the heterogeneity of the data and gaps within them, the data could not be reliably harmonized and pooled across all grantees.

Following the site visits in 2015, NORC communicated with the grantees to discuss the data sharing and data acquisition process. Working closely with NORC's Data Use Agreement (DUA) Committee and each grantee, we executed DUAs with each grantee and their partners, as appropriate, and specified the variables needed for analysis. Data transfers between the grantees and NORC's Data Enclave, a secure, protected environment, were conducted through securely encrypted transfer of incoming confidential data via National Institute of Standards and Technology-certified secure file transfer protocol applications. The grantees provided data dictionaries and assisted the team by reviewing analyses. All data were returned or destroyed per the terms of the DUAs at the conclusion of the study.

D. Available Data from Adult Protective Services and Prevention Interventions

All grantees provided project-level data specific to their prevention intervention for the evaluation. AK DSDS, TX/WellMed, and UTHSC provided APS data for their regions. The grantees provided the following data under the terms of the DUAs:

  • AK DSDS provided nine data files, including: (1) Eligibility and Referral Form; (2) Action Goals; (3) Follow-up Survey; (4) Intake Assessment Form; (5) Vulnerability to Abuse Screening Scale (VASS) Form; (6) Eligibility and Referral Form; (7) Elder Services Case Management (ESCM) DS3 database cases through December 15, 2014; (8) ESCM Harmony database cases through May 27, 2016; and (9) ESCM Paperwork Tracker.

  • For NYSOFA, we extracted four Excel data files from the intervention's web-based tool: (1) Eligibility; (2) Intake; (3) Tracking; and (4) Outcomes. E-MDT coordinators from Lifespan and NYCEAC collected data from multiple sources, including APS, to populate the database.

  • The Texas Department of Family and Protective Services (TX DFPS) provided two files: (1) APS data on WellMed patients and perpetrators; and (2) client logs that were prepared by the APS Specialists. The WellMed Charitable Foundation provided a data file of Elder Abuse Suspicion Index (EASI) screening tool results.

  • USC provided four project-related data files containing demographic variables and outcome measures. USC did not provide APS data.

  • UTHSC provided six data files: a demographic file and five files pertaining to key measures, as well as services data from APS.

It should be noted that important variations in data sources within grantee interventions presented challenges to identifying participants across data sources when unique identifiers were not readily available to enable data linkage. For example, the TX/WellMed intervention implemented two strategies to identify at-risk elders and drew on three sources of data. The primary (universal) prevention component involved the administration of the EASI screening tool to WellMed patients. The data were first recorded by hand but ultimately folded into WellMed's electronic medical record system. WellMed shared screening results for 11,426 patients.
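The linkage problem described above can be sketched in a few lines. In this hypothetical illustration (all names, fields, and IDs are invented, not drawn from grantee data), a screening log and an APS case file describe overlapping people, but only the APS file carries a stable client ID, so records must be matched on softer keys such as name and date of birth:

```python
# Hypothetical records from two data sources; only the APS file has a stable ID.
easi_screens = [
    {"name": "J. Doe", "dob": "1940-05-02", "easi_positive": True},
    {"name": "M. Roe", "dob": "1935-11-17", "easi_positive": False},
]
aps_cases = [
    {"client_id": "A-101", "name": "J. Doe", "dob": "1940-05-02"},
    {"client_id": "A-102", "name": "J. Doe", "dob": "1951-03-09"},  # same name, different person
]

def link_records(screens, cases):
    """Attempt linkage on (name, dob) because no shared unique ID exists."""
    index = {(c["name"], c["dob"]): c["client_id"] for c in cases}
    # Screens with no match keep client_id=None, i.e., they cannot be linked.
    return [dict(s, client_id=index.get((s["name"], s["dob"]))) for s in screens]

linked = link_records(easi_screens, aps_cases)
```

Even this toy example shows why matching degrades quickly: shared names must be disambiguated by a second field, and any screen without a counterpart in the APS file simply cannot be linked.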

The second strategy involved embedding two APS Specialists at WellMed clinics to serve as ongoing resources for clinical staff. Their services included delivering training on the intervention to WellMed staff, participating in patient care coordination (PCC) meetings, and providing consultation to both WellMed and APS staff. During the period under study, 588 WellMed patients were brought to the attention of the two APS Specialists. Depending on the nature of the problem identified, the patient could be served internally through WellMed's Complex Care program, which provides an array of social work services and referrals, referred to APS, or both. To the extent possible, the two APS Specialists tracked information and outcomes on the patients with whom they engaged, including information on referral sources, administration of the EASI tool, patient history with APS, and APS referral, among others.

TX DFPS's APS served as the third and main data source for the TX/WellMed intervention. These data are collected on all their clients (WellMed patients received standard APS services) and are not specifically tailored to this study. TX DFPS shared data with the evaluation team that most closely corresponded to the requested core data elements. A total of 310 WellMed patients were served by APS during the study period. Data were not always available for all 310 patients or the 415 perpetrators for all of the data elements, and we note the N used in each table. Other differences in the numbers reported were related to changes in APS' database. Toward the end of the study period in September 2014, APS began using Strategies that Help Intervention and Evaluation Leading to Decisions assessment tools in their casework, replacing their earlier protocol.

Given TX/WellMed's three distinct forms of data collection, as well as their differing collection periods, we were not able to identify patients who overlapped across services. For this reason, we present the available data in the tables and indicate the data source.

E. Analytic Approach

The qualitative data from the site visits and document reviews were analyzed to identify commonalities and differences across the grantees' prevention interventions. Major themes regarding infrastructure, planning and implementation that emerged from the analysis are presented in the planning and intervention delivery sections of this report.

Quantitative data from the study are summarized using descriptive statistics. The "core data elements" that grantees collected were used to tabulate these statistics, including frequencies, means and percentages (as appropriate for continuous and categorical data). In the following tables, we present participant characteristics, the type of services and referrals they received, the characteristics of interventions, as well as participant outcomes.
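The descriptive statistics described above can be illustrated with a minimal sketch using Python's standard library (the participant records and variable names here are invented for illustration, not actual grantee data):

```python
from statistics import mean
from collections import Counter

# Hypothetical participant records; field names are illustrative only.
participants = [
    {"age": 78, "abuse_type": "financial"},
    {"age": 85, "abuse_type": "neglect"},
    {"age": 72, "abuse_type": "financial"},
]

# Mean for a continuous variable.
mean_age = mean(p["age"] for p in participants)

# Frequencies and percentages for a categorical variable.
counts = Counter(p["abuse_type"] for p in participants)
percentages = {k: 100 * v / len(participants) for k, v in counts.items()}
```

The choice of statistic follows the variable type, as in the report: means for continuous measures such as age, and frequencies or percentages for categorical measures such as type of abuse.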

Prior to carrying out these analyses, the core data elements collected by grantees were harmonized, where possible. Because common measurement methods were not a required element of this project, only a limited number of variables permitted harmonization. These included demographic information, household characteristics, type of abuse, and intervention characteristics. Categories were created for these variables based on the most granular level of data that could be captured across all five grantees. More detailed information on how data were harmonized for specific variables can be found in Appendix A.
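The harmonization step described above, collapsing site-specific codes into the most granular categories shared by every site, can be sketched as follows. The category labels and site coding schemes here are invented for illustration; the actual variable mappings appear in Appendix A:

```python
# Hypothetical site-specific coding schemes for type of abuse.
SITE_A_ABUSE = {"1": "financial", "2": "physical", "3": "emotional"}
SITE_B_ABUSE = {"F": "financial", "P": "physical", "E": "emotional", "S": "sexual"}

# Suppose Site A never distinguished sexual abuse; the common scheme is then
# limited to the categories that every site can support, and any category
# missing at one site collapses to "other" across all sites.
COMMON = {"financial", "physical", "emotional"}

def harmonize(raw_code, site_map):
    """Map a site-specific code into the cross-site category scheme."""
    label = site_map.get(raw_code)
    return label if label in COMMON else "other"
```

This illustrates why harmonization moves toward the coarsest common level: detail recorded at only one site cannot be compared across sites and must be folded into a broader category.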

In contrast to the demographic and household information, the varied measures that were used to assess participants' physical health and psychosocial characteristics across grantees precluded harmonization. In addition to using different measures, in some instances grantees employed the same question items but used different response options and scoring methods. The diversity of interventions involved different target populations (victims, at-risk elders, and care recipients), and this variability further added to the specificity of the data for many measures. The tables summarizing this information therefore present the original measures of physical health, psychological, and social characteristics used by each grantee. In any instance where a grantee applied a measure that was unique to its intervention (whether in the choice of question items, response options, or scoring methods), data are presented on a separate line in the tables.

F. Limitations

Several limitations of the data analyses deserve mention. To varying degrees, missing data were a common issue across all grantees. While incomplete reporting was due in part to participant attrition (particularly in collecting data on elders over time), the extent to which data collection was feasible depended largely on the extent to which the interventions had direct contact with participants and the degree to which the grantee had control over the process, tracking, and documentation of service delivery. Interventions that delivered services directly to participants, including UTHSC, AK DSDS, and USC, were better positioned to identify and collect data from participants. Grantees that built on an existing service infrastructure and relied on existing APS data to inform the evaluation, including NYSOFA and TX/WellMed, were more limited in their ability to collect new data or to obtain relevant data from the myriad services to which participants were referred.

It should be noted that all interventions drew on each community's existing infrastructure of services to some degree. For that reason, a key challenge for all grantees was tracking and documenting the full range of services and referrals that were provided to participants throughout the duration of their interventions. Collecting more detailed information on the frequency and intensity, or "dose," of each service, as originally intended, was not feasible. Given grantees' limited control over the full range of service delivery, as well as participant follow-through on referrals, the data are best suited to broadly describing the common types of services provided or referred by grantees; they are not well-suited to confirming all the services to which participants were referred or whether participants received those services.

Another important limitation of the study concerns our ability to describe risk factor profiles in the absence of data collection on a comparison group. At the outset of the study, a set of core data elements was identified for data collection across all grantees on intervention participants. The basis for inclusion was guided by prior research on risk factors for elder abuse, such as cognitive impairment and low levels of social support. While data were collected on these measures for intervention participants, we do not have parallel data on elders who are not at-risk/victims or on elders who did not receive intervention services. As a result, while we are able to describe the profile of participants in the study, we are unable to understand the role that these risk factors play in abuse or intervention outcomes. It is important to emphasize that data collection on a comparison group was outside the scope of grantees' projects. However, this line of inquiry is important for future research, and can be informed by findings from the current project.

The profiles of participants speak to the enormous task presented to these interventions. Participants often had complex and evolving needs that cannot be easily addressed within a single intervention. Where data were available, we found that the population served is characterized by a number of vulnerabilities. For example, 85 percent of USC's care recipients screened positive for dementia. A third of UTHSC's victims experienced mild to severe cognitive impairment. Eighty-eight percent of NYSOFA's participants reported being socially isolated. A quarter of TX/WellMed patients who were referred to APS were physically disabled or had impaired mobility. Over 40 percent of AK DSDS's participants had minimal to severe symptoms of depression. Furthermore, among the interventions that collected information on all types of abuse, approximately 40 percent of participants were identified as experiencing at least two forms of abuse. Treating one form of abuse is a considerable task; co-occurring forms of abuse present additional challenges.

While a great deal of information has been collected on participants themselves, the limited data that could be collected on service utilization and referral preclude a full understanding of the range of services to which participants were referred and which of those services they completed. Within the confines of our study, we are unable to determine whether particular services or mixes of services were especially helpful to participants across interventions, or for whom particular interventions are best suited. Tracking information on service utilization and referral is challenging even for services provided by the program itself, and particularly so for services that are referred out. Interventions must rely on participants' reports about services received or on reports from the myriad providers themselves, which can be very difficult to obtain.

At the same time, drawing on the community's existing service infrastructure is a necessity and reality of all interventions. Rarely can one intervention provide all the services that a vulnerable adult needs. Even when needs were identified, however, interventions could not guarantee that participants followed through on referrals. While lack of participation in services may be due in part to choice, in other instances resources may not have been available to secure those services. Based on information gleaned from our site visits, this may occur, for example, when a participant lacks access to transportation to receive those services. In other cases, the service was not available, had a waitlist, or required additional funds from the participant.

This report focuses only on the core set of data elements that were requested from grantees and, to a limited extent, outcomes that are specific to each intervention. Each grantee has collected additional data that are relevant to their particular intervention, which are not represented here. A number of grantees, however, have already begun disseminating or intend to disseminate findings that are specific to their intervention. These studies offer a deeper investigation of the individual interventions and will provide an important complement to the cross-site analyses presented herein.

While these data have their limitations, little research to date has been carried out on elder abuse prevention interventions, alone or in combination. The data collected by grantees in this project represent a rich source of new information on the characteristics of victims, at-risk elders, care recipients, perpetrators and caregivers and the interventions that delivered services to them. They provide a springboard for additional research and could be used in numerous ways to inform future studies and interventions.