Evaluation Design Options for the Long-Term Care Registered Apprenticeship Program. 6.2. Survey Option for Apprentices

09/01/2011

Survey data collection and analysis are a potential evaluation option for addressing research questions about the apprenticeship experience on topics not available in secondary data. For the LTC RAP evaluation, a survey of apprentices would provide systematic quantitative information on the experiences of apprentices in the LTC RAP compared with direct care workers not in the LTC RAP.

Research Questions

A survey of direct care workers who have participated in the LTC RAP (including those who did not complete the program) and of a comparison group would address a range of research questions that cannot be answered directly by employers, sponsors, or partnering organizations or through administrative data such as the LEHD. Potential research questions center on how the LTC RAP affects:

  • Job satisfaction
  • Intent to leave the job and the long-term care field
  • Participation in welfare programs (SNAP, TANF, Medicaid, etc.)
  • Relationship with supervisor, other staff members, and clients
  • Confidence in caregiving abilities
  • Knowledge and skills in caring for people with disabilities
  • Time and financial investment on the part of apprentices to participate
  • Opinions about the LTC RAP
  • Opinions about other training for direct care work
  • Future career plans

These outcomes are generally more difficult to measure than outcomes like annual earnings, wages, and job tenure. Ideally, the outcome measures should be tested to ensure that they are valid and reliable and that there is variation in the responses. For example, commonly used measures of job satisfaction typically find very high proportions of respondents who are “extremely” or “very” satisfied with their job (Bishop et al., 2009). Thus, it might be difficult to measure the impact of the LTC RAP on this dimension.

Brief Overview of Design

The suggested design would involve a survey of apprentices in facilities/agencies operating apprenticeship programs and a matched sample of non-apprentices at a single point in time. Thus, this survey would be a cross-sectional design; it would be able to find associations between variables, but it could not claim that the LTC RAP caused the differences because it does not measure changes over time. All direct care workers who started the apprenticeship program and are still working for the facility/agency employer that administered the LTC RAP would be included. Non-apprentices would be employees either of branches within the same organization that are not implementing the apprenticeship program or of wholly different long-term care provider organizations not implementing apprenticeship programs.

A telephone survey is recommended to obtain data from apprentices. Given the number of sites and the small number of apprentices at most sites, an in-person survey would be prohibitively expensive. In addition, direct care workers generally have low levels of education and literacy, and may also have cultural differences, that make a mail survey problematic; workers may have difficulty reading and interpreting the questions. Moreover, similar surveys of CNAs (the National Nursing Assistant Survey) and HHAs (the National Home Health Aide Survey) have been successfully conducted by telephone.

The survey would be administered as a computer-assisted telephone interview (CATI), which would ensure standardized question administration and reduce data entry costs. Also to minimize costs, the survey would be conducted only in English and Spanish. The survey would be conducted over a 4-month period. Contact information, such as telephone numbers and addresses, for apprentices and comparison group workers would be obtained from employers. As a practical matter, obtaining contact information for apprentices who have left the employment of the provider that trained them would be difficult if not impossible and would not be attempted. The survey administrator would vary the days and times of contact attempts to maximize the possibility of reaching sample members to schedule the full interview.

Given the similarity in the goals of the apprenticeship programs across the four occupations of the LTC RAP, and the relatively small numbers of apprentices in some occupations such as HHAs and HSSs, a single survey across all occupations is recommended. Even with the entire universe of apprentices, the number of completed surveys for HHAs and health care support specialists would be too small to analyze separately. To control for differences across occupations, the four main LTC RAP occupations would be entered as control variables in the multivariate analyses. Subgroup analyses of CNAs, the largest occupation, and DSSs, the second largest occupation, would be possible if there is a large enough number of respondents.

Given the relatively small number of employers/sponsors and of apprentices, the sample design should include all current and past apprenticeship sponsors/employers and all apprentices currently employed by these employers/sponsors, including those who have already completed their apprenticeships and those who did not complete the apprenticeship. The evaluator will need to identify, through secondary data (such as RAPIDS) or directly through employers, those apprentices who are still working for them. The comparison group of workers who have not participated in the LTC RAP will be drawn from the same or, more likely, other organizations providing similar types of services.

The sample should result in approximately the same number of completed surveys for apprentices and for comparison group members. To achieve this result, the evaluator will likely need to oversample the comparison group, whose response rate may be lower because of their lack of knowledge of and interest in an evaluation of the LTC RAP. For prior surveys of CNAs and HHAs, ASPE/NCHS achieved roughly 75% response rates for facilities/agencies and 75% response rates for workers, giving an overall response rate of about 56% (0.75 × 0.75).

Conservatively, using RAPIDS data, a slightly lower worker response rate of about 67% among the approximately 1,500 apprentices currently in training at the 80 current employers would yield about 1,000 completed surveys for apprentices. Consistent with the 2004 National Nursing Assistant Survey, which provided a monetary incentive to workers to encourage participation, this survey would provide a $35 incentive payment. We do not anticipate paying incentives to employers for providing contact information.
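The arithmetic behind these yield projections is simple enough to script. The sketch below is illustrative only; the response rates and counts are the planning assumptions stated above, and the cascading-rate calculation reflects how facility-level and worker-level rates combine.

```python
def expected_completes(n_workers: int, worker_rate: float,
                       facility_rate: float = 1.0) -> int:
    """Expected completed surveys; stage-level response rates multiply."""
    return round(n_workers * facility_rate * worker_rate)

# Prior ASPE/NCHS surveys: ~75% facility-level and ~75% worker-level response.
print(expected_completes(1500, 0.75, 0.75))   # 844 (~56% overall)

# Planning assumption here: ~67% response among ~1,500 current apprentices.
print(expected_completes(1500, 0.67))         # 1005, i.e., "about 1,000"
```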

The sample would include a comparison group of workers from providers not sponsoring apprenticeships or from non-apprenticeship-sponsoring branches of parent organizations that have apprenticeships in some, but not all, branches. However, only a few sponsoring employers have multiple branches, so within-organization selection of comparison group members would rarely be possible. Therefore, most, if not all, of the comparison group would need to be drawn from non-apprenticeship-sponsoring organizations, which would have to be recruited to the study.

To provide a close comparison to apprentices and their sponsoring employers, comparison organizations ideally would be in the same geographic area and have comparable size, ownership status, payer mix, and other important characteristics. These data are routinely collected for nursing homes and home health agencies by CMS, but they are not available at the national level for other types of providers participating in the LTC RAP. For these other provider types, many long-term care employers are members of state and national associations, and comparison group employers could be identified through their membership rosters. In addition, RTI International developed a sample frame of residential care facilities for the 2010 National Survey of Residential Care Facilities (Wiener et al., 2010), which could be used if ASPE and NCHS grant permission. NCHS recently awarded a contract to RTI International to update the sample frame for residential care facilities in 2012.

Motivating non-apprenticeship-sponsoring facilities to participate will be difficult because of a lack of interest in or knowledge about the LTC RAP and the perceived cost of participating. Moreover, facilities may not believe it is in their best interest to have outsiders asking their workers about subjects such as job satisfaction, relationships with supervisors, and wages and benefits. Employers may also be reluctant to release personal contact information or Social Security numbers of workers without their explicit permission, even if the employers are supportive of the survey. Letters of support from provider associations and high-ranking HHS and DOL officials may help with recruitment.

Comparison group members need to closely resemble apprentices on selected characteristics. Therefore, comparison group direct care workers ideally should be prospectively matched with apprentices, potentially using employment history/earnings, age, gender, race, education, or similar factors, but doing so would be difficult. Alternatively, the evaluator could approximate such matching retrospectively through statistical adjustments if sufficient data were collected from both apprentices and non-apprentices; a sketch of one such approach follows. Potential selection bias may still occur if important variables are not collected during the survey. For example, apprentices and non-apprentices may vary on unobserved characteristics (e.g., altruism or motivation) for which data are not collected or successfully measured.
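One common way to implement the retrospective adjustment is propensity-score matching. The sketch below, in Python with scikit-learn, is illustrative only: the covariates and column names are hypothetical stand-ins for the matching factors listed above, not variables the survey is committed to.

```python
# Illustrative retrospective matching sketch; all column names
# ("apprentice", "age", etc.) are hypothetical survey variables.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_comparison_group(df: pd.DataFrame) -> pd.DataFrame:
    """1:1 nearest-neighbor match of comparison workers to apprentices
    on an estimated propensity score (matching with replacement)."""
    covariates = ["age", "female", "education_yrs", "prior_earnings"]
    treated = df["apprentice"].to_numpy()

    # Propensity score: estimated P(apprentice = 1 | covariates).
    pscore = (LogisticRegression(max_iter=1000)
              .fit(df[covariates], treated)
              .predict_proba(df[covariates])[:, 1])

    # For each apprentice, pick the comparison worker with the
    # closest propensity score.
    comp_idx = np.flatnonzero(treated == 0)
    nn = NearestNeighbors(n_neighbors=1).fit(pscore[comp_idx].reshape(-1, 1))
    _, pos = nn.kneighbors(pscore[treated == 1].reshape(-1, 1))
    matched = df.iloc[comp_idx[pos.ravel()]]
    return pd.concat([df[df["apprentice"] == 1], matched])
```

Matching with replacement keeps every apprentice in the analysis at the cost of reusing some comparison workers; regression adjustment on the matched file can then absorb residual differences.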

Estimated Statistical Power

Preliminary calculations of the statistical power needed to detect differences in outcomes such as satisfaction or intent to leave suggest that the survey would need 1,000 apprentice respondents and 600 comparison group workers, for a total sample of 1,600 respondents. These targets for completed interviews are of this magnitude to allow sufficient power for subgroup analyses of CNAs and DSSs, the two largest occupation groups. Respondent group sizes in excess of these numbers would be needed to provide enough statistical power for subgroup analyses of HHAs and HSSs.

Measures such as job satisfaction and intent to leave one’s job have relatively little statistical variation (Bishop et al., 2009); therefore, relatively large numbers of apprentice and comparison group members are needed to detect statistically significant differences as small as 5 percentage points at a probability of less than 0.05 (p<0.05). For a binary outcome variable in a logit analysis, such as satisfied/not satisfied expressed on a 100-percentage-point scale, one could detect a difference as small as 1.25 percentage points, assuming a mean of 82 percentage points, a standard deviation of 10 percentage points, 1,000 apprentices, and 600 comparison group members. We believe a sample of this size would provide sufficient power for assessing impact.
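These figures can be checked with a standard two-sample power calculation. The sketch below treats the outcome as a continuous percentage with the mean and standard deviation stated above; it is a planning approximation, not the report's original calculation.

```python
from statsmodels.stats.power import TTestIndPower

# Assumptions from the text: SD 10, alpha 0.05,
# n1 = 1,000 apprentices, n2 = 600 comparison workers.
effect_size = 1.25 / 10  # 1.25-point difference in SD units (Cohen's d)
power = TTestIndPower().power(effect_size=effect_size, nobs1=1000,
                              alpha=0.05, ratio=600 / 1000)
print(f"Power to detect a 1.25-point difference: {power:.2f}")
```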

Sample Frame Construction for Programs and Apprentices

To identify sample frame members, lead letters from high-ranking HHS and DOL officials and letters of support from the relevant provider associations would be prepared and sent to prospective employers. These letters would provide assurances that the privacy of participating employers and employees will be protected. Senior staff from the evaluator would contact employer sponsors by phone to introduce themselves, address any remaining questions, and solicit a commitment to participate. Once they agree to participate, employers would provide contact information (e.g., names, telephone numbers, and addresses) for their currently employed workers who ever participated in the LTC RAP and, where possible, for non-apprentice workers from other branches. Similar information would be obtained from the comparison group facilities. Sample members would be sent a pre-notification letter 1 week before interviewing is scheduled to give them advance notice that they have been chosen for a survey, establish the survey's legitimacy, and provide information about the survey.

Survey organizations often experience problems obtaining valid telephone numbers for potential respondents. Lower-income people such as long-term care workers may not have listed landlines, and it is unlikely that cell phone numbers could be obtained independently of the employers. Employers may or may not be willing to share home or cell phone numbers of workers, and it would not be appropriate to survey workers while on the job because of potential fear of retaliation by management if they criticize the facility/agency or their supervisors. Thus, the proportion of apprentices who are successfully contacted may be lower than anticipated.

Domains on Which Information Will Be Gathered

The survey will collect information on the outcomes of interest (e.g., satisfaction, intent to leave, new knowledge and skills attained) and also on an array of other domains which will be used in analyses to statistically control for factors not related to the effect of apprenticeship. These domains include:

  • Worker background (e.g., demographics, socioeconomic status, family relationships, residence status).

  • Personality inventory to assess fit with caregiver role.

  • Employment history (e.g., number and types of previous jobs, relative prior pay and availability of benefits, previous training, life/employment skills).

  • Availability and uptake of fringe benefits offered.

  • Organizational culture (e.g., control over work, relationship with peers and supervisors, opportunity to work in teams, and other characteristics thought to affect satisfaction, intent to leave, and confidence in new knowledge and skills).

  • Training before the apprenticeship (e.g., hours and source of basic training, whether the worker previously had a mentor).

  • Views about apprenticeship (e.g., motivation for participation, what they learned, best/worst aspects, non-paid time invested, out-of-pocket costs, and views of mentorship, OJT, and related training instruction).

Questionnaire Development

The evaluator would identify the specific domains to be included in the questionnaire, along with potential questions and issues related to data collection. After obtaining feedback from ASPE and DOL, the evaluator would develop the draft questionnaire. The evaluator would prepare an Office of Management and Budget (OMB) clearance package including the essential supporting statement sections (e.g., justification, effort to identify duplication, methods to minimize burden, cost and response burden estimates, publication plans, and statistical methodology) and relevant information on research questions and survey protocol. The final OMB clearance package would include the final questionnaire.

Data Collection Process

The survey would be conducted using a CATI system and last approximately 30 minutes. Once an interviewer makes initial contact with a potential respondent, the interviewer would schedule a time to administer the survey. At that time, the interviewer would administer the introduction, which would include obtaining informed consent from the respondent and giving assurances of the privacy of responses. The interviewer would then administer the survey, following the script that the CATI program displays on the computer screen. The CATI system conducts edit checks for appropriate response values and correct use of skip patterns to increase data accuracy during the interview. As data are collected, project staff would review responses daily and generate frequencies and means of key variables to ensure that the data look as expected and no unusual response patterns are observed. Similarly, project staff would monitor response rates daily for the apprenticeship and comparison groups and for the sample overall. Should response rates be lower than expected, staff would implement corrective measures, such as varying the number of call attempts or the call schedule or developing more targeted scripts to address refusals or questions that sample members may have.
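A daily monitoring step of this kind can be a short script run against an extract from the CATI case management system. The sketch below is illustrative; the file name, column names, and group labels are hypothetical.

```python
import pandas as pd

# Hypothetical daily extract from the CATI case management system:
# one row per sample member, with group and completion status.
cases = pd.read_csv("cati_cases.csv")

# Frequencies of a key variable, to flag unusual response patterns.
print(cases["satisfaction"].value_counts(dropna=False))

# Response rates by group (apprentice vs. comparison) and overall.
print(cases.groupby("group")["completed"].mean())
print("overall:", cases["completed"].mean())
```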

At the conclusion of the data collection period, the data would be cleaned (e.g., provide standardized codes for “yes”, “no”, “refusal” and “don’t know” responses) and a dataset would be created for analysis. As part of the creation of the final dataset, programmers would prepare an accompanying codebook containing questions and responses, as well as key data collection variables such as date of interview and final disposition code for any non-interviews.
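A minimal cleaning pass of the kind described above could look like the following sketch; the numeric code values are illustrative assumptions, not a documented coding scheme.

```python
import pandas as pd

# Hypothetical standardized codes; the actual scheme would be set
# in the survey documentation.
RESPONSE_CODES = {"yes": 1, "no": 2, "refusal": 7, "don't know": 8}

def standardize(responses: pd.Series) -> pd.Series:
    """Map raw CATI response strings to standardized numeric codes;
    unrecognized values become NaN for review."""
    return responses.str.strip().str.lower().map(RESPONSE_CODES)
```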

Time Frame to Collect and Analyze Data

We anticipate that the entire survey option would take approximately 2.5 years to complete. The activities would include questionnaire and sample frame design (6 months), preparation of OMB package and clearance (8 months), data collection (6 months), data cleaning (2 months), and analysis and reporting (8 months).

Ballpark Cost

We estimate the total cost of conducting the survey at approximately $450,000, which includes questionnaire and sample frame design, translation of the survey into Spanish, preparation of the OMB package and clearance, data collection in English and Spanish, data cleaning, and analysis and reporting. Costs for the actual data collection would be approximately $335,000, which includes programming the 30-minute, closed-item, 75-question survey into the CATI system; interviewer training; developing and mailing all pre-notification letters; delivering an English- and Spanish-language CATI survey over a 4-month period; multiple call attempts over approximately 10 days; a survey case management system to schedule and track calling attempts and survey status; a $35 incentive for survey completion; and cleaning the data and preparing a SAS dataset with survey frequencies and documentation of all coded items.

Main Statistical Methods for Analyzing Data

The data would be analyzed using both descriptive and multivariate regression techniques. Means for all analysis variables would be prepared for all respondents and for apprentice versus comparison group members. Descriptive analyses comparing means (e.g., age) and proportions (e.g., gender) and cross-tabulating outcome measures (e.g., satisfaction, intent to leave) against selected characteristics of interest (e.g., employer profit status, worker job tenure) would be produced. Descriptive analyses without testing for statistically significant differences could be produced for apprentices with varying characteristics, but small sample sizes for given characteristics (e.g., those with any specialty training, various occupations, and source of related training instruction) would prevent much statistical significance testing for differences on outcomes.
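The descriptive comparisons could be produced with a short script along the following lines; the analysis file and variable names are hypothetical.

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("ltc_rap_analysis.csv")  # hypothetical analysis file
apprentices = df[df["group"] == "apprentice"]
comparison = df[df["group"] == "comparison"]

# Comparison of means (e.g., age) between groups.
t_stat, p_value = stats.ttest_ind(apprentices["age"], comparison["age"],
                                  nan_policy="omit")
print(f"age: t={t_stat:.2f}, p={p_value:.3f}")

# Cross-tabulation of an outcome with an employer characteristic.
print(pd.crosstab(df["satisfaction"], df["employer_profit_status"],
                  normalize="columns"))
```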

Multivariate regression would be used to analyze the effects of participating in apprenticeship on outcome measures representing the key research questions. The outcome measures are typically multilevel (e.g., extremely satisfied, somewhat satisfied, somewhat dissatisfied, extremely dissatisfied) and would be analyzed using multinomial logit, or the levels could be collapsed into two and analyzed using logit. The principal independent policy variable would be a yes/no indicator of any participation in apprenticeship. The basic empirical model, controlling for apprenticeship and the other domains hypothesized to affect the outcome of interest, would be:

Outcome = f(apprenticeship indicator [yes/no], demographic and socioeconomic status, family relationships, residence status, personality type, employment history, employer benefits, organizational culture, pre-apprenticeship training) + error
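A minimal sketch of this specification, using statsmodels, is shown below. The variable names are hypothetical stand-ins for the domains listed in the model, and the data file is an assumed cleaned analysis extract.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ltc_rap_analysis.csv")  # hypothetical analysis file

# Principal policy variable plus illustrative controls for the domains above.
controls = ["age", "female", "education_yrs", "employment_history_yrs",
            "benefits_index", "org_culture_score", "prior_training_hrs"]
X = sm.add_constant(df[["apprentice"] + controls])

# Multinomial logit for the multilevel satisfaction outcome (coded 1-4).
result = sm.MNLogit(df["satisfaction"], X).fit()
print(result.summary())
```

Collapsing the four outcome levels into two and substituting sm.Logit would give the binary variant mentioned above.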

There may be enough CNAs and DSSs in the data to estimate regressions on those subgroups of apprentices, but there would not be enough apprentices in the remaining occupations to perform similar analyses. However, it is unlikely that there would be sufficient statistical power to test for statistically significant differences in apprenticeship characteristics (e.g., specialty versus only advanced competencies) within the subgroup of apprentices, because few apprentices with such characteristics are likely to be represented in the data. Because we anticipate that the universe of employer/sponsors and the universe of their currently employed apprentices would be used to construct the sample frame, descriptive and multivariate analyses would not have to control for the effects of the sample design.
