The main purpose of this report is to provide information relevant to assessing designs for any future evaluation of LTC RAPs. Although the analysis of evaluation options will appear in a subsequent deliverable, this section delineates aspects of the LTC RAPs visited that are relevant to future evaluations. Before considering how the site characteristics inform potential evaluations, it is important to recognize the potentially wide scope of evaluations. The outcomes of interest certainly involve workers and employers but could also include client and funder outcomes. For workers, the key outcomes are any costs (such as foregone earnings) and how well LTC RAPs improve their skills, wages, job stability, job satisfaction, and long-term earnings. For employers, the key outcomes are the added training costs as compared with the benefits of a more skilled workforce. Were LTC RAPs to increase productivity and quality, clients might receive higher quality care without added costs to taxpayers or other funders. The discussion in this section focuses on worker and employer impacts.
Several characteristics of LTC RAPs are relevant to evaluation designs, including: (1) the goals and activities of the programs; (2) the duration of the programs; (3) the size and scalability of programs; (4) the availability of data; (5) the recruitment and selection of apprentices into the programs; and (6) the implications of sponsors' use of apprentices to improve non-apprentice staff performance.
One issue is whether the sites' program goals and the intervention are generally uniform across sites. Although the programs are registered through DOL, sponsors have considerable latitude in deciding their goals and activities. That said, the goals of the LTC RAPs visited are roughly consistent across the programs: to improve the skills of the long-term care workforce in order to improve quality of care and to create more attractive jobs for apprentices who perform caregiving. Achieving these goals helps sponsors meet state certification requirements, reduce errors in caregiving, reduce turnover, and create career opportunities for apprentices. The activities of LTC RAPs are also generally consistent across the sites visited. Most sites used the LTC RAP for advanced training and mentoring of employees who had already received basic training and who had leadership or personal qualities for which they were selected into the apprenticeship. One site used its LTC RAP for entry-level training of all new employees.
The duration of the LTC RAPs is an important issue for any evaluation. An evaluation that involves longitudinal analysis would need to consider how much time is needed to implement an intervention to be able to assess its full effect. The programs visited vary widely in time to completion, from 1,680 hours in the shortest program to 3,232 hours (approximately 1.5 years) in the longest; the remaining programs require approximately 2,000 hours. Longer interventions can be more expensive to evaluate than shorter ones, particularly if they involve multiple waves of data collection.
The size of the LTC RAPs visited ranged from eight to 183 active apprentices as of May 2011. These sites were selected because they were the largest sites with active programs that agreed to participate in the site visit analysis. Studying large-scale programs is especially important for assessing whether experimental evaluation options are feasible, that is, whether they can achieve samples of apprentices large enough to detect small differences in outcomes. For example, if one wants to know the effect of the LTC RAPs on annual turnover, one needs sample sizes with sufficient statistical power to detect relatively small differences in the turnover rate of apprentices compared to non-apprentices. Based on national registered apprenticeship data, LTC RAPs have a median size of only six active apprentices, much smaller than all but one of the sites visited. As of May 2011, there were only about seven sites with more than 25 active apprentices. Given that individual LTC RAPs are typically too small for experimental evaluation, one option would be to pool samples of apprentices across multiple programs, but such an approach might complicate efforts to ensure that comparison groups are appropriate.
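As a rough illustration of the sample sizes at stake, the standard two-proportion power formula can be sketched as follows. The 50 percent versus 40 percent annual turnover rates are hypothetical values chosen for illustration, not figures from the sites visited; the constants correspond to a two-sided 5 percent significance level and 80 percent power.

```python
import math

def required_n(p1, p2, z_alpha=1.96, z_beta=0.8416):
    """Per-group sample size to detect turnover rates p1 vs. p2
    (two-sided alpha = 0.05, power = 0.80)."""
    pbar = (p1 + p2) / 2.0
    numerator = (z_alpha * math.sqrt(2.0 * pbar * (1.0 - pbar))
                 + z_beta * math.sqrt(p1 * (1.0 - p1) + p2 * (1.0 - p2))) ** 2
    return math.ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: detecting a drop in annual turnover from 50 to 40 percent
n_per_group = required_n(0.50, 0.40)  # roughly 390 apprentices per group
```

Under these illustrative assumptions, each group would need several hundred workers, which underscores why programs with a median of six active apprentices cannot support an experimental design on their own.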
The sites visited collect limited outcome data. Most sites did obtain data on wages, benefits, tenure, and turnover, but not in a common form across sites. Most sites track annual turnover, but one tracks only monthly turnover. Any future evaluation would involve collecting data beyond what sites currently collect.
Designing valid comparison groups for those entering apprenticeships may be difficult because of the selection process for entrants into the program. Almost all sites have selection criteria for apprenticeships; employees must typically apply or be recommended and subsequently be assessed and selected for an apprenticeship from a subset of all employees. As a result, regular workers not selected to enter apprenticeships would not be a valid comparison group, since unmeasured differences between them and apprentices would likely bias estimates of the program impact. A randomized controlled trial effectively addresses such selection issues. One approach that can control for measured differences across individuals is multivariate analysis, but it cannot capture unmeasured differences in skill level, experience, motivation, and aptitude for service in long-term care. Another approach is to use natural experiments in which a process that is independent of individual characteristics selects who participates and who does not; the resulting assignments can be random from the perspective of unmeasured differences in individual characteristics. The use of quasi-experimental methods for evaluating LTC RAPs may also offer options for drawing appropriate comparison groups, but they do not always do as well at controlling for unmeasured differences between treatment and control groups.
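The selection problem described above can be made concrete with a small simulation. In this illustrative sketch, all quantities (an unmeasured "motivation" trait, the outcome index, and a true program effect of 5 points) are hypothetical; the point is that comparing selected apprentices to non-selected workers overstates the impact, while lottery-style assignment recovers it.

```python
import random
from statistics import mean

random.seed(0)
TRUE_EFFECT = 5.0  # hypothetical true program impact on an outcome index

def estimate_impact(n=20000, randomized=False):
    """Difference in mean outcomes, treated minus untreated,
    under two assignment rules."""
    treated, control = [], []
    for _ in range(n):
        motivation = random.gauss(0.0, 1.0)  # unmeasured worker trait
        if randomized:
            selected = random.random() < 0.5   # lottery assignment
        else:
            selected = motivation > 0.5        # sites pick motivated staff
        outcome = 10.0 + 3.0 * motivation + random.gauss(0.0, 1.0)
        if selected:
            outcome += TRUE_EFFECT
        (treated if selected else control).append(outcome)
    return mean(treated) - mean(control)

naive = estimate_impact(randomized=False)  # biased upward by selection
rct = estimate_impact(randomized=True)     # close to TRUE_EFFECT
```

Because motivation drives both selection and the outcome, the naive comparison attributes part of the motivation gap to the program; randomization breaks that link.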
Another factor identified in this report that is relevant to an evaluation is the spillover effects of the LTC RAPs. In most of the sites visited, employers designed the program so that apprentices who complete the apprenticeship serve as mentors to the remaining non-apprentice staff. This intentional spillover of the intervention to non-intervention employees makes comparison of apprentice outcomes to non-apprentice outcomes within a site extremely difficult. An evaluation would need a comparison group outside of the sponsor's organization, or at least another of the sponsor's facilities not subject to the intervention, to address this issue. Gaining the cooperation of organizations not involved in apprenticeship in a future evaluation may be difficult. At the same time, employers may greatly value any positive impacts of these spillovers of their LTC RAPs.
Because the sponsors bear the costs of investing in and operating LTC RAPs and because their decisions will largely determine the scale of the LTC RAPs, evaluating the gains and losses for employers using the LTC RAP model is critical. There are assessment tools for gauging the employer perspective, but usually not in an experimental or comparison group context. The feasibility of evaluation options involving employers will be examined in a subsequent report.