The essence of evaluation research is asking whether the outcomes of the intervention group differ from those of a comparable comparison group that did not receive the intervention. Developing comparison groups for LTC RAP apprentices and programs will be challenging because of the selection bias inherent in the way apprentices are chosen for the program. Based on our site visits, most programs have selection criteria for apprenticeships; they are not open to all workers, and apprentices are not randomly chosen. Employees must typically apply or be recommended by a supervisor and are selected for qualities such as superior caregiving abilities, intelligence, ambition, and the ability to work with clients and other staff. These attributes are not variables in administrative datasets that could be used to construct a comparison group. As a result, apprentices are likely to differ from otherwise similar workers of the same age, gender, race, education, and employment history. Thus, comparisons between apprentices and comparison groups that do not control for selection bias may measure outcomes that reflect differences in the personal characteristics of the workers rather than the impact of the apprenticeship program.
The classic solution to the problem of selection bias is a randomized controlled trial, which randomly assigns all participants to either the intervention group or the control group; people with unmeasured differences are then equally likely to be in either group. Without random assignment, however, evaluators must seek other ways to distinguish program effects from effects linked to unmeasured individual differences. One approach is to gather more information about potential comparison workers and match people in the intervention with people not in the intervention. Arguably, prior earnings could serve as a proxy for some of the personal characteristics that may matter in the choice of apprentices and could be used to select non-apprentice workers for a comparison group. Gathering information not in administrative databases can be done, but it increases the expense of selecting the comparison group because not all people on whom the information is collected will end up in the comparison group. Moreover, this approach does not guarantee that the biasing factor will be identified and measured. Multivariate analysis can statistically control for many measured differences between the treatment group and the comparison group, but it cannot control for unmeasured differences in skill level, experience, motivation, and aptitude for service in long-term care.
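The contrast between a naive comparison and one that matches on an observed proxy can be sketched with simulated data. The sketch below is illustrative only: the variable names, coefficients, and assumed true effect are invented for the example, not figures from this report. It simulates an unmeasured trait ("motivation") that drives both selection into the apprenticeship and later earnings, then compares a naive difference in mean outcomes with a nearest-neighbor match on prior earnings.

```python
import random

random.seed(0)

TRUE_EFFECT = 1500  # assumed (invented) true apprenticeship effect on annual earnings

# Each simulated worker has an unmeasured trait ("motivation") that raises
# both selection into the apprenticeship and later earnings -- the
# selection bias described above. Prior earnings are observed and
# partially reflect motivation, so they can serve as a proxy.
def make_worker():
    motivation = random.gauss(0, 1)                            # unobserved
    prior = 25000 + 4000 * motivation + random.gauss(0, 2000)  # observed
    return motivation, prior

def earnings_outcome(motivation, prior, treated):
    return prior + 2000 * motivation \
        + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 500)

workers = [make_worker() for _ in range(2000)]

# Supervisors tend to pick motivated workers: non-random selection.
treated = [w for w in workers if w[0] > 0.3]
control = [w for w in workers if w[0] <= 0.3]

t_out = [earnings_outcome(m, p, True) for m, p in treated]
c_out = [earnings_outcome(m, p, False) for m, p in control]

# Naive comparison of mean outcomes: biased upward, because apprentices
# would have out-earned non-apprentices even without the program.
naive = sum(t_out) / len(t_out) - sum(c_out) / len(c_out)

# Matching on prior earnings: pair each apprentice with the non-apprentice
# whose prior earnings are closest, then average the paired differences.
diffs = []
for (m, p), y in zip(treated, t_out):
    cm, cp = min(control, key=lambda w: abs(w[1] - p))
    diffs.append(y - earnings_outcome(cm, cp, False))
matched = sum(diffs) / len(diffs)

print(f"naive estimate:   {naive:7.0f}")
print(f"matched estimate: {matched:7.0f}")
print(f"assumed effect:   {TRUE_EFFECT:7.0f}")
```

In this simulation the matched estimate lands closer to the assumed true effect than the naive one, but it does not recover it exactly: prior earnings are only a noisy proxy for the trait that actually drives selection. That residual gap is precisely the limitation noted above for both matching and multivariate analysis.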
A further complicating factor is that most programs are designed so that apprentices who complete the program serve as mentors to non-apprentice staff. While this is a strength of the program and builds the business case for the LTC RAP, it means that non-apprentices in the same facility or agency are not free of the potential impact of the apprenticeship program and are therefore inappropriate as a comparison group. To address this issue, an evaluation would need a comparison group from outside the sponsor's organization, or at least from another of the sponsor's facilities or agencies not participating in the intervention.
Another issue is the difficulty of convincing members of the comparison group to participate in the evaluation, since they are not participating in the LTC RAP. For administrative datasets, obtaining cooperation is not a barrier to the study because the permission of employers and workers is not required for research purposes. However, developing a comparison group for a survey of employers or apprentices may be difficult. While employers sponsoring LTC RAPs and their apprentices presumably have some interest and motivation in participating in a study about the LTC RAP, employers in comparison groups may not be eager to provide information on their training practices to outside organizations and may be reluctant to release confidential contact information for employees. For their part, direct care workers may see little reason to participate in the survey. Offering survey respondents a modest financial incentive is a common strategy to increase participation.
Finally, an evaluation would ideally compare apprenticeship with standard training alone, unaccompanied by any enhanced training. While minimal training is the norm in long-term care, some providers offer more extensive training that shares elements of the LTC RAP. Indeed, some of the providers in the case studies reported that they had enhanced training programs before they implemented the LTC RAP. To the extent that the comparison group receives enhanced training, it will be more difficult to detect the influence of the LTC RAP.