The conference covered a diverse range of organ donation activities and evaluations, with performance indicators of varying proximity to organ donation rates. Conference deliberations recognized the practical challenges of evaluating programs in a wide variety of health care and social environments, including the resources needed to implement rigorous evaluations. Organizers of the conference sought to help program managers and researchers understand what might be needed to improve evaluations of their programs and better demonstrate what does and does not yield greater organ donation.
Presentations on the first day focused primarily on pre-event activities, i.e., public awareness and educational activities, and the challenges of evaluating such programs relative to measures of organ donation. Presentations on the second day focused on post-event activities that tend to be more proximal to the "bull’s-eye" of performance indicators (Exhibit 3), but still face significant methodological challenges. Although the specific evaluation methodologies may differ for pre- and post-event activities, both types of activity face similar evaluation challenges. The conclusions reached by conference participants regarding program evaluation are distilled as follows.
- Evaluations of the impacts of program interventions need to be more rigorously designed in order to determine the causal links between the interventions and the appropriate performance indicators. More rigorous evaluation design will rule out factors that may confound causal effects, e.g., differences in geographic or socioeconomic attributes that may affect performance indicators independently of the interventions themselves. Among the design attributes that contribute to establishing causality are:
- prospective studies
- careful identification of populations or target groups (including stratification into subgroups as appropriate)
- contemporaneous and otherwise well-matched control groups
- random assignment of interventions
- sample sizes adequate to detect any true causal relationships between interventions and changes in performance indicators.
- Researchers must demonstrate an impact on the intended target population and a change in the associated performance indicators. The performance indicator(s) chosen to measure the impact of a program intervention should be as proximal as possible to the organ donation rate. Clearly, making the causal connection to organ donation rates is difficult or impractical for many programs, particularly those involving pre-event interventions. To the extent that causal links to the organ donation rate can be demonstrated for less proximal measures, whether pre-event measures such as public awareness or the number of people registered as organ donors, or post-event measures such as referral or request rates, those measures become more useful.
- The target groups of evaluation and the timeframes of evaluation need to be commensurate with the chain of events or other stages of progress from initial program intervention through organ donation and follow-up. For example, programs intended to change the behavior of younger people may have to be evaluated over longer periods of time and include their family members to determine if the intervention affected donation-related behaviors.
- Potentially useful evaluative paradigms such as the transtheoretical model of behavior change, and tools such as survey instruments that have been developed or applied in other fields should be validated in organ donation settings, e.g., post-event in hospitals, and in representative populations, e.g., with donor and non-donor families.
- To improve their "generalizability" or external validity, program evaluations should be conducted in multiple geographic regions and socioeconomic groups.
- Organ donation researchers could benefit from increased collaboration with researchers in other fields, including evaluation design experts, statisticians, health services researchers, health economists, and other academic researchers. For example, statisticians can assist with "power calculations" to determine adequate sample sizes and can identify appropriate statistical tests, e.g., non-parametric statistics and multivariate analysis. Economists can help identify and quantify the direct and indirect costs of programs and conduct cost analyses. Engaging such experts in these efforts will strengthen the longer-term evaluation capacity in the organ donation community.
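As an illustration of the kind of power calculation such collaboration might involve, the sketch below uses the standard normal-approximation formula to estimate the per-group sample size needed to detect a change in a binary outcome such as consent rate. All rates and thresholds here are hypothetical, chosen only for illustration.

```python
# Sketch of a two-proportion sample-size ("power") calculation using the
# normal approximation. All rates below are hypothetical illustrations.
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size to detect a change from rate p1 to rate p2
    with a two-sided test at significance level alpha and the given power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    p_bar = (p1 + p2) / 2                          # pooled proportion
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: a baseline consent rate of 50%, with the
# intervention hoped to raise it to 60%.
n = sample_size_two_proportions(0.50, 0.60)
print(f"{n} families per group")  # smaller effects require larger samples
```

Note how quickly the required sample grows as the expected effect shrinks; this is one reason multicenter collaborations, discussed below, are often needed to evaluate modest but meaningful improvements.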
- Given considerable tradeoffs in the costs and outcomes of programs for improving organ donation, cost-benefit analysis and related economic analyses should be used to compare programs to improve organ donation and to demonstrate their value relative to other types of health care programs.
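As a minimal sketch of such an economic comparison, the example below computes a cost-per-additional-donor figure for two hypothetical programs. The program names, costs, and donor counts are invented for illustration only; a real analysis would also require the causal attribution of donors to programs discussed above, plus indirect costs.

```python
# Minimal cost-effectiveness comparison of two hypothetical programs.
# All program names, costs, and donor counts are invented for illustration.

programs = {
    # name: (total program cost in dollars, additional donors attributed)
    "hospital staff training": (250_000, 40),
    "public media campaign": (400_000, 25),
}

for name, (cost, added_donors) in programs.items():
    cost_per_donor = cost / added_donors
    print(f"{name}: ${cost_per_donor:,.0f} per additional donor")
```

Even this toy comparison shows why the denominator matters: the entire exercise depends on a credible, causally defensible estimate of donors attributable to each program.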
- Program evaluations should not overlook what one presenter called "formative evaluation." That is, programs should specify implementation milestones and provide the means to measure progress against these.
- Organ donation programs should increase collaboration in evaluation. Researchers should become more familiar with other programs in similar areas and evaluations, including those reported in the literature and other sources. Organizations involved in efforts to improve organ donation should engage in larger-scale collaborative efforts to plan and implement programs, e.g., through multicenter evaluations, registries, and related data collection and sharing efforts. More efforts need to be undertaken to refine pre- and post-event measures, and to further establish causal links between these measures and organ donation rates. Some of these efforts can be coordinated at a national level, e.g., through organizations like UNOS, and other nationally active organizations.
- Evaluation findings should be used to improve the programs that were evaluated and should be disseminated more widely for incorporation into other efforts to improve or ensure the quality of organ donation programs.