The feasibility of evaluating DHHS programs operated under Tribal self-governance depends on several factors. Discussions with the Technical Working Group, Tribal representatives who participated in the discussion groups at national conferences, and representatives from Tribes/Tribal organizations that participated in the project site visits identified the following issues as important to the assessment of feasibility.
Willingness of Tribes to Participate in an Evaluation. The extent to which Tribes may be willing to participate in an evaluation is a key issue for this study. An evaluation in which only a handful of Tribes participated would likely produce findings that are not representative of all DHHS programs managed by Tribes under self-governance and would, thus, have limited value.
Many Tribal representatives who contributed to this project emphasized that any evaluation should be structured as an evaluation of DHHS programs managed by Tribes under self-governance, rather than as an evaluation of self-governance itself. There is concern that an evaluation of self-governance could be misconstrued, or its findings used, to justify reducing or eliminating self-governance programs. To allay those concerns and encourage Tribes to participate, the stated evaluation objectives would need to make clear that DHHS programs, rather than self-governance, are to be evaluated.
Discussions with the Technical Working Group and others also stressed that it would be inappropriate to design an evaluation that applied a standard set of outcomes to DHHS programs operated under self-governance. A principle of self-governance is that Tribes should have the flexibility to set objectives and design programs to meet each Tribe’s priorities, which may differ from the priorities generally set for Federal programs. Tribes might be less likely to participate in an evaluation that imposed a standard set of outcomes, and more likely to participate in one that permitted each Tribe to set specific program goals that were then examined to determine whether, and to what extent, those goals were achieved.
In addition, Tribes would likely be more willing to participate if: 1) there is a perceived benefit to Tribes from the evaluation; 2) there is extensive consultation on the evaluation objectives, issues, and data to be collected; and 3) the costs of data collection are minor or are borne by the Federal government. Tribes might also be more willing to participate if clear and detailed agreements were in place indicating that evaluation data collection and reporting would be limited to the evaluation period and would not continue beyond it. Finally, an evaluation structured to report findings across all participating Tribes, or across large subsets of Tribes, would be more likely to encourage participation than one that reported on individual Tribes.
Design of Appropriate Comparison Groups. Evaluation methodology requires that the impacts and outcomes of programs being evaluated be compared to the impacts and outcomes that would have occurred in the absence of the new program. Design of appropriate comparison groups is a critical evaluation feasibility issue.
Two types of comparison groups are generally used in a rigorous evaluation methodology: 1) pre-post comparisons to examine how the new program differs and what impacts it had, compared to the situation prior to the new program; and 2) external comparisons to control for underlying trends and changes that may affect the program being evaluated and the results produced by the evaluation.
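The logic of combining the two comparison types described above can be illustrated with a simple difference-in-differences calculation: the change observed in the evaluated programs is netted against the change observed in an external comparison group over the same period. The sketch below uses purely hypothetical figures for illustration; no actual program data are implied.

```python
def difference_in_differences(pre_program, post_program,
                              pre_comparison, post_comparison):
    """Estimate a program effect as the pre-post change in the evaluated
    programs minus the pre-post change in the external comparison group,
    which controls for underlying trends affecting both groups."""
    program_change = post_program - pre_program
    comparison_change = post_comparison - pre_comparison
    return program_change - comparison_change

# Hypothetical outcome measure (e.g., a service-delivery rate) before and
# after a demonstration period, for participating and comparison groups.
effect = difference_in_differences(
    pre_program=60.0, post_program=75.0,       # +15 among evaluated programs
    pre_comparison=58.0, post_comparison=63.0,  # +5 underlying trend
)
print(effect)  # 10.0: estimated effect net of the underlying trend
```

A pre-post comparison alone would attribute the full +15 change to the program; subtracting the comparison group's +5 change isolates the portion not explained by general trends.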
For the evaluation of DHHS programs that may be authorized by Congress for Tribal management under a demonstration, constructing a pre-post comparison may be problematic if some participating Tribes did not manage the program under contract prior to the demonstration. In this case, there may be no “pre-” data for comparison at all, or the “pre-” data may be available only for State-managed programs, which may be more generously funded or otherwise inappropriate as a baseline for evaluating the program under Tribal management. A feasible evaluation strategy in this case might limit the participating Tribes for specific programs being evaluated to those that previously managed the program under contract arrangements.
Appropriate external comparison groups may also be difficult to define for similar reasons, but could be constructed based on adjustment algorithms that account for differences in funding levels and program objectives. A more important external comparison group issue was raised by Tribal representatives who provided input to the study: there is considerable concern that an evaluation of DHHS health programs operated under self-governance could result in findings that are divisive and politically problematic, if compacted programs were compared with direct service programs.
Data Availability. Evaluation research requires that data be available for the pre-intervention period, for the post-intervention period, and for appropriate comparison groups. Based on findings from the site visits and the discussion groups, pre-intervention data would likely be available for new DHHS programs that might be authorized by Congress for inclusion in a demonstration, at least for Tribes that currently manage those programs under contract. For Tribes that chose to participate in the potential demonstration but did not previously manage specific programs under contract, it would be necessary to create a pre-demonstration baseline that could be used to evaluate the new DHHS programs managed under self-governance.
For DHHS health programs currently managed through compacts with Tribes, Indian Health Service data could likely be sufficient to establish a pre-compact baseline for use in evaluating these programs. Similarly, IHS data would be available for the evaluation period and for external comparison direct service Tribes.
In general, it would be possible to develop data collection protocols and strategies to obtain the data necessary for evaluation of DHHS programs managed under Tribal self-governance. The complexity and costs of such data collection would vary depending on the specific evaluation issues of interest, the unit of observation for which data were desired, and the comparison groups used to evaluate the programs.
Costs to DHHS and to the Participating Tribes. While it would be possible to design an evaluation of DHHS programs managed by Tribes, and to collect the necessary data, the costs of the evaluation and data collection activities could be high enough to render the evaluation infeasible, given that DHHS has limited funds available for research and evaluation. In addition, if a particular evaluation strategy imposed significant costs and data reporting burdens on Tribes, few Tribes would be likely to agree to participate. Alternatively, if DHHS assumed full responsibility for the data collection and reporting costs incurred by Tribes, the cost of the evaluation to DHHS would increase.
Trade-offs Between Costs and Usefulness of an Evaluation. With any evaluation, the comprehensiveness, number of sites, types of comparisons, and amount of primary data collection affect costs. A comprehensive evaluation, covering a wide range of issues, a large number of sites, both pre-post and external comparisons, and extensive primary data collection, would likely be costly but would also produce reliable and defensible results. A very limited evaluation, with a few priority issues, a limited number of sites, pre-post comparisons only, and minimal primary data collection, would be significantly less costly but might produce findings of limited value.