As in many programs, a lack of funding and expertise, and interference with program activities are barriers to evaluation.
Funding. Funding was the evaluation barrier most frequently mentioned by state adoption program managers and PAS coordinators and providers. Evaluation requires substantial resources, and program coordinators frequently place higher priority on meeting service needs than on evaluation. Funding agencies contribute to this situation if they require evaluation without specifying the level at which it is to be done or do not allocate adequate resources for both service delivery and evaluation. Among the case-study states, the one with the most sophisticated evaluation allocated approximately 5 percent of its budget to evaluation, hardly adequate for a new program area in which service delivery models and evaluation methods are not well established.
Evaluation expertise. Contracting with an external evaluator requires a greater commitment of program funds but provides access to a higher level of expertise than is likely to be found among program coordinators or staff. Even if a PAS program is willing to commit the resources to contracting with an external evaluator, however, finding an evaluator with adequate understanding of adoption issues may be difficult. Given the recent development of PAS programs, there is neither a large base of published research nor an extensive network of experienced researchers.
Interference with program activities. PAS program staff were concerned that the time required for evaluation activities added to their workload and impinged on their interactions with families, without necessarily providing any direct benefit to the family. Program staff were also concerned that evaluation activities introduced a clinical tone to their interaction that was at odds with their efforts to normalize the adoption experience, especially when instruments focused on child and family problems.
Limited value to program. PAS coordinators or providers rarely found evaluation findings to be useful in their practice. While data were used to quantify the volume of services delivered or families' satisfaction with the program, evaluation was not seen as a source of new and useful input on substantive questions of program design. If evaluation findings do not inform program development, staff are less likely to be willing participants in evaluation activities.