Several barriers to evaluation, or to more extensive evaluation, were identified in the course of interviews. Some of these may have been inherent in the ethos of PAS or in services for children and families. Others were specifically related to program structure.
|Barriers to evaluation included insufficient funding, lack of expertise, and staff concerns.|
Funding was the barrier mentioned most frequently by both state adoption program managers and PAS coordinators/providers. Funding represented either staff time to conduct evaluations or the ability to contract with external evaluators. Although evaluation was among the activities described in either the RFP or the providers' proposals in each state, it generally was not supported by dedicated staff assignments or budget line items. Massachusetts, the only state with a line item specific to evaluation, spent approximately 5% of its budget on evaluation. Although clearly more extensive than in other states, these efforts still represent a relatively small proportion of funds allocated to evaluation. Interviewees were committed to evaluation, and in some cases their efforts had gone beyond what was required by the state. However, providers noted the burden that evaluation placed on limited staff time.
Evaluation expertise is strongly related to funding. Contracting with an external evaluator requires a greater commitment of program funds but provides access to a higher level of expertise. Although program coordinators frequently have some experience and training in evaluation, it is unlikely to be at the same level as for someone whose primary role is evaluation. Similarly, local staff generally are chosen according to their service delivery skills and may have limited qualifications in the collection and management of evaluation data.
PAS program staff expressed concerns about evaluation in terms of its impact on their interactions with clients. One PAS provider noted that the worlds of social workers and evaluators are very different, and that service providers did not naturally embrace evaluation. Program staff also expressed concern about the amount of client time spent completing assessment instruments, especially if this did not provide any direct benefit to the client. "Families that come to me are under a lot of stress," said one service provider, "and asking them to fill out something with 140 questions just adds to that." Program staff feared that adoptive parents would be put off by clinical instruments that seemed to focus on child and family problems.
|PAS programs presented specific challenges to evaluation design.|
The very structure of these PAS programs creates inherent challenges for rigorous evaluation design. With the exception of one intensive crisis management intervention that used a defined intervention model, the PAS programs described in this report were intended to be extremely flexible in their delivery, responding to the varied needs of individual families rather than offering a predefined bundle of services for a set period of time. PAS coordinators/providers considered this tailoring to be a strength of the programs, allowing them to respond to clients' specific needs in a way that empowers families. Unfortunately, this tailoring was at odds with several assumptions of evaluation design, particularly with respect to outcome evaluation:
- Because families use the service on an as-needed basis, discontinuing and reentering as their concerns change, it is difficult to identify points at which pre- and post-measures should be administered. Measuring at standard intervals (e.g., three months after first contact) would be an alternative, but if families are not in touch with the program at that point, data collection will be more difficult.
- The array of services provided depends on the level and type of needs identified by the family and often changes over time. This diversity of interventions makes variations in satisfaction or other outcomes more difficult to interpret.
- Family needs and concerns are diverse. Evaluators must choose between tailoring their outcome measures to the specific issues of the family (creating greater specificity but smaller groups) or measuring outcomes more broadly (increasing statistical power but with less informative measures).
- Programs generally are family focused in philosophy and service delivery, but outcomes might be specific to one family member. Evaluators must choose how many family members to include in outcomes (e.g., whether to measure all children, all adopted children, or those children presenting the greatest problems). A narrow focus might increase the power to detect outcomes, but it raises concerns about stigmatizing and scapegoating. More inclusive data collection increases respondent burden and might dilute outcomes by including family members who are less involved in services.
The challenges described here are not unique to PAS programs. In HIV-prevention programs, for example (Napp, Gibbs, Jolly, Westover, & Uhl, 2002), limited funding, lack of staff expertise and interest in evaluation, and a poor fit between program models and standard evaluation methods have all been noted as barriers to evaluation. Though few would argue that program models should be radically altered to meet the requirements of rigorous evaluation, several measures could enhance the quality and usefulness of evaluations for PAS programs. A discussion of these measures is the subject of the PAS Evaluation Issues Report.