Assessing the Field of Post-Adoption Services: Family Needs, Program Models, and Evaluation Issues
Case Study Report

8.2 Types of Evaluation Activities

11/01/2002

Each PAS program monitored client characteristics and services delivered, but outcome evaluations were less common.

The types of evaluation activities employed depended on the services being assessed and the expertise available to the PAS program for evaluation design and analysis. Among evaluation activities, needs assessment can be considered formative evaluation, designed to guide initial program design and to inform modifications during early implementation. Although states may have been guided by informal reports of adoptive families' service needs from child welfare and other service providers, only Georgia and Oregon reported having conducted formal needs assessments of adoptive parents as part of their program planning processes. Needs assessments appeared to be of wide interest nationally: excluding the five case-study states, 23 of 31 states responding to the ILSU survey reported that they were conducting needs assessments or had done so since 1990.

All five states collected the kinds of data normally used for process evaluation: characteristics of clients served and services delivered. Oregon and Massachusetts were far more sophisticated in their collection and use of process evaluation data than were the other three states. Both states developed databases of client contacts and other events and used external evaluation providers to analyze the resulting data. Both databases were detailed, capturing the nature and content of each service delivery event and the characteristics of the clients involved, although Oregon did not record contacts that were handled in a single telephone call. The Massachusetts database was web-based, allowing direct access by regional providers. These two states used their event-tracking databases to support analyses of training audiences, specific types of services provided, hours of service provided in various service categories, and the household composition of client families.

Assessments of client satisfaction were also used in each state, again with varying approaches and levels of rigor. Across the five states, client satisfaction surveys were used for most services, including information and referral, tutoring programs, respite care, family support groups, training, and counseling. However, response rates for these surveys were low enough (ranging from 36% to 86%) to raise concerns about the validity of the resulting data. Interviewees mentioned using the surveys, even if not aggregated, to inform their sense of how they were doing and what services might be added. Massachusetts also used group interviews and focus groups in its assessment of client satisfaction.

Formal outcome evaluations were conducted less frequently than process and client satisfaction measures. Georgia and Virginia used clinical instruments administered pre- and post-service to assess counseling services. Georgia's crisis intervention program, which provided intensive case management to families in crisis, administered the Child and Adolescent Functional Assessment Scale at intake, at three months, and at exit, and also monitored disruption rates among its clients. Regional PAS providers in Virginia reported using the Achenbach Child Behavior Checklist, the Current Feelings About Relationship with Child, and the Cline/Helding Adopted and Foster Child Assessment at intake and closing, although not all providers used all of these instruments.

Other forms of outcome assessment were mentioned. Massachusetts evaluators reported using data from their information system to analyze the degree to which goals identified in the course of information and referral calls were actually attained. Virginia planned to monitor the extent to which disruptions occurred among families served by the PAS program. PAS program staff in three states mentioned reviewing their own treatment narratives to assess how families were doing and whether they appeared to be improving. This informal assessment is worth noting because staff appeared to be implementing it without any direction from above, which suggests they may have been more comfortable with individual clinical assessment than with program evaluation activities, perhaps reflecting the emphasis of their own professional training or their limited faith in the usefulness of other evaluation activities.
