We are unable to conclude that the family preservation programs in these three states achieve the objective of reducing placement of children in foster care.3 A summary of analyses of placement rates at various points in time following random assignment is shown in Table 9-1. In none of the three states were there significant differences in placement rates over time for the samples as they were originally randomly assigned (the "primary" analysis). Since some of the families in the control group were actually provided family preservation services ("violations") and some of the families in the experimental group did not receive services or received only minimal services ("minimal service" cases), we also conducted analyses in which we dropped those cases ("secondary" analyses). Results of the secondary analyses were quite similar to those of the primary analyses, also showing no significant differences between the groups.
Since it was thought that the samples included families that did not fit the conception of cases best suited for the program model, we attempted to identify subgroups that might better fit criteria for referral. This selection was based on the idea that the service is most useful for families in crisis. Hence, we focused on cases referred in the course of an investigation of abuse or neglect and cases with recent substantiated allegations of maltreatment, on the grounds that these groups of cases might reflect families in crisis. These "refined groups" analyses also failed to show differences between the experimental and control groups on placement rates over time.
In Kentucky and Tennessee, we obtained data from case records and caseworkers on placements with relatives that were not recorded in the administrative data. Adding those data to our analyses, there were again no differences between the experimental and control groups. Although not statistically significant, some of the differences between groups appear to be fairly substantial, particularly at the one-year point. However, there is no consistent pattern to these differences; sometimes the experimental group percentage is higher, and sometimes it is the other way around.
Since these programs were intended to prevent the placement of children, the target group for the services was families in which at least one child was "in imminent risk of placement." We found that, by and large, the families served were not in that target group. This is shown by the placement rate within a short period of time in the control group, indicating the placement experience in the absence of family preservation services. In all three states, the placement rate in the control group within one month (a liberal definition of "imminent") was quite low. It would, therefore, have been virtually impossible for the programs to be effective in preventing imminent placement, since very few families would have experienced placement within a month without family preservation services.4 It should be noted, however, that the rates of eventual placement in the control group were higher, about one-fifth to one-fourth within one year. Hence, it would have been possible for family preservation to have shown effects on placement over time, but those effects were not observed.
One group that seemed to represent better targeting was the "petition" cases in Kentucky. Prior to random assignment, workers submitted petitions to the court for placement or some other court-ordered intervention on 67 families. It might be supposed that this group would be more likely to have children placed. Although more of the control group families in this group experienced the placement of a child within one month than in other subgroups in Kentucky, that proportion was still quite low (10%), suggesting that focusing on groups such as this (cases with court involvement) would not resolve the targeting problem.5
(3) The language we use here is carefully chosen. Technically, we cannot conclude that the programs had no effect.
(4) It would be unreasonable to expect targeting to be perfect, that is, for all cases referred for services to be at imminent risk of placement. But how high should the targeting rate be? The answer to that question depends on the impact of the program, its cost, and the cost of placement. If the impact of the program is large (that is, it substantially reduces the rate of placement in those cases in which placement would otherwise have occurred) or if the program is inexpensive relative to the cost of placement, the targeting rate can be lower. Some algebra indicates that the ratio of the cost of FPS (per case served) to the cost of a placement must be less than the proportion of cases served in which placement is averted. For example, if the targeting rate is .5 and the success rate is .4, then the proportion of cases served that result in placement avoidance is .2 (the product of .5 and .4). The ratio of the cost of FPS to the cost of placement must then be less than .2 for FPS to be cost effective.
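The break-even arithmetic in this footnote can be sketched as follows. This is an illustrative sketch only: the function names are ours, and the only figures taken from the text are the targeting rate of .5 and the success rate of .4.

```python
def placement_avoidance_rate(targeting_rate: float, success_rate: float) -> float:
    """Proportion of all cases served in which a placement is averted:
    the product of the targeting rate (share of served cases truly at
    risk of placement) and the success rate (share of at-risk cases in
    which the program averts placement)."""
    return targeting_rate * success_rate

def is_cost_effective(fps_cost: float, placement_cost: float,
                      targeting_rate: float, success_rate: float) -> bool:
    """FPS is cost effective when its per-case cost, as a fraction of the
    cost of a placement, is below the placement-avoidance rate."""
    return fps_cost / placement_cost < placement_avoidance_rate(
        targeting_rate, success_rate)

# The footnote's example: targeting rate .5, success rate .4.
rate = placement_avoidance_rate(0.5, 0.4)
print(rate)  # 0.2 -- FPS must cost less than 20% of a placement

# Hypothetical costs, for illustration only (not from the text):
print(is_cost_effective(1500.0, 10000.0, 0.5, 0.4))  # True: ratio .15 < .2
print(is_cost_effective(3000.0, 10000.0, 0.5, 0.4))  # False: ratio .30 > .2
```

The sketch makes the footnote's point concrete: lowering the targeting rate shrinks the placement-avoidance rate proportionally, and with it the cost ratio at which the program can break even.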
(5) This group also showed the largest difference between the experimental and control groups in percentages of families experiencing placement at one year, a difference of 15 percentage points favoring the experimental group. However, the difference is not significant. Furthermore, there are other differences in the table almost as large, some favoring the control group.