Several themes emerged from the case studies that might inform evaluation approaches for CSC and other ESMI programs being implemented across the country.
Model Fidelity. Most of the states started with a particular model of ESMI program, such as CSC, PIER, or PREP, but the extent to which the models differ in meaningful ways, and the extent to which each model is implemented with fidelity, remains unclear. We expect that variation across programs, even those implementing the same model, will increase over time as programs adjust to the constraints of their local environments. Nonetheless, the programs still share a family resemblance as CSC programs. An evaluation will need a characterization of the CSC or other model, but we recommend that this characterization allow for the natural variation across programs that can be expected to arise over time, rather than rest on a strictly defined set of model features. The model can be defined in sufficient detail to support meaningful inferences about its effectiveness from a multisite evaluation while still allowing the cross-site variations needed to fulfill the model's core functions in different settings. In fact, evaluations of specific model adaptations may contribute critical information to a nascent evidence base on the feasibility and effectiveness of community-based ESMI programs in the United States. In better-resourced settings, formative evaluations may provide valuable insight into the drivers of any such adaptations.
Process Evaluation Domains and Measures. Many of the activities of the ESMI teams that should be tracked in a process evaluation are new to the agencies hosting the model and may present challenges for reliable measurement. The broad and individualized nature of the program adds further challenges because there is no single set of providers or services that should be provided to all clients. Moreover, many of the services depend on collaborations with external partners, such as schools or job placement sites. Tracking the activities of widely dispersed ESMI team members and external partners will be a daunting challenge that should be considered carefully in the design of the evaluation. Measures that focus on the functions expected of CSC programs, rather than on specific activities, are likely to be important. In addition, methods for collecting data on processes that describe interactions between clients and the various provider agencies will need to be considered carefully for each program.
Outcomes Evaluation Domains and Measures. As with any evaluation that includes outcome measures, evaluation designers and policymakers should be careful in their selection of outcome domains and cautious in drawing inferences about the effects of the intervention on client-level outcomes, given that the latter may be influenced by factors outside the providers' control. That said, a focus on critical short-term outcomes (including suicidal behavior, symptom stability, substance use, and schooling/employment) is important, as these are directly affected by components of CSC and other ESMI models. Most of the programs we visited had the capability to collect primary data on these outcomes and/or to tap into data collected for other purposes by state agencies and other entities.
Monitoring the Referral Process. In all of the programs we visited, the referral process was in flux to some degree. This was not a mark of failure but a predictable part of program evolution over time. The newer programs had relatively narrow referral networks, in part because they wanted to control their rate of growth and in part because they did not want to invest in new referral sources while still focusing on delivering a new set of services to their first clients. While the strength of these referral networks is a likely target for evaluation, the program's stage of development should be taken into account. In addition, if the evaluation will have a population focus (e.g., an attempt to measure the impact of a CSC program on the course of psychotic disorders and the development of disability), then some method will be needed for assessing the referral process relative to the total population of new-onset cases of psychotic disorder in the program's catchment area. For instance, new cases of psychotic disorder could be identified in Medicaid claims data or hospital discharge data and compared with referrals to the CSC program.
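The claims-to-referral comparison described above reduces, in its simplest form, to matching two lists of individuals and computing a coverage rate. The sketch below illustrates this with hypothetical identifiers and counts; the function name, data, and matching-by-ID approach are illustrative assumptions, not a method specified in the case studies (in practice, linkage would require careful handling of identifiers and privacy protections).

```python
# Illustrative sketch (hypothetical data): estimating what share of incident
# psychotic-disorder cases in a catchment area reached the CSC program.

def referral_coverage(incident_case_ids, referred_ids):
    """Return (n_incident, n_overlap, coverage_rate) for a catchment area.

    incident_case_ids: IDs of new-onset cases found in claims/discharge data.
    referred_ids: IDs of individuals referred to the CSC program.
    """
    incident = set(incident_case_ids)
    referred = set(referred_ids)
    overlap = incident & referred  # cases that both occurred and were referred
    rate = len(overlap) / len(incident) if incident else 0.0
    return len(incident), len(overlap), rate

# Hypothetical example: 8 incident cases identified in Medicaid claims,
# 5 of whom appear among program referrals.
incident = ["p01", "p02", "p03", "p04", "p05", "p06", "p07", "p08"]
referrals = ["p02", "p03", "p05", "p06", "p08", "p99"]  # p99: referred from outside the claims cohort
n, k, rate = referral_coverage(incident, referrals)
print(n, k, round(rate, 3))  # 8 5 0.625
```

A low coverage rate relative to the expected incidence would flag gaps in the referral network, while referrals absent from the claims cohort (like "p99" above) would point to referral sources outside the data system used for the denominator.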
Control Group. Another likely challenge for an evaluation of CSC and other ESMI programs aimed at demonstrating effectiveness will be identifying a control group that provides a valid comparison for assessing the program's impact. With some exceptions, existing evaluation plans have not included a control group, in part because doing so demands greater sophistication in the resources available for the evaluation and in pre-implementation planning. However, a control group should be considered for future evaluations to separate general trends over time from the effects of the ESMI program.
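The rationale for a control group can be illustrated with a hypothetical difference-in-differences calculation; the estimator, outcome measure, and all numbers below are illustrative assumptions, not figures from the case studies.

```python
# Illustrative sketch (hypothetical numbers): why a comparison group matters.
# Without it, the pre/post change at program sites (-10) would overstate the
# program effect, because part of that change (-4) reflects a general trend.

def diff_in_diff(program_pre, program_post, control_pre, control_post):
    """Program effect net of the shared time trend (hypothetical estimator)."""
    program_change = program_post - program_pre
    control_change = control_post - control_pre  # secular trend
    return program_change - control_change

# Hypothetical annual hospitalizations per 100 clients.
effect = diff_in_diff(program_pre=30, program_post=20,
                      control_pre=28, control_post=24)
print(effect)  # -6: naive pre/post change of -10 minus a -4 background trend
```

The calculation assumes the control group would have followed the same trend as the program group absent the intervention, which is exactly the kind of assumption that pre-implementation planning must make defensible.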