A rigorous assessment of the effectiveness of state-initiated technical assistance programs is not possible at this time for several reasons:
Most programs have been in effect for only brief periods and have not had time to collect the type of information necessary for a rigorous impact analysis.
None of the technical assistance programs we reviewed was implemented in a vacuum; each operated in combination with other quality improvement initiatives, making it difficult to isolate the specific impact of the technical assistance programs.
There is no consistency among the study states in how quality improvement is defined and measured, and no consensus in the literature about which quality measures are most appropriate.
In addition, while states expressed a general interest in measuring the effectiveness of their quality improvement efforts, most have not developed a systematic evaluation plan and have been unable to identify acceptable criteria for measuring impact. Although states intuitively believe that technical assistance (TA) has a positive effect, uncertainty about an appropriate measure, combined with the unknown influence of other ongoing programs, may mean that the impact of these TA programs on quality is never known.
As an example, Florida officials said they have considered looking at changes in deficiencies but have not been able to arrive at a suitable measure. A decrease in the number of deficiencies cited, a decrease in overall scope and severity, and a decrease in the number of citations have all been considered as possible measures, but none has proven reliable. The known inconsistency of survey results on these and similar measures adds to the state's reluctance to use any of them. Florida is also aware of the impact staff turnover has had on program effectiveness and sustainability, making officials hesitant to begin an evaluation that does not take turnover into account.