Development of a Quality Measure for Adults with Post-Traumatic Stress Disorder. NOTES

05/01/2019

  1. For a recent synthesis of the evidence on PTSD treatment, see Institute of Medicine (2012, chapter 7).

  2. TEP members were experts in psychotherapeutic treatments for adults with PTSD and were not members of the original TAG. They were selected to assist in identifying therapeutic elements and creating the initial measure.

  3. The 1-9 rating scale follows the rating practices RAND has used in other, similar prioritization and appropriateness-rating exercises (AHRQ n.d.; Brook et al. 1990; Fitch et al. 2001).

  4. We also fit the same models using the more conventional WLSMV estimator, which relies on large-sample theory and assumes a normal distribution. Not surprisingly, given the comparatively small sample of clinicians, supervisors, and clients, we encountered problems identifying the factor model with the WLSMV estimator. Results from both models are presented in Appendix L.

  5. In preliminary analyses, we calculated inter-rater agreement using the Kappa statistic and observed the "Kappa paradox," in which Kappa can yield a low value even when raters show high observed agreement, typically when most subjects fall into one category. The AC1 statistic was designed to address this limitation of the Kappa statistic. See Appendix M for further information on the Kappa and AC1 statistics.
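The paradox described in note 5 can be reproduced with a small hypothetical 2x2 table (the counts below are illustrative only, not taken from the study data):

```python
# Illustration of the "Kappa paradox": two raters, two categories,
# high observed agreement but skewed marginal distributions.
# Hypothetical cell counts: both "yes"=80, rater1-only "yes"=10,
# rater2-only "yes"=5, both "no"=5.
a, b, c, d = 80, 10, 5, 5
n = a + b + c + d

p_obs = (a + d) / n                      # observed agreement = 0.85

# Cohen's Kappa: chance agreement from the product of the marginals
p1_yes, p2_yes = (a + b) / n, (a + c) / n
pe_kappa = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
kappa = (p_obs - pe_kappa) / (1 - pe_kappa)

# Gwet's AC1: chance agreement from the average "yes" propensity
pi = (p1_yes + p2_yes) / 2
pe_ac1 = 2 * pi * (1 - pi)
ac1 = (p_obs - pe_ac1) / (1 - pe_ac1)

print(round(kappa, 2), round(ac1, 2))    # 0.32 0.81
```

Despite 85 percent observed agreement, Kappa is only about 0.32 because the skewed marginals inflate its chance-agreement term, while AC1 remains high (about 0.81).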

  6. We were unable to calculate completion times for paper surveys.

  7. At the beginning of data collection, the supervisors in one site mistakenly completed 22 surveys based on review of the clinicians' case notes instead of audio-tape review. We calculated inter-rater reliability with and without the 22 surveys. Most agreement measures were negligibly affected by the exclusion, although one item (regarding the therapist struggling to manage time) decreased dramatically, from 0.81 to -0.67. Overall, these results indicate that the mistakenly completed surveys did not substantially bias the results of our inter-rater agreement analysis.

  8. To better understand what the AC1 for two raters conceptually represents, imagine that all subjects who are classified into identical categories by pure chance are identified and removed from the population of subjects. This creates a new, trimmed population in which agreement by chance would be impossible. The AC1 coefficient is the proportion of subjects in this trimmed population on which the raters agree (Gwet 2008).