Outcome measures would assess whether individuals receiving psychotherapy experience improvements in their symptoms and functioning across a broad range of life domains, including employment, school functioning, relationships, and engagement in the community (Hoagwood et al. 2012). The use of such measures would complement efforts already underway to encourage the measurement of patient-reported outcomes in other areas of health care. For example, CMS and the Office of the National Coordinator for Health Information Technology are supporting efforts to develop patient-reported outcome measures for use in the future stages of the CMS EHR incentive program. These measures address medical and behavioral health conditions. In addition, HHS recently supported an NQF project that sought to identify the factors that should be considered for selecting patient-reported outcome measures for performance improvement and accountability (NQF 2013). Finally, the ACA established the Patient-Centered Outcomes Research Institute to conduct comparative effectiveness research focused on outcomes important to patients. Many of these projects employ or are testing measurement strategies that offer feedback to providers and consumers in an effort to support their ability to make treatment decisions.
As in other areas of health care, measuring psychotherapy outcomes could involve the use of repeated assessments to track improvements over time and progress toward reaching the consumer's goals. These assessments could be completed directly by the consumer. With consumer consent, family members or other individuals important to the consumer, such as case managers or teachers (particularly for children or adolescents receiving therapy), could also complete assessments since these reporters may offer different perspectives (Brown et al. 2008, 2007). Providers could use information from these repeated assessments to adjust treatment in response to an individual's progress, and the individual receiving treatment could use the information for self-monitoring and to make treatment decisions (for example, inquiring about more intensive treatment options or changing providers). Such data could be stored in medical records/EHRs or in other electronic systems, such as web-based systems. In the aggregate, health systems can use the data from these repeated assessments to monitor how consumers respond to treatment and to identify opportunities for quality improvement. Such data can also facilitate comparisons of outcomes across providers and health systems. This approach of using repeated assessments to measure outcomes, tailor treatment, and improve the quality of care--referred to as measurement-based care or routine outcomes monitoring in the literature (Boswell et al. 2013; Harding et al. 2011)--is common for physical health conditions, such as diabetes and hypertension, for which measurements are taken regularly and treatment is adjusted accordingly.
There are several examples of measurement-based care or routine outcome monitoring systems in mental health care (Drapeau 2012). Here we provide a brief description of some of these systems to illustrate how they work in practice.
Perhaps the best-known outcomes monitoring effort in mental health care is the Depression Improvement Across Minnesota Offering A New Direction (DIAMOND) project, in which participating health plans pay certified practices a flat monthly rate for providing a bundled set of services for depression or dysthymia. As part of the initiative, practices administer the PHQ-9 during the consumer's first visit and again at six months and 12 months after the initial visit (AHRQ 2013b). Practices receive monthly performance reports that include how many consumers completed the PHQ-9, symptom remission rates, and how many consumers are making progress toward feeling better (defined as at least a 50 percent reduction in the baseline PHQ-9 score). These measures were incorporated into the work of Minnesota Community Measurement, which maintains a website that publicly reports these measures for clinics participating in DIAMOND and other primary care and behavioral health clinics across the state (http://www.mnhealthscores.org). The website facilitates comparisons of clinics over time and reports state averages. The Minnesota measures are endorsed by NQF and included in the CMS EHR incentive programs. Other delivery systems (Kaiser Permanente and Group Health, for example) and community initiatives (MaineHealth, for example) are adopting these measures and similar measurement and reporting systems.
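The DIAMOND progress definition lends itself to a short computation. The sketch below is illustrative only: the 50 percent reduction rule comes from the description above, but the remission threshold of a PHQ-9 score below 5 is a common convention rather than a detail stated here, and the function names are assumptions.

```python
def phq9_response(baseline: int, follow_up: int) -> bool:
    """Progress toward feeling better, per the DIAMOND definition:
    at least a 50 percent reduction from the baseline PHQ-9 score."""
    if baseline <= 0:
        return False  # no meaningful reduction can be computed
    return follow_up <= baseline / 2

def phq9_remission(follow_up: int, threshold: int = 5) -> bool:
    """Remission is commonly defined as a PHQ-9 score below 5;
    the exact threshold here is an assumption, not from the text."""
    return follow_up < threshold

# Example: baseline 18, six-month score 8 -> response but not remission
print(phq9_response(18, 8), phq9_remission(8))  # True False
```

A monthly performance report of the kind DIAMOND practices receive could then be built by aggregating these per-consumer flags into completion, response, and remission rates.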
Other health plan initiatives have also used repeated outcome assessments that give feedback to providers and/or consumers. For example, Optum Behavioral Health, a large national managed behavioral health organization with more than 100,000 providers in its network, uses the Algorithms for Effective Reporting and Treatment (ALERT) system. This system combines data from a consumer-reported Wellness Assessment with claims data to track consumer improvement and identify individuals who are at risk for poor outcomes. The ALERT system identifies consumers with "high distress" or at risk for substance abuse who demonstrate poor progress early in treatment. The one-page Wellness Assessment contains items derived from validated tools that assess symptom severity, functional impairment, self-efficacy, substance abuse risk, and the presence of co-morbid medical conditions. The provider administers the assessment when treatment begins and then again during later visits. With permission from the consumer, Optum mails a follow-up assessment four months after treatment begins.
Several web-based systems for tracking symptoms and functioning have been used for quality monitoring and improvement in mental health care settings. One example is the Treatment Outcome Package (TOP), which tracks mental health symptoms and functioning across 12 clinical domains (Kraus 2012). Providers use this system to email the consumer a link to an online questionnaire that takes 3-5 minutes to complete. The system scores the questionnaire and generates a short report for the provider. Over time, these reports graphically display changes in scores within each domain and benchmark those scores against the general non-clinical population. The report alerts the provider if the consumer is not making progress as expected and includes a list of suggested treatment practices aimed at improving outcomes. TOP also generates a section of the report designed to be given to the consumer as feedback. Providers also receive monthly aggregate reports that benchmark their risk-adjusted performance against similar professionals. TOP has been used widely across health care systems. For example, Blue Cross and Blue Shield of Massachusetts incentivized the use of TOP by requiring that providers achieve certain response rates on the tool in order to receive their annual provider fee increase. This was not without controversy and pushback from providers, but ultimately, the TOP was administered over 40,000 times in the first six months of the program (Youn et al. 2012; Liptzin 2009; Blais et al. 2009).
Another example of an approach to outcomes monitoring is the Partners for Change Outcome Management System (PCOMS), developed through the International Center for Clinical Excellence, which was recently listed in the SAMHSA National Registry of Evidence-based Programs and Practices (Reese et al. 2010; Anker et al. 2009; Campbell and Hemsley 2009). PCOMS consists of two brief scales: (1) the Outcome Rating Scale (ORS), which assesses mental health functioning and distress and the consumer's perceived benefit of treatment; and (2) the Session Rating Scale (SRS), which assesses the consumer's perception of the therapeutic alliance. The provider administers the ORS at the beginning of the therapy session and the SRS toward the end of the session. The provider and consumer discuss the consumer's ratings for both measures on a session-by-session basis to encourage the consumer's engagement in treatment, improve therapeutic alliance, and keep the sessions focused on the concerns identified by the consumer.
A final example of routine outcomes monitoring is the Improving Access to Psychological Therapy program, which currently operates throughout much of England (Department of Health 2012). This treatment model contains several components, including assessments that identify the individual's concerns and treatment goals at initial contact and track symptom reduction and progress toward those goals. Participating providers must ensure that at least 90 percent of consumers who are seen at least twice receive pre-treatment and post-treatment assessments and have a score on the main outcome measures. In addition, these providers receive weekly feedback and clinical supervision to discuss adjusting treatment based on information from the assessments. Information from these assessments, as well as other clinical information, is stored in an electronic database that therapists and care managers can access to monitor consumer progress and that managers can use to monitor care and identify opportunities for quality improvement.
The measurement strategies described above, as well as others, have demonstrated promising results. Several randomized trials have found that assessing symptoms and functioning at regular intervals and giving the results as feedback to providers helps to identify individuals at risk for poor outcomes, prevents the worsening of symptoms, and decreases the time to positive outcomes (Bickman et al. 2011; Whipple and Lambert 2011; Shimokawa et al. 2010; Lambert et al. 2005). One meta-analysis of trials that examined feedback given to mental health providers during the course of treatment found a modest positive effect on short-term consumer outcomes but no effect on treatment duration, costs, or longer-term consumer outcomes (although very few studies included information on treatment costs or duration) (Knaup et al. 2009). The same meta-analysis found that feedback to providers had a larger positive effect on short-term outcomes if: (1) the feedback included information on mental health progress over time (versus providing information about current status only); (2) both the consumer and provider received feedback (versus only one of them); and (3) feedback was given more than once. These findings are consistent with the experience of one managed behavioral health organization, which found that six-month outcomes were better among consumers whose therapist reported using the information provided in the progress reports compared with consumers whose provider received the reports but did not report using them (Azocar et al. 2007). These findings underscore the importance of having user-friendly mechanisms that enable providers and consumers to use the feedback from outcome measures.
Outcome measures can be used for clinical decision making and to engage consumers in care. As described in the examples above, routine outcomes monitoring can serve at least two purposes: (1) to help track consumer progress and identify individuals who fail to respond to treatment; and (2) to encourage consumer engagement in treatment.
Identifying individuals who fail to respond to treatment is particularly important. One study of over 6,000 individuals who received an average of four psychotherapy sessions in community-based treatment settings found that only about 35 percent improved and about 8 percent experienced worse symptoms or functioning (Hansen et al. 2002). Likewise, studies of children and youth receiving mental health care have found that as many as 24 percent get substantially worse during treatment (Warren et al. 2010). By drawing on data from outcome assessments, the measurement systems described in this paper are able to identify individuals experiencing treatment failure by comparing the trajectory of their progress with statistically generated expected results (Boswell et al. 2013). Some studies have suggested that such measurement approaches can identify 85-100 percent of individuals who get worse during treatment (Ellsworth et al. 2006; Lambert et al. 2002), which is better than relying on clinical judgment alone (Hannan et al. 2005). The information from assessments can also be combined with claims data to implement algorithms that identify individuals at risk for hospitalization (McAleavey et al. 2012).
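The trajectory comparison described above can be sketched in simple terms. This is a minimal illustration, not any vendor's algorithm: the linear expected-change benchmark, the function names, and the tolerance band are all assumptions. Deployed systems derive expected trajectories and failure cutoffs empirically from large clinical samples.

```python
def expected_score(baseline: float, session: int,
                   change_per_session: float = -1.5) -> float:
    """Hypothetical linear benchmark: symptom scores are expected to
    fall by a fixed amount each session. Real monitoring systems fit
    expected trajectories statistically rather than assuming linearity."""
    return baseline + change_per_session * session

def is_off_track(baseline: float, observed: float, session: int,
                 tolerance: float = 5.0) -> bool:
    """Flag a consumer whose observed score (higher = worse) exceeds
    the expected trajectory by more than the tolerance band."""
    return observed > expected_score(baseline, session) + tolerance

# Baseline 60; by session 6 the benchmark expects a score of 51.
print(is_off_track(60, 59, 6))  # well above the 56 cutoff -> True
print(is_off_track(60, 54, 6))  # within the tolerance band -> False
```

In practice, such a flag would trigger the provider alerts and suggested-practice reports described for systems like TOP.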
Consumers can directly benefit from receiving feedback on outcome measures. Some clinicians and researchers have noted that even slight improvements in outcome measures can be encouraging for consumers and can help enhance therapeutic alliance (Youn et al. 2012). Some routine outcome monitoring systems include mechanisms to collect additional information from consumers at risk for poor outcomes. For example, a consumer who is not improving would be asked to report additional information about his or her social support and recent life events, which may provide critical contextual information to help the therapist adjust treatment (Boswell et al. 2013). Thus, repeated assessments may provide another tool for consumers to share information, engage in discussion about their treatment goals and progress, and make treatment decisions.
Outcome measures may overcome the limitations of structure and process measures. To some extent, the measurement of outcomes can be accomplished without regard to the specific content of the treatment being delivered. Outcome measurement shifts the focus from the content of the treatment, which may vary across consumers and time, to the result of the treatment. Nonetheless, measuring how psychotherapy was delivered in tandem with outcomes may help to provide insight into which components of therapy and common factors across psychotherapies produce positive results in typical care settings. Measures that focus directly on outcomes and inform clinical decision making as well as quality improvement may be more appealing than measures of specific processes of care, particularly when there is no agreement on what constitutes high-quality treatment or when there is inadequate evidence that specific structures and processes of care are strongly associated with outcomes.
In sum, outcome measures may have benefits at multiple levels of the health care system. The measurement of outcomes offers an approach for making care more patient-centered by enabling consumers (as well as other important individuals in their lives) to report information about their symptoms and functioning. For consumers, measures that focus on functional outcomes--such as relationships, employment, and engagement in the community--provide direct feedback that they can use for self-monitoring and making treatment decisions. Providers can use such measures to identify individuals who are not responding to treatment or may require adjustments to their treatment. Likewise, improvements in consumer outcomes may signal to consumers, providers, or health plans that more intensive or sustained treatment may have limited additional benefit to the individual or family. The tools used to monitor the progress of treatment can be adapted to the service setting and the population. Providers, health plans, and the broader health care system can use performance on these measures to monitor outcomes and identify opportunities for quality improvement as well as promising practices.
Health systems must overcome several obstacles to widely implement psychotherapy outcome measures, including: (1) selecting outcome measures that are meaningful for consumers, providers, and other stakeholders; (2) deciding on the appropriate level of reporting and strategies for making fair comparisons across providers, plans, or systems; (3) overcoming providers' lack of familiarity with outcome measures; and (4) incorporating the administration and reporting of measures into clinical workflows so that it becomes a routine part of therapy (Boswell et al. 2013; Harding et al. 2011). Here we briefly discuss these challenges.
Selecting appropriate outcome assessments. There are many choices of outcome assessments. Some assessments focus on symptom severity among specific diagnostic groups; others focus more broadly on functioning across various life domains. Some of these assessments are proprietary. The selection of the best assessment may depend on the target population, treatment setting, and end use of the information. Although it is important to have some flexibility in which instruments are used to assess outcomes, the use of very different assessments across providers, health plans, or states may impede comparisons. In addition, because providers typically belong to several health plans and receive reimbursement through various state and federal funding streams--each with its own reporting requirements--aligning outcome reporting efforts would decrease burden.
In addition to measuring symptoms and functioning, experts and measurement stakeholders, including the Measures Application Partnership Dual Eligible Beneficiaries Workgroup, have recommended that performance measures should assess goal setting and goal achievement (NQF 2012). For example, the framework guiding measure development for the CMS EHR incentive program calls for steps in building performance measures based on patient-reported outcomes. Experts recommended gradually introducing performance measures that include goal setting, goal attainment, and improvement in outcomes over time (Torda 2013). This goal-setting approach draws on the literature on goal attainment scaling, which was developed in the mental health field and has been implemented in a variety of settings and populations (Kiresuk et al. 1994).
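Goal attainment scaling has a standard summary statistic, the Kiresuk-Sherman T-score, which combines per-goal attainment ratings (on a -2 to +2 scale, with 0 representing the expected outcome) into a single score centered at 50. A minimal sketch, assuming equal goal weights by default and the conventional inter-goal correlation of 0.30; the function name is illustrative.

```python
from math import sqrt

def gas_t_score(attainments, weights=None, rho=0.3):
    """Kiresuk-Sherman goal attainment scaling T-score.
    attainments: one rating per goal on the -2..+2 scale
    (0 = expected level of outcome); rho is the assumed
    inter-goal correlation, conventionally set to 0.30."""
    if weights is None:
        weights = [1.0] * len(attainments)  # equal weighting by default
    num = 10.0 * sum(w * x for w, x in zip(weights, attainments))
    den = sqrt((1 - rho) * sum(w * w for w in weights)
               + rho * sum(weights) ** 2)
    return 50.0 + num / den

# All goals achieved exactly at the expected level -> T-score of 50
print(gas_t_score([0, 0, 0]))  # 50.0
```

Scores above 50 indicate better-than-expected attainment across goals, which is what makes the statistic usable as an improvement-over-time performance signal of the kind the experts cited above recommend.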
Ideally, outcome assessments would be inexpensive to collect, impose minimal data collection burden on providers and consumers, apply to broad populations, and have some comparability in terms of reliability and validity. Stakeholders could draw from a common menu of outcome assessments that would facilitate comparisons. There are some resources on which to build; the MacArthur Foundation and Project IMPACT have undertaken efforts to develop a tool kit of measures for depression (Harding et al. 2011), and the Patient-Reported Outcomes Measurement Information System initiative, sponsored by the HHS National Institutes of Health, has assembled measures and items that assess symptoms and functioning. When these measures are used for quality improvement within an organization, the goal may be simply improvement without a specific target. However, if different measures were used across providers or health plans for the purposes of public reporting and accountability, health plans or other entities would need to use methods to make equitable comparisons. Statistical concepts, such as the Reliable Change Index, could be used.
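The Reliable Change Index mentioned above has a simple closed form, due to Jacobson and Truax: the pre-post difference divided by the standard error of that difference, computed from the instrument's baseline standard deviation and test-retest reliability. The sketch below is illustrative; the example's instrument properties are assumed, not drawn from any measure named above.

```python
from math import sqrt

def reliable_change_index(pre: float, post: float,
                          sd_baseline: float, reliability: float) -> float:
    """Jacobson-Truax Reliable Change Index. |RCI| > 1.96 indicates a
    pre-post change unlikely to reflect measurement error alone, which
    lets different instruments be compared on a common footing."""
    se_measurement = sd_baseline * sqrt(1 - reliability)
    s_diff = sqrt(2 * se_measurement ** 2)  # SE of the difference score
    return (post - pre) / s_diff

# Example with assumed instrument properties (SD = 7, reliability = 0.9);
# a drop from 20 to 10 comfortably exceeds the 1.96 threshold.
rci = reliable_change_index(pre=20, post=10, sd_baseline=7.0, reliability=0.9)
print(abs(rci) > 1.96)  # True -> reliable improvement
```

Because the index normalizes change by each instrument's own measurement error, it offers one route to the equitable cross-instrument comparisons the text calls for.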
Lack of familiarity with outcomes monitoring. Historically, academic training for psychiatrists and other behavioral health providers has not included learning how to incorporate outcome assessments into clinical practice. Some providers may be reluctant to use assessments because they fear it could damage their relationships with consumers or threaten their autonomy (Boswell et al. 2013). Providers may also not see the value of using repeated assessments based on standardized scales (Lambert et al. 2005) or may not want their outcomes compared with other therapists (Youn et al. 2012; Okiishi et al. 2006).
Providers would need training to learn how to introduce assessments to consumers, interpret the results, and use those results for clinical decision making and quality improvement. There is surprisingly little research to guide how providers should approach the steps involved in introducing and administering mental health assessment tools and discussing the results of those tools with consumers (Wissow et al. 2013).
Getting providers to use feedback can be a formidable challenge. One study found that even when outcome assessments were required, most mental health providers reported that they did not use the feedback for clinical decision making (Garland et al. 2003). It is unclear what strategies work best for providing feedback and how to structure that feedback in a manner that is helpful. Most outcome measurement and feedback strategies in psychotherapy do not give specific instruction on how providers should use feedback, allowing providers to use their clinical judgment instead (Lambert et al. 2005). Some feedback strategies have successfully used color-coded systems that correspond to consumer progress, and there is some evidence that certain clinical support tools can help providers use feedback, but further research is needed to understand how these strategies and tools can work in typical community-based treatment settings (Whipple and Lambert 2011). Health plans have employed various strategies for giving feedback to providers. For example, PacifiCare Behavioral Health sent quarterly reports to providers that contained a summary of progress for their patient population (Brown and Jones 2005). They also sent letters to providers when a consumer failed to demonstrate improvement, which encouraged the provider to keep that individual engaged in treatment and offered to pre-authorize more intensive services. They also sent letters to providers when individuals responded well to treatment; the idea was to acknowledge the good outcome and suggest that longer-term treatment might not necessarily result in a better outcome. It is unclear to what extent providers may perceive such feedback as limiting their autonomy or attempting to restrict access to care--potentially adding to their reservations about participating in outcome measurement systems.
Making outcome measurement part of routine care. Providers and health plans will need to adopt new processes for measuring consumer outcomes and for using this information to improve the quality of care. Some providers may not have the time or resources to administer, score, or interpret assessments. Reimbursement models that pay for the administration of such outcome measures in mental health, similar to routine medical tests, may encourage their use (Boswell et al. 2013). There may also be opportunities to integrate these measures into existing reporting programs, such as Meaningful Use, which would provide a financial incentive. Moreover, providers will be challenged to incorporate the use of measures into their routine practice rather than treating it as an additional activity. Some providers with limited time may be able to rely on non-clinical staff, such as medical assistants or care managers, to administer and score the measures. Web-based or computer-based screening tools could ease the administration of measures, but smaller practices might not find investments in these technologies feasible. Health plans or large health care delivery systems may be well-positioned to provide the infrastructure to facilitate the measurement of outcomes and give feedback to providers, as in the examples described above.
Making fair comparisons. If outcome measures are used for public reporting or accountability they may require risk adjustment for two reasons: (1) to ensure that performance is not attributable to differences in severity of illness or other factors that may be beyond the control of the provider, organization, health plan, etc.; and (2) to guard against the possibility that providers or health plans would have an incentive to avoid treating/enrolling consumers with more severe problems. Methods for risk adjustment in mental health care are limited, in part because of the incomplete data on severity of illness and other consumer characteristics in claims or medical records. Some of the outcome measurement systems described in this paper have employed risk adjustment strategies that may offer guidance, whereas others are proprietary. Some potential alternatives to risk adjustment include reporting on stratified populations or examining whether there was meaningful change in clinical care in response to lack of improvement (Kerr et al. 2012). In addition, states, health plans, and providers could use measures to monitor incremental improvements rather than absolute values (Kilbourne et al. 2010). For example, a provider or health plan would be held accountable for whether consumer outcomes are improving or meeting a benchmark from one year to the next rather than being assessed and compared at only one point in time. Another possible variation might be to provide incentives to organizations that can demonstrate clinical improvement for a minimally acceptable percentage of their consumers rather than require improvement across the entire cohort.