Joshua Breslau, Mark J. Sorbero, Daniela Kusuke, Hao Yu, Deborah M. Scharf, Nicole Schmidt Hackbarth, and Harold Alan Pincus
This report describes an extension of the RAND Corporation's evaluation of the Substance Abuse and Mental Health Services Administration's Primary and Behavioral Health Care Integration (PBHCI) grants program. PBHCI grants are designed to improve the overall wellness and physical health (PH) status of people with serious mental illness (SMI) or co-occurring substance use disorders by supporting the integration of primary care and preventive PH services into community behavioral health (BH) centers where individuals already receive care. From 2010 to 2013, RAND conducted a program evaluation of PBHCI, describing the structure, process, and outcomes for the first three cohorts of grantee programs (awarded in 2009 and 2010). That evaluation found wide variation in program structures, a range of implementation barriers, and some consumer-level improvements in PH outcomes (e.g., cholesterol, diabetes management). The current study extends previous work by investigating the impact of PBHCI on consumers' health care utilization, total costs of care to Medicaid, and quality of care in three states.
This report was prepared under contracts #HHSP23320095649WC and #HHSP23337015T between HHS's ASPE/DALTCP and the RAND Corporation. For additional information about this subject, you can visit the DALTCP home page at https://aspe.hhs.gov/office-disability-aging-and-long-term-care-policy-daltcp or contact the ASPE Project Officer, Joel Dubenitz, at HHS/ASPE/DALTCP, Room 424E, H.H. Humphrey Building, 200 Independence Avenue, S.W., Washington, D.C. 20201; Joel.Dubenitz@hhs.gov.
DISCLAIMER: The opinions and views expressed in this report are those of the authors. They do not reflect the views of the Department of Health and Human Services, the contractor, or any other funding organization. This report was completed and submitted in October 2016.
TABLE OF CONTENTS
- Poor Health Outcomes and High Costs of Care for Adults with Serious Mental Illness
- Integrated Care May Improve Health Care Quality and Reduce Overutilization of Intensive Services
- Effects of Integrated Care on Health Care Costs
- Primary and Behavioral Health Care Integration Grants
- Estimating Health Care Quality, Service Utilization, and Cost of Care from Public Payer Claims
- Clinic-Level Differences
- Research Questions
- Selection of States
- Data Sources
- Identification of Primary and Behavioral Health Care Integration Clinics in the Claims Data
- Identification of Comparison Clinics
- Outcome Measures
- Statistical Analyses
- Supplemental Analysis of Continuously Treated Consumers
- Sample Descriptions
- Utilization Measures
- Cost of Medicaid Reimbursements
- Quality Measures
- Supplemental Analysis of Continuously Treated Individuals
- Results Summary
- Utilization of Emergency Department and Inpatient Services
- Cost of Care
- Quality Measures
- Study Limitations
- APPENDIX A: Quality and Utilization Measures Considered for Study
- APPENDIX B: Year-by-Year Estimates of PBHCI Effects on Measures of Utilization, Costs, and Quality
LIST OF FIGURES
- FIGURE 2.1: Overlap Between PBHCI Grantee Clinical Activities and Available Billing Data
LIST OF TABLES
- TABLE 2.1: Number of PBHCI Clinics in the States Chosen for Analysis by Cohort
- TABLE 2.2: Years of Data Included in Analyses by State
- TABLE 2.3: Numbers and Sample Sizes During the Pre-PBHCI Year for the Comparison Clinics Selected for Each PBHCI Cohort Within Each State
- TABLE 2.4: Comparison of Consumer-Years Enrolled in PBHCI with Consumer-Years Identified in Clinic Caseloads in Medicaid Claims Data, for PBHCI Implementation Years Included in Analyses
- TABLE 3.1: Sizes and Selected Characteristics of the PBHCI and Comparison Groups Samples
- TABLE 3.2: DD Estimates of the Impact of PBHCI on Utilization Measures
- TABLE 3.3: DD Estimates of the Impact of PBHCI on Medicaid Costs
- TABLE 3.4: Impacts of PBHCI on Quality of Care Measures Based on DD Model
- TABLE 3.5: Continuously Treated Sample as a Proportion of the Pre-PBHCI Sample
- TABLE 3.6: Analysis in Samples Restricted to Continuously Treated Consumers
- TABLE 3.7: Summary of DD Results Across States and Cohorts
- TABLE A.1: Utilization and Quality Measures Considered for Study, Drawn from New York State's PSYCKES or NQF
- TABLE B.1: Utilization and Quality Measures in the PBHCI and Comparison Clinics During the Pre-PBHCI and Post-PBHCI Period, State 1, Cohort 1
- TABLE B.2: DD Estimates of the Impact of PBHCI on Utilization and Quality Measures, State 1, Cohort 1
- TABLE B.3: DD Estimates of the Impact of PBHCI on Medicaid Costs, State 1, Cohort 1
- TABLE B.4: DD Results for Cost Measures, State 1, Cohort 1
- TABLE B.5: Utilization and Quality Measures in the PBHCI and Comparison Clinics During the Pre-PBHCI and Post-PBHCI Period, State 2, Cohort 1
- TABLE B.6: DD Results for Utilization and Quality Measures, State 2, Cohort 1
- TABLE B.7: DD Results for Cost Measures, State 2, Cohort 1
- TABLE B.8: DD Results for Cost Measures, State 2, Cohort 1
- TABLE B.9: Utilization and Quality Measures in the PBHCI and Comparison Clinics During the Pre-PBHCI and Post-PBHCI Period, State 2, Cohort 3
- TABLE B.10: Utilization and Quality Measures, State 2, Cohort 3
- TABLE B.11: DD Results for Cost Measures, State 2, Cohort 3
- TABLE B.12: DD Results for Cost Measures, State 2, Cohort 3
Since 2009, the U.S. Department of Health and Human Services (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA) has supported provision of physical health (PH) care services by specialty behavioral health (BH) clinics through its Primary and Behavioral Health Care Integration (PBHCI) grant program. In 2014, the RAND Corporation completed an evaluation report examining the services supported by the PBHCI grants and their impact on health outcomes. The HHS Office of the Assistant Secretary for Planning and Evaluation (ASPE) contracted with RAND for this study to extend RAND's evaluation using Medicaid claims data to examine the impact of PBHCI on utilization of emergency department and inpatient care, costs of care, and quality of care. This report presents results of analyses of the impact of PBHCI grant programs on those outcomes in three states. The report is addressed to policymakers at ASPE and SAMHSA as well as the broader mental health policy and advocacy community.
PBHCI grants were designed to improve the overall wellness and PH status of people with serious mental illness (SMI) or co-occurring substance use disorders by supporting the integration of primary care and preventive PH services into community BH centers where individuals already receive care. From 2010 to 2013, RAND conducted a program evaluation of PBHCI, describing the structure, process, and outcomes for the first three cohorts of grantee programs (one cohort awarded in 2009 and two in 2010). Resulting reports describe wide variation in program structures, a range of implementation barriers, and some consumer-level improvements in PH outcomes (e.g., cholesterol, indicators of diabetes). The current study extends previous work by investigating the impact of PBHCI on consumers' health care utilization, total costs of care, and quality of care received using Medicaid claims data, which were not available in the previous evaluation. Specifically, we address the following research questions:
What was the impact of PBHCI on utilization of emergency department and inpatient services?
One of the major motivations for improving the quality of primary care services for adults with SMI is to shift care away from unnecessary or preventable emergency department visits or inpatient hospitalizations. The claims data allowed us to examine utilization of emergency department and inpatient services.
What was the impact of PBHCI on costs of care to Medicaid?
Improvements in care for PH conditions are likely to have complex cost implications for Medicaid. The claims data allowed us to examine the impact of PBHCI on the total costs of care per person and to break these costs down by the site of care to gain insight into how PBHCI affected each of these components of total costs of care.
What was the impact of PBHCI on the quality of health care for PH conditions for the people treated in PBHCI grantee clinics?
By improving primary care services, PBHCI was expected to improve care for PH conditions. Although the prior evaluation documented some of these improvements, the current study examined the impact of PBHCI on quality of care from a different perspective (Medicaid), which included documentation of preventive health services provided outside of each PBHCI clinic.
The research was conducted in RAND Health, a division of the RAND Corporation. A profile of RAND Health and abstracts of its publications can be found at http://www.rand.org/health.
The following acronyms are mentioned in this report and/or appendices.
| ASPE | HHS Office of the Assistant Secretary for Planning and Evaluation |
| CI | Confidence Interval |
| CMHC | Community Mental Health Center |
| CMS | HHS Centers for Medicare and Medicaid Services |
| HHS | U.S. Department of Health and Human Services |
| IMPACT | Improving Mood--Promoting Access to Collaborative Treatment |
| LL | Lower Limit of the 95% CI |
| MAX | Medicaid Analytic Extracts |
| NPI | National Provider Identifiers |
| NQF | National Quality Forum |
| PBHCI | Primary and Behavioral Health Care Integration |
| PSYCKES | Psychiatric Services and Clinical Knowledge Enhancement System |
| ResDAC | Research Data Assistance Center |
| RFA | Request for Applications |
| SAMHSA | HHS Substance Abuse and Mental Health Services Administration |
| SMI | Serious Mental Illness |
| UL | Upper Limit of the 95% CI |
| VA | U.S. Department of Veterans Affairs |
This report describes an extension of the RAND Corporation's evaluation of the U.S. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration's (SAMHSA's) Primary and Behavioral Health Care Integration (PBHCI) grants program. PBHCI grants are designed to improve the overall wellness and physical health (PH) status of people with serious mental illness (SMI) or co-occurring substance use disorders by supporting the integration of primary care and preventive PH services into community behavioral health (BH) centers where individuals already receive care. From 2010 to 2013, RAND conducted a program evaluation of PBHCI, describing the structure, process, and outcomes for the first three cohorts of grantee programs (awarded in 2009 and 2010). That evaluation found wide variation in program structures, a range of implementation barriers, and some consumer-level improvements in PH outcomes (e.g., cholesterol, diabetes management). The current study extends previous work by investigating the impact of PBHCI on consumers' health care utilization, total costs of care to Medicaid, and quality of care in three states.
Adults with SMI suffer disproportionately from PH conditions. Compared with their non-SMI peers, adults with SMI are at increased risk for a range of acute and chronic diseases, including diabetes, cardiovascular disease, respiratory disease, cancer, and infectious disease (Jones et al., 2004; McGinty et al., 2012; Parks et al., 2006; SAMHSA, 2012). Life expectancy estimates for adults with SMI range from eight to 30 years lower than for the general population (Chang et al., 2011; Colton and Manderscheid, 2006; Saha, Chant, and McGrath, 2007; Walker, McGee, and Druss, 2015). Co-occurring medical and BH conditions are also disproportionately costly for public payers of health care, primarily Medicaid and Medicare (Kasper, Watts, and Lyons, 2010; Melek, Norris, and Paulus, 2014). These disparities have been attributed to modifiable risk factors such as smoking, alcohol and substance use, poor nutrition, lack of exercise, obesity, and high-risk sexual behaviors (Parks et al., 2006); side effects of psychotropic medications (Newcomer, 2007); housing instability and low socioeconomic status (Katon, 2003); and limited access to quality medical care (Lawrence and Kisely, 2010).
Fragmentation between the general medical and BH sectors--in terms of clinical practice, administration, and financing--is widely considered to be a significant contributor to the poor overall health outcomes associated with SMI (Druss, 2007; Horvitz-Lennon, Kilbourne, and Pincus, 2006; Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders, Board on Health Care Services, Institute of Medicine, 2006; Pincus et al., 2007; President's New Freedom Commission on Mental Health, 2003). As such, initiatives that promote medical and BH integration are expected to address the triple aims of health care reform: improved care experiences, improved health outcomes, and reduced per-capita costs (Katon and Unützer, 2013).
Improvements in care experience and health outcomes are expected to result from increased access to primary care and preventive medical services (because of service colocation or facilitated referrals) and increased collaboration and learning across BH and PH care providers (Alakeson, Frank, and Katz, 2010). Reductions in health care costs for adults with SMI are expected to result through decreases in hospitalizations and emergency department visits for preventable health conditions and fewer inappropriate visits to emergency departments (e.g., for primary care needs) (Nolte and Pitchforth, 2014). In practice, however, the effects of integration on health care costs for adults with SMI may be more complex. Given high levels of previously unmet medical needs, integrated care programs for adults with SMI may lead to increased visits to primary and specialty medical care, which can increase the cost of care particularly for consumers who had little to no contact with PH care services before.
In the current study, we examined the impact of PBHCI-funded integrated care for adults with SMI on health care utilization, total costs of care, and quality of care received using Medicaid claims data. Medicaid claims data provide a valuable perspective because they reflect a wide scope of services that (Medicaid-enrolled) individuals receive, which is particularly important given that adults with SMI may be transient (receiving services across multiple locations and health systems) and are likely to receive services across multiple levels of care (i.e., hospital, crisis, emergency, outpatient).
The prior RAND evaluation of PBHCI did not have information on utilization and costs of health care outside of the PBHCI grantee clinics. It also did not have information on utilization, costs, and quality among consumers treated in non-PBHCI clinics to whom the PBHCI enrollees could be compared. The current study was designed to address these limitations and, specifically, to investigate the following research questions:
What was the impact of PBHCI on utilization of emergency department and inpatient services?
One of the major motivations for improving the quality of primary care services for adults with SMI is to shift care away from unnecessary or preventable emergency department visits or inpatient hospitalizations. The claims data allowed us to examine utilization of emergency department and inpatient services and to distinguish utilization for PH conditions, where effects are anticipated, from utilization for BH conditions, which are not directly targeted by PBHCI.
What was the impact of PBHCI on costs of care to Medicaid?
Improvements in care for PH conditions are likely to have complex cost implications for Medicaid. The claims data allowed us to examine the impact of PBHCI on the total costs of care per person and to break these costs down by the site of care to gain insight into how PBHCI affected each of these components of total costs of care.
What was the impact of PBHCI on the quality of health care for PH conditions for the people treated in PBHCI grantee clinics?
By improving primary care services, PBHCI was expected to improve care for PH conditions. Although the prior evaluation documented some of these improvements, the current study examined the impact of PBHCI on quality of care from a different (Medicaid) perspective, which included documentation of services provided outside of each PBHCI clinic. These measures reflect not only the care that was directly provided but the programs' success connecting patients with care from external medical providers. The measures include services that were not provided by the PBHCI clinics, such as screening exams for colorectal cancer and follow-up after discharge from a hospitalization for mental illness.
This study used Medicaid claims data to estimate the impact of PBHCI grants on utilization, costs of care, and quality, using a difference-in-differences model. This model compared change in the outcomes associated with introduction of the PBHCI program into the grantee clinics with change over the same time period in a set of comparison clinics from the same state that did not receive PBHCI grants. The study was organized as a series of three state-level case studies. States were selected based on a number of state-specific characteristics (e.g., data availability, number of PBHCI grantees).
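In simplified form, and with illustrative notation rather than the study's exact specification, a difference-in-differences model of this kind can be written as:

```latex
Y_{ict} = \beta_0 + \beta_1\,\mathrm{PBHCI}_c + \beta_2\,\mathrm{Post}_t
        + \beta_3\,(\mathrm{PBHCI}_c \times \mathrm{Post}_t) + \varepsilon_{ict}
```

where \(Y_{ict}\) is the outcome for consumer \(i\) in clinic \(c\) at time \(t\), \(\mathrm{PBHCI}_c\) indicates grantee clinics, \(\mathrm{Post}_t\) indicates the post-award period, and \(\beta_3\) is the difference-in-differences estimate: the change in outcomes at grantee clinics over and above the change at comparison clinics over the same period.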
A group of comparison clinics was selected to represent clinics within each state based on information in the claims data sets. Specifically, we examined four claims-based provider characteristics: pattern of utilization, proportion of claims with a primary diagnosis of a BH condition, proportion of claims with a primary diagnosis of schizophrenia, and caseload size. For PBHCI and comparison clinics, all consumers with at least one visit to the clinic with an SMI diagnosis during a year were considered members of that clinic's caseload for that year and, thus, included in analyses.
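The report does not specify the exact matching procedure, but selection on claims-based clinic characteristics can be sketched as a nearest-neighbor match on standardized features. The sketch below is illustrative only; the function names, field names, and the standardized Euclidean distance are assumptions, not the study's implementation.

```python
import math

def standardize(values):
    """Z-score a list of clinic-level measures (population SD)."""
    mean = sum(values) / len(values)
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values)) or 1.0
    return [(v - mean) / sd for v in values]

def nearest_comparison(pbhci_clinic, candidates, features):
    """Pick the candidate clinic closest to the PBHCI clinic in
    standardized Euclidean distance across the claims-based features."""
    # Standardize each feature across the full pool (PBHCI clinic + candidates)
    pool = [pbhci_clinic] + candidates
    z = {}
    for f in features:
        zs = standardize([c[f] for c in pool])
        for clinic, val in zip(pool, zs):
            z.setdefault(id(clinic), {})[f] = val
    target = z[id(pbhci_clinic)]

    def dist(c):
        return math.sqrt(sum((z[id(c)][f] - target[f]) ** 2 for f in features))

    return min(candidates, key=dist)
```

In this sketch, each clinic is a dict of claims-derived features (e.g., share of claims with a BH primary diagnosis, share with schizophrenia, caseload size), and the closest non-grantee clinic in the standardized feature space is chosen as the comparison.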
Three types of outcomes were examined: measures of emergency department and inpatient utilization, costs of care, and quality indicators. Utilization measures included any emergency department or inpatient visits for BH or PH conditions and frequent emergency department or inpatient use (defined as three or more emergency department visits for a BH condition, four or more emergency department visits or inpatient stays for a PH condition, and four or more emergency department visits or inpatient stays for any condition). Cost outcomes included both binary indicators of whether an individual used a type of service (e.g., an inpatient stay) and continuous measures of total costs of care (e.g., the total cost of inpatient stays among individuals with an inpatient stay). Quality of care measures included receipt of appropriate services for diabetes monitoring, flu vaccination, cancer screenings, outpatient PH care, and follow-up after hospital discharge.
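The frequent-use indicators above can be computed per person by counting claims by setting and diagnosis type. The sketch below applies the thresholds stated in the text; the record layout and field names are hypothetical, not the study's claims schema.

```python
from collections import Counter

def utilization_flags(claims):
    """Compute per-person frequent-use indicators from claim records,
    each a dict with 'person', 'setting' ('ed' or 'inpatient'), and
    'dx' ('bh' or 'ph'). Thresholds follow the report's definitions:
    3+ ED visits for a BH condition; 4+ ED visits or inpatient stays
    for a PH condition; 4+ ED visits or inpatient stays for any condition."""
    ed_bh = Counter()
    ed_ip_ph = Counter()
    ed_ip_any = Counter()
    for c in claims:
        if c["setting"] == "ed" and c["dx"] == "bh":
            ed_bh[c["person"]] += 1
        if c["setting"] in ("ed", "inpatient") and c["dx"] == "ph":
            ed_ip_ph[c["person"]] += 1
        if c["setting"] in ("ed", "inpatient"):
            ed_ip_any[c["person"]] += 1
    people = {c["person"] for c in claims}
    return {
        p: {
            "frequent_ed_bh": ed_bh[p] >= 3,
            "frequent_ed_ip_ph": ed_ip_ph[p] >= 4,
            "frequent_ed_ip_any": ed_ip_any[p] >= 4,
        }
        for p in people
    }
```

Note that the BH threshold applies to emergency department visits only, while the PH and any-condition thresholds pool emergency department visits and inpatient stays, mirroring the definitions in the text.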
Utilization, cost, and preventive services were examined in a total of five cohorts of PBHCI clinics: two cohorts in State 1, two cohorts in State 2, and one cohort in State 3. Evidence of PBHCI effects on utilization of emergency department and inpatient services was mixed across cohorts, but two clear patterns emerged with respect to frequent use of these services. First, in all five cohorts, PBHCI was associated with a reduction relative to comparison clinics in the proportion of consumers having four or more emergency department or inpatient visits, and this reduction reached statistical significance in three of the five cohorts. Second, the reduction in frequent utilization was specific to utilization for PH conditions. In three of the five cohorts, PBHCI was associated with a reduction relative to comparison clinics in the proportion of consumers having four or more emergency department or inpatient visits with a primary diagnosis of a PH condition.
For each cohort of clinics, we examined the impact of PBHCI on total costs of care to Medicaid and on costs for specific types of services: outpatient care, emergency department visits, and inpatient stays. PBHCI was associated with a reduction relative to comparison clinics in the total costs of care per consumer in three of the five cohorts; the impact of PBHCI on total cost was not statistically significant in the remaining two. Reductions in cost for specific types of care varied across cohorts: statistically significant reductions were found in the cost of outpatient services in two cohorts, in cost per user of emergency department services in one cohort, and in cost per user of inpatient services in another. Countervailing increases were found in cost per user of inpatient services in one cohort. PBHCI was associated with a higher likelihood of having emergency department-related costs in one cohort and a lower likelihood in another.
Few of the quality of care measures for primary care services were affected by PBHCI, either positively or negatively, and no clear pattern emerged among the effects that were found. The exception was a pattern of negative effects of PBHCI on quality indicators in State 3: PBHCI clinic consumers were less likely than comparison clinic consumers to have received appropriate services, such as diabetes screenings. It is important to note that consumers (in PBHCI or comparison clinics) may indeed have received such services without their being reflected in the claims data, especially if grant funds were used to cover these services.
The current study on the impact of PBHCI on utilization of emergency department and inpatient services, total costs of care, and quality of care received for Medicaid beneficiaries yielded mixed results. We did find some evidence that PBHCI can be successful in producing positive changes in consumer health care utilization patterns. In particular, there was evidence that, in some of the groups of clinics studied, PBHCI reduced frequent utilization of emergency department and inpatient services, increased ambulatory follow-up after an emergency department or inpatient visit, and reduced total per person costs of care to Medicaid. While there was considerable variation in these effects across groups of clinics studied (across states and years awarded), there were no results in which PBHCI significantly increased total per person costs of care to Medicaid. Although we did not find positive effects of PBHCI on quality of care, Medicaid claims data may not reflect all services provided to consumers. In particular, care assessed by quality measures such as appropriate diabetes screening may have been paid with grant funds and thus may not be reflected in claims.
Results of this study should be interpreted in the light of the following limitations. First, the study was conducted in three of the 32 states that hosted PBHCI clinics during this time period. Given the variability in the results, even across these three states, it is reasonable to infer that there is wider variability in PBHCI impacts across the country. While the results demonstrate that PBHCI can have positive impacts on utilization and costs, they do not allow us to draw conclusions regarding the overall impact of the program on a national basis. Second, this study was conducted using entire clinic caseloads, while only a subset of individuals were actually enrolled in the PBHCI program. The apparent impact of the program may have been reduced by this more inclusive sample. Third, the PBHCI program requirements were being revised across the cohorts studied here and were further revised for the cohorts that came after. Therefore, these results should be interpreted as reflections of the impact of the early phase of the program. Later cohorts, which followed requirements revised in light of these early experiences, may have had different results.
Our findings raise a number of questions regarding the mechanisms of change that could be further investigated for lessons regarding continuing improvement in care. For example, although PBHCI impacts on total costs of care were similar across cohorts, the pathways through which those outcomes were achieved appear to be different in each cohort. This heterogeneity, which may result from different program implementation strategies or from different pre-PBHCI systems, deserves further investigation.
Poor Health Outcomes and High Costs of Care for Adults with Serious Mental Illness
Compared with their peers without serious mental illness (SMI), adults with SMI are at increased risk for a range of acute and chronic diseases, including diabetes, cardiovascular disease, respiratory disease, and infectious disease (Jones et al., 2004; Parks et al., 2006; Substance Abuse and Mental Health Services Administration, 2012). Life-expectancy estimates for adults with SMI are 8-30 years lower than for the general population (Chang et al., 2011; Colton and Manderscheid, 2006), and much of this disparity has been attributed to modifiable risk factors such as smoking, alcohol and substance abuse, poor nutrition, lack of exercise, obesity, and high-risk sexual behaviors (Parks et al., 2006). Other contributing factors include side effects of psychotropic medications (Newcomer, 2007), housing instability and low socioeconomic status (Katon, 2003), and limited access to quality medical care (Lawrence and Kisely, 2010).
Co-occurring medical and behavioral health (BH) conditions are also disproportionately costly for public payers of health care, primarily Medicaid and Medicare (Kasper et al., 2010; Melek, Norris, and Paulus, 2014). For example, the most costly 5 percent of Medicaid beneficiaries account for approximately 50 percent of all Medicaid spending; and, among this top 5 percent, psychiatric illnesses are present among three of the five most prevalent diagnostic pairs (Kronick, Bella, and Gilmer, 2009). A recent economic analysis of Medicare, Medicaid, and commercial claims data found that per-beneficiary costs for treating a wide range of medical conditions were 2-3 times higher for beneficiaries with co-occurring diagnoses of SMI or substance use disorder compared with those without comorbid BH diagnoses (Melek, Norris, and Paulus, 2014). The majority of disproportionate spending was on physical health (PH) rather than BH treatment (Melek, Norris, and Paulus, 2014). This largely reflects a need for improved management of chronic PH conditions, since spending on a range of chronic PH conditions was considerably higher for adults with SMI compared with adults with no SMI diagnosis.
Integrated Care May Improve Health Care Quality and Reduce Overutilization of Intensive Services
The United States health care system's traditional separation of medical and BH sectors--in terms of clinical practice, administration, and financing--is widely considered to be a significant contributor to the comparatively poor overall health outcomes associated with mental illness (Druss, 2007; Horvitz-Lennon, Kilbourne, and Pincus, 2006; Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders, Board on Health Care Services, Institute of Medicine, 2006; Pincus et al., 2007; President's New Freedom Commission on Mental Health, 2003). As such, initiatives that promote medical and BH integration--for example, through colocation of primary and BH care services, clinical changes such as collaborative care teams, or payment reforms--are increasingly common health care reforms (Druss and Mauer, 2010; Katon and Unützer, 2013; Smith et al., 2012). Most integrated care efforts focus on bringing BH care into primary care settings (Butler et al., 2008); however, the reverse model, in which primary medical care is integrated into BH settings, may be a more effective approach for populations with SMI because these individuals are more likely to have established relationships with BH providers than medical providers (Alakeson, Frank, and Katz, 2010).
Integrated care has the potential to address the triple aims of health care reform for adults with SMI: improved care experiences, improved health outcomes, and reduced per-capita costs (Katon and Unützer, 2013). Improvements in care experience and health outcomes are expected to result from increased access to primary care and preventive medical services (because of service colocation or facilitated referrals) and increased collaboration and learning across BH and PH care providers (Alakeson, Frank, and Katz, 2010). The effects of integrated care on health care costs are discussed in the next section.
Effects of Integrated Care on Health Care Costs
Integrated care has been hypothesized to reduce health care costs for adults with SMI through decreases in hospitalizations and emergency department visits for preventable health conditions and fewer inappropriate visits to emergency departments (e.g., for primary care needs) (Nolte and Pitchforth, 2014). In practice, however, the effects of integration on health care costs for adults with SMI may be more complex. Given high levels of previously unmet medical needs, integrated care programs for adults with SMI may lead to increased visits to primary and specialty medical care, which can increase the cost of care, particularly for consumers who had little to no contact with PH care services before. Relatedly, a clinic's potential for cost savings may not be realized until an integrated care program has matured such that the identification of new conditions in the clinic population has reached a steady state and the majority of services involve managing previously identified health care needs. More generally, experts have warned that, although preventive care initiatives often improve population health and reduce some types of care costs, they are often not cost saving to the health system overall. Specifically, preventive interventions tend to add more to total medical costs than they save (Russell, 2009).
The vast majority of research on the effects of integrated care on health care costs has examined models that integrate BH into medical (typically primary care) settings. In a 2014 report prepared for the American Psychiatric Association, Melek and colleagues estimated the economic impact of integrated medical and BH care on commercial and public payers and posited that "typical cost savings estimates range from 5 percent to 10 percent of total health care costs over a two to four year period" (Melek, Norris, and Paulus, 2014, p. 19). The most robust evidence of such savings is found in studies of programs, such as the Improving Mood--Promoting Access to Collaborative Treatment (IMPACT) collaborative care model (Unützer et al., 2008), which focus on specifically defined consumer populations such as older adults with depression or comorbid depression and diabetes (Katon and Unützer, 2013). In a randomized, controlled trial, consumers assigned to IMPACT had approximately $3,300 lower total health care costs compared with consumers in usual care over a four-year period (one year of intervention and three years of follow-up) (Unützer et al., 2008). No significant cost savings were found after two years of follow-up (Katon et al., 2005; Unützer et al., 2008). On the other hand, in their review of the economic impact of integrated care, Nolte and Pitchforth (2014) reported that cost outcomes are mixed and inconsistent, and described the evidence base on this topic as weak.
One of the few studies that examined the effects of integration based in mental health settings was conducted within the U.S. Department of Veterans Affairs (VA) health system, where individuals enrolled in a VA mental health clinic were randomized to receive primary care through an integrated care initiative located within the mental health clinic or through a VA general medicine clinic (Druss et al., 2001). In addition to improved PH outcomes and quality of care, consumers in the integrated care group were significantly more likely than controls to have a primary care visit (91.5 percent versus 72.1 percent).
An early analysis of Missouri's Health Home initiative, which includes management of chronic PH conditions through community mental health centers (CMHCs), compared hospital and emergency department utilization for CMHC Health Home enrollees in the year prior to enrollment and the year following enrollment (Department of Mental Health and MO Healthnet, 2013). Individuals with an SMI or serious emotional disorder were auto-enrolled in the program if they had at least $10,000 in Medicaid paid claims from July 2010 through August 2011 and if they had contact with a CMHC during that period. Researchers found a 12.8 percent reduction in hospital admission rates and an 8.2 percent reduction in emergency department use among persons continuously enrolled for 18 months; however, the analysis lacked utilization rates for a nonenrollee comparison group to determine to what degree these reductions can be attributed to the Health Home intervention.
In the current study, we extend the existing literature by describing BH-based integrated-care impacts on health care utilization, Medicaid costs, and quality for adults with SMI. Specifically, utilization, cost, and quality outcomes are compared for individuals served by a diverse set of U.S. Department of Health and Human Services (HHS) Substance Abuse and Mental Health Services Administration (SAMHSA) Primary and Behavioral Health Care Integration (PBHCI) grantee clinics and their matched controls in three demonstration states. PBHCI impacts are examined for grantees from three states that were providing care between 2010 and 2014 (or as limited by the availability of Medicaid data at the time of this research; additional detail about the study period for each state is provided in Table 2.2).
Primary and Behavioral Health Care Integration Grants
In 2009, SAMHSA initiated the PBHCI grants program to support the integration of primary care services into BH treatment settings for adults with SMI, with or without co-occurring substance use disorder.
Since 2009, PBHCI grants have been awarded yearly to annual cohorts of nonprofit community BH centers across the country. Many programs involve partnerships with primary care clinics, such as Federally Qualified Health Centers; however, some BH grantees opt to hire medical providers directly into their agency (Scharf et al., 2014). As of September 2015, SAMHSA has awarded $162,392,053 in grant funds to 187 PBHCI grantees across eight cohorts. Grantees receive four years of funding ($400,000-$500,000 per year) to support their integrated care efforts, and funding is nonrenewable (i.e., an additional PBHCI grant cannot be used to extend a previously funded program, although a single grantee agency may receive several grants to support programs in different locations and serving different consumer populations). A portion of the grant may be used for infrastructure improvements, such as implementing electronic health records that can be shared across BH and medical providers. Grantees are required to regularly collect and submit program-level and consumer-level behavioral and PH data to SAMHSA.
Awardees from the first three cohorts (awarded in 2009 and 2010) were required to provide four core program features: (1) screening and referral for PH conditions; (2) a registry/tracking system for consumer PH needs and outcomes; (3) care management; and (4) illness prevention and wellness support services. Optional program features included colocation of primary care providers (e.g., physicians, nurse practitioners) in the BH setting and embedding nurse care managers within clinical care teams.
Changes to the program over time have included several requirements for accountability, thoroughness in care, and sustainability. Specifically, the first update to the original PBHCI request for applications (RFA), issued in 2012, newly mandated that grantees provide on-site primary care services and medically necessary referrals and that grantees serve as their clients' health home. Grantees were also required to achieve health information technology Meaningful Use Stage One Standards and to submit a comprehensive sustainability plan. The most recent RFA, issued in 2015, contains additional requirements, including the use of evidence-based programs in the areas of smoking cessation, nutrition and exercise, and chronic illness self-management, plus protocols for managing blood pressure control from the Million Hearts Campaign, among others.
In 2014, RAND released the first large-scale research study on PBHCI, including the first three cohorts of grantees (n=56, awarded in 2009 or 2010) (Scharf et al., 2014). Data showed wide variation in program structures, a range of implementation barriers, and some consumer-level improvements in PH outcomes (e.g., cholesterol, indicators of diabetes). Programs that offered primary care services on more days of the week and held regular clinical meetings involving both behavioral and primary care providers were more likely to provide integrated care (Scharf et al., 2014). Additional research targeting clinical processes and outcomes in later cohorts of grantees is now underway. This newly commissioned research has the capacity to provide more comprehensive and current information about the impact of PBHCI on consumer service use and clinical outcomes (among other aspects of the program). It does not, however, include an analysis of health care costs or potential cost savings.
Although early descriptions of the PBHCI mission focused on improving consumer health outcomes, the more recent 2015 RFA listed "reducing/controlling per-capita cost of care" as an explicit goal of the program ("SAMHSA Grant Announcements," 2014). A recent study of two clinics supported by a single PBHCI grant provides the first (and, to date, only) analysis of potential health care savings from PBHCI (Krupski et al., 2016). The authors analyzed outpatient medical, inpatient hospital and emergency department claims, and billing data from the medical center with which both clinics were affiliated. Cost analyses are from the perspective of the health system that provided the claims. One of the PBHCI-supported clinics had a ten-year history of providing integrated services, while the other began providing integrated care upon receipt of the grant. Controls were propensity score matched consumers served by each clinic who were not enrolled in PBHCI. Difference-in-differences (DD) analysis showed that PBHCI consumers were more likely than controls to use outpatient medical services at both clinics. At the clinic with the established integrated care program, the percentage of PBHCI clients using outpatient medical services increased from 80 percent to 92 percent after one year of enrollment in PBHCI, while comparison-group outpatient service use changed little during the same period (61 percent to 60 percent). At the newer integrated care clinic, the percentage of PBHCI clients using outpatient medical services increased from 39 percent to 76 percent, while the comparison group showed little change in outpatient medical service use (28 percent to 31 percent). At the more established clinic, PBHCI was also associated with a reduction in inpatient hospitalizations and a trend for reduced inpatient hospital costs of $218 per member per month. PBHCI hospital-related cost savings were not observed at the newer clinic.
No PBHCI effects were observed on emergency department use or costs at either clinic.
Estimating Health Care Quality, Service Utilization, and Cost of Care from Public Payer Claims
In this study, we investigated the impact of PBHCI on consumers' overall health care utilization, total costs, and care quality using Medicaid claims in multiple clinics across three states. Unlike Krupski et al.'s (2016) analysis of claims from within a local health system, we analyzed utilization, cost of care, and quality using Medicaid claims data regardless of where care was delivered, as they provide the most readily available, detailed, and comprehensive records of service for most PBHCI grantees.
Medicaid claims data are useful for estimating enrollees' cost of care because numeric service codes are linked to standardized reimbursement rates within states. Unlike claims data from a single health system or network of providers, state and federal claims reflect the total range of services that an individual receives, which is particularly important for a study of adults with SMI who may be transient (receiving services across multiple locations and health systems) and who are likely to receive services across multiple levels of care (i.e., hospital, crisis, emergency, and outpatient). Medicaid claims data are, in many ways, the best option for creating a complete picture of consumers' quality of care.
At the same time, there are several noteworthy limitations of Medicaid claims for the purpose of this project. For example, some PBHCI service types may not appear in Medicaid claims because they were not covered at the time that services were rendered (e.g., peer services) or because providers were unlikely to bill for those services for a variety of reasons, such as policy barriers (e.g., inability to bill for primary care and BH services on the same day) or lack of familiarity with public payer billing at the start of the grant. Medicaid claims also include bundled service and managed care payments, which may not identify the specific services rendered. For people enrolled in both Medicaid and Medicare (i.e., dual eligibles), Medicaid claims also only represent a portion of services received. Additionally, the individuals who were enrolled by the grantee clinics as PBHCI participants were not asked to consent for linking their personal information with external data sources. For this reason, the analysis in this project had to be conducted using the entire caseload of each clinic being studied. The analysis of the entire caseload may underestimate the impact of PBHCI on the consumers who were enrolled in the program.
Medicaid claims data can be obtained either directly from states or from the HHS Centers for Medicare and Medicaid Services (CMS). CMS's Medicaid data include data from all 50 states and are standardized across states. Limitations of CMS Medicaid data, however, are that the data take several years (about four) to become available and are processed and aggregated so as to obscure some details of services rendered. For example, CMS Medicaid data may include bundled payments in which individual procedures are obscured. CMS data also may include only high-level identifiers for the billing institution associated with each claim, thereby limiting opportunities for clinic-level analyses, when clinics are part of multiclinic institutions. Finally, CMS Medicaid data may also exclude claims for services that are covered in only a subset of states.
An advantage of obtaining Medicaid data directly from the states themselves is that the data are likely to include more detail, including clinic-level (instead of institution-level) identifiers and specific services included within payment bundles, allowing for more detailed analyses about the quality and appropriateness of care. State data are typically also available more rapidly than CMS data, thereby allowing for analysis of more recent cohorts of grantees. A particular challenge of working with state-provided Medicaid data is that each data set is unique (with uniquely defined services and service codes), having undergone differing levels of processing. This precludes applying the same analytic algorithm and making direct comparisons of results across states.
In this study, we used CMS Medicaid and state-provided Medicaid data to maximize our ability to characterize the extent, cost, and quality of services received.
In addition to state-level differences in policy environments, there were considerable differences among participating clinics within states. Clinics within states were hospital affiliated and free standing; state, county, and independently run; and in rural, suburban, and urban areas. Depending on these factors, clinics within a state may have also experienced provider shortages (e.g., rural areas) or space shortages (e.g., in urban settings) in which to provide primary care. Some clinics had longstanding relationships with primary care partners at the beginning of the grant period, while others had their first experience providing primary care six months (or more) after receiving grant funds. While some clinics struggled to meet recruitment targets, others had demand that exceeded primary care capacity. Even within states, clinics also differed in their capacity to bill Medicaid for integrated care services; for example, some clinics began billing Medicaid right away for primary care services, but other clinics--particularly safety net clinics with less experience billing Medicaid--may have paid for a range of billable services directly from the grant until they created the administrative infrastructure for routine billing, depending on larger trends within the state. The diversity of clinics within and between states both enriches and challenges the conclusions that can be drawn from this analysis.
The PBHCI program aimed not only to provide services to consumers likely to be undertreated but to have a broad influence on their utilization of intensive health care services, the total cost of health care services, and the quality of the care they receive. The prior evaluation of PBHCI was not able to address these broader impacts because it did not have access to data that could have been used to examine them. In particular, the prior evaluation did not have information on utilization and cost of health care used by PBHCI enrollees outside of the PBHCI grantee clinic or on these outcomes among consumers treated in non-PBHCI clinics to whom the PBHCI enrollees could be compared. This project was designed to address this gap through the analysis of claims data. In particular, our goal was to address the impact that PBHCI had on utilization of emergency department and inpatient services, total costs of care, and quality of care for PH conditions. Specifically, we address the following questions:
What was the impact of PBHCI on utilization of emergency department and inpatient services?
One of the major motivations for improving the quality of primary care services for adults with SMI is to shift care away from emergency departments and prevent inpatient hospitalizations. In particular, the goal is to avoid frequent use of these intensive health care services for PH conditions when such use is inappropriate and potentially avoidable. (Frequent use may be defined as three or more claims for emergency department or inpatient services.) The claims data allow us to examine utilization of emergency department and inpatient services and to distinguish utilization for PH conditions from utilization for BH conditions, which is not directly targeted by PBHCI.
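The frequent-use logic described above can be sketched in a few lines of code. This is an illustrative sketch only: the claim records, field layout, and the "ED"/"BH" labels are hypothetical placeholders, not the study's actual claim format or measure specifications.

```python
from collections import Counter

# Hypothetical claim records: (person_id, setting, diagnosis_class).
claims = [
    ("A", "ED", "BH"), ("A", "ED", "BH"), ("A", "ED", "BH"),
    ("B", "ED", "PH"), ("B", "ED", "BH"),
    ("C", "ED", "BH"),
]

# Count BH emergency department claims per person.
bh_ed_counts = Counter(
    person for person, setting, dx in claims
    if setting == "ED" and dx == "BH"
)

# Flag frequent users: three or more BH emergency department claims.
frequent_bh_ed = {p for p, n in bh_ed_counts.items() if n >= 3}
print(sorted(frequent_bh_ed))  # only person "A" meets the threshold
```

The same counting pattern, with a threshold of four and both emergency department and inpatient claims included, would yield the "four or more" measures defined later in this chapter.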
What was the impact of PBHCI on costs of care to Medicaid?
Improvements in care for PH conditions are likely to have complex cost implications for Medicaid. On the one hand, improving primary care services is likely to identify unmet medical needs and thus increase costs of care. On the other hand, improving quality of outpatient care is likely to reduce total costs over time by preventing use of more expensive emergency department and inpatient services. The claims data allow us to examine the impact of PBHCI on the total costs of care per person and to break these costs down by the site of care to gain insight into how PBHCI affects each of these components of total costs of care.
What was the impact of PBHCI on the quality of health care for PH conditions for the people treated in PBHCI grantee clinics?
By improving primary care services, PBHCI is expected to improve care for PH conditions. Although the prior evaluation documented some of these improvements, the current study examined the impact of PBHCI on quality of care from a different (Medicaid) perspective, which includes documentation of services provided outside of each PBHCI clinic. These measures reflect not only the care that was directly provided but also the program's success at integrating care with external medical providers.
This report uses claims data to estimate the impact of PBHCI grants on utilization, costs of care, and quality using a DD model. The DD model compares change in the outcomes associated with introduction of the PBHCI program into the grantee clinics with change over the same time period in a set of comparison clinics from the same state that did not receive PBHCI grants. The primary advantage of the DD model over other alternatives is its effective control for unrelated changes in the health care system that happened to be occurring at the same time and might have affected the outcomes even in the absence of the PBHCI grants (Howell, Conway, and Rajkumar, 2015).
The analysis is conducted at the clinic level, meaning that information on the characteristics and outcomes for all of the consumers seen in a clinic during a given year, what we call the clinic's caseload for that year, is compared between the PBHCI and comparison clinics. This is the most appropriate level of comparison because PBHCI is a clinic-level intervention. Grants were provided to clinics so that they could provide PH care services to their consumers regardless of whether they were established consumers or new to the clinic. However, it is also important to note that there is a large amount of turnover in clinic caseloads from year to year so that the individuals in the PBHCI and comparison caseloads are not the same during the pre-PBHCI and the PBHCI implementation periods. A supplementary analysis, which we describe at the end of this chapter, addresses this issue by focusing on the subset of consumers treated in both the pre-PBHCI and the PBHCI implementation periods.
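The core of the DD comparison described above can be expressed as a simple contrast of four means. The sketch below uses made-up clinic-level mean outcomes (e.g., PH emergency department visits per caseload member per year); the numbers are illustrative, not study results, and the actual models reported later adjust for additional covariates.

```python
# Clinic-level mean outcome by group and period (illustrative values).
means = {
    ("pbhci", "pre"): 0.50, ("pbhci", "post"): 0.42,
    ("comparison", "pre"): 0.48, ("comparison", "post"): 0.46,
}

# DD estimate: change in PBHCI clinics minus change in comparison
# clinics over the same period, which nets out secular trends.
dd = ((means[("pbhci", "post")] - means[("pbhci", "pre")])
      - (means[("comparison", "post")] - means[("comparison", "pre")]))
print(round(dd, 2))  # -0.06: the outcome fell more in PBHCI clinics
```

A negative DD estimate here would indicate that the outcome declined more (or rose less) in the PBHCI clinics than in the comparison clinics, which is the sense in which the design controls for concurrent system-wide changes.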
This chapter on the methods used in the analysis is organized into seven sections: (1) Selection of States; (2) Data Sources; (3) Identification of PBHCI Clinics in the Claims Data; (4) Identification of Comparison Clinics; (5) Outcome Measures; (6) Statistical Analyses; and (7) Supplemental Analysis of Continuously Treated Consumers.
This project was organized as a series of state-level case studies. Analyses were grouped by state for two reasons. First, Medicaid is a state-administered program, and many regulations that affect integrated care for beneficiaries with SMI vary from state to state. Conducting comparisons between PBHCI and comparison clinics within states helps control for these state-level regulatory variations. Second, Medicaid claims data are generally available on a state-by-state basis, whether they are obtained directly from the state itself or from the Federal Government's Research Data Assistance Center (ResDAC). However, it is important to keep in mind that the case study approach, which examines a small number of states in detail, is limited because the states cannot represent all PBHCI programs across the 50 states in which they have been implemented.
In consultation with SAMHSA and the HHS Office of the Assistant Secretary for Planning and Evaluation (ASPE), three states were selected for inclusion in this study based on two primary considerations. First, states with larger numbers of grantees in cohorts 1-5 were prioritized so that the study would have sufficient statistical power. Second, states with Medicaid claims data available from ResDAC or from the state Medicaid office were prioritized so that the analysis would cover as long a time period as possible.
Availability of Medicaid Claims Data
There is no standard mechanism through which researchers can obtain Medicaid claims data directly from states. Some states have offices that will provide data to outside entities but only for projects of particular interest to the state. Others will provide data to researchers for a wider range of projects but at considerable cost. Our team consulted broadly with ASPE, SAMHSA, peers at RAND, and other colleagues to clarify the process by which to obtain data from each of the eight candidate states, whether this project would be of interest to the state Medicaid office, whether we had a contact within the office to help advocate for the project, and the cost associated with processing the data request. As a result of this process, we learned that State 2 would only provide us with a pre-prepared data set that it had recently used for another purpose; this data set met some, but not all, of our analytic needs.
Utility of Medicaid Claims Data
We evaluated the potential utility of the eight candidate states' Medicaid data in two ways. First, we considered whether each data set would allow us to infer which consumers were served by PBHCI clinics by investigating whether the data had reliable clinic-level identifiers associated with individual claims. This approach, described in detail in the next section, was necessary, since we did not have Medicaid identification numbers (or other identifiers) for individuals served through PBHCI. We also considered whether PBHCI clinics were in close geographic proximity of one another, as clusters of clinics could potentially share consumers, services, and providers, making the identification of PBHCI consumers and services difficult to disentangle, or at least difficult to validate, using approaches other than clinic-level identifiers.
We also consulted with peers at RAND, ASPE, and other experts to determine whether services of interest (e.g., tobacco treatment, diabetic eye exam) would be identifiable in claims data, particularly in states with managed care programs, as specific service indicators (warranting a bundled payment) might not have been recorded or reported to the state. Some aggregate indicators in the pre-prepared State 2 claims data were problematic in this way.
Critically, we considered whether the state's Medicaid program covered sufficient individuals and services expected of PBHCI grantees and ruled out states where too few individuals or services would be represented in claims.
Policy and Implementation Issues Limiting the Generalizability of Findings
Finally, we considered the broader policy context of each of the eight candidate states that could affect our ability to draw conclusions about PBHCI impacts on consumers' quality and cost of care.
Based on these factors, and in close consultation with ASPE at all stages of this process, three states were selected for this analysis. PBHCI clinics are funded in annual cohorts, as shown in Table 2.1. Note that clinical services supported by PBHCI funds were scheduled to begin six months after the start of the grant (i.e., in February of the following calendar year).
Estimation of the impact of PBHCI programs requires data on services provided by the clinics and comparison clinics for the time period immediately preceding the beginning of the PBHCI program and time periods during which the program was being implemented. Medicaid claims data for these time periods were sought from two sources: the state Medicaid agencies in the study states and ResDAC, the office that provides standardized claims data sets to researchers under contract with CMS. These two sources are similar; the ResDAC data are in fact standardized data sets derived from the individual state claims data sets. ResDAC Medicaid data are known as Medicaid Analytic Extracts (MAX) files.
The reasons for preferring one of these two data sources over the other are related to timeliness and ease of use; data obtained directly from the state Medicaid agencies are timelier but much more difficult to use than the data from ResDAC. The state data are timelier because they are in a "rawer" form. The data are provided in the format in which they are used for internal state purposes. The data become available through ResDAC only after they have been submitted by the state and processed into a format that is standard across states. Additionally, the state data are much more complex to use because each state uses its own data systems, so knowledge regarding analysis of data from one state does not transfer to other states. Moreover, the state data must be obtained on a state-by-state basis through separate negotiations with each state.
For each of the three states in this study, we obtained both state and CMS data sets. Figure 2.1 shows how the time periods covered by the state and CMS data sets correspond to the time periods during which the PBHCI programs were implemented for each state. For State 1, the state data set covered the entire period from 2009, which is one year prior to the beginning of PBHCI Cohort 1, through 2014, which is two years after the beginning of PBHCI Cohort 5. However, the ResDAC data for State 1 are extremely limited, with no data available after 2010. It is not clear that these data will ever be available in standardized MAX data sets. There is one limitation to the State 1 state data, which is important to note. The data do not include information on inpatient stays with a primary diagnosis of a BH condition. For this reason, outcome measures that involve BH inpatient stays cannot be calculated for State 1.
Data are available directly from the state of State 2 for the period covering 2010-2012. However, the data set available from the state is aggregated at the consumer-year level, meaning that individual services and service dates cannot be identified. ResDAC data for State 2 cover the period from 2009 to 2012, enabling analysis of the clinics that began PBHCI services in 2010 and 2011. According to ResDAC, no data will be released for State 2 for 2013. As with the State 1 data mentioned above, it is not clear that these data will ever be available in standardized MAX data sets.
Data on State 3 are available from the state covering the period from 2009 through 2013 and from ResDAC covering 2009-2011. Using data from either source, we were able to examine the impact of the PBHCI program that began in 2011. The State 3 All-Payer Claims Database was obtained for the years 2009-2013. However, after substantial effort, we were not able to reliably identify Medicaid recipients in the state-provided data set and were thus unable to analyze the later PBHCI cohorts in the state as planned.
The years included in analyses of the impact of each cohort of clinic grantees within states are summarized in Table 2.2.
In the absence of consumer identifiers that could be used to select data on specific individuals who were enrolled in a PBHCI program, we used clinic identifiers to select the entire adult caseloads of clinics that received PBHCI grants. The clinics were identified using clinic-level and provider-level identification numbers, known as National Provider Identifiers (NPIs), in the claims data sets. The NPIs were obtained directly from each clinic to ensure that all the providers who provided services to the PBHCI enrollees were included. All consumers with at least one visit to the clinic with a diagnosis of an SMI during a year were considered members of that clinic's caseload for that year.
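The caseload-selection rule just described can be sketched as a simple filter over claims. The NPI values, diagnosis codes, and claim layout below are illustrative placeholders only, not the study's actual specifications.

```python
# Hypothetical inputs: NPIs supplied by a study clinic and a stand-in
# set of SMI diagnosis code prefixes (e.g., ICD-9 schizophrenia and
# mood disorder chapters).
CLINIC_NPIS = {"1234567890", "1234567891"}
SMI_CODE_PREFIXES = {"295", "296"}

# Each claim: (person_id, year, billing_npi, primary_diagnosis).
claims = [
    ("A", 2011, "1234567890", "295.30"),
    ("B", 2011, "9999999999", "296.20"),  # billed by a different clinic
    ("C", 2011, "1234567891", "401.9"),   # visit without an SMI diagnosis
]

caseload = {}
for person, year, npi, dx in claims:
    # One visit to a study clinic with an SMI diagnosis places the
    # person in that clinic's caseload for that year.
    if npi in CLINIC_NPIS and dx[:3] in SMI_CODE_PREFIXES:
        caseload.setdefault(year, set()).add(person)
print(caseload)  # {2011: {'A'}}
```

In practice the same pass would be run over each year of claims, producing one caseload per clinic per year as described above.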
Defining the PBHCI sample on the basis of clinic caseloads, rather than individual enrollee identification numbers, introduces a potential for misclassification of consumers who received no PBHCI services as having been exposed to the PBHCI program. Such misclassification will attenuate any estimated impact of PBHCI because the change among enrollees would be spread across a larger group of individuals than were actually enrolled. This downward bias due to the sampling approach should be kept in mind when interpreting results.
On the other hand, there are also methodological advantages to defining the group exposed to PBHCI in this way. First, all of the consumers who received care at a PBHCI clinic were potentially affected by the program, whether or not they were enrolled. Thus, comparison at the clinic level--that is, between entire clinic caseloads--is an appropriate level of analysis. Second, it is likely that there were complex factors affecting selection of consumers into the program that would introduce bias when comparing a group of PBHCI enrollees with another group selected through claims data. For instance, PBHCI programs might have made an effort to enroll consumers with high PH needs. Alternately, consumers with high PH needs may have sought out the program. Given the coarseness of Medicaid claims data, adjusting effectively for these selection processes within clinic caseloads might not be possible.
A group of comparison clinics was selected for each cohort of clinics within each state. The goal was to identify a set of comparable mental health clinics within each state based on information in the claims data sets. To that end, we examined four claims-based provider characteristics:
Pattern of utilization: CMHCs are distinctive in that the people they treat tend to have frequent regular visits for psychotherapy, rehabilitation, or medication reviews. We characterized utilization patterns simply as the average number of visits per unique person over the study period.
Proportion of claims with a primary diagnosis of a BH condition: We expect that of the claims submitted by any specialty mental health clinic, the overwhelming majority would have a BH condition as a primary diagnosis.
Proportion of claims with a primary diagnosis of schizophrenia: Even among specialty mental health providers, there is likely to be variation in the extent to which they treat SMI in general and schizophrenia in particular.
Caseload size: Similar-sized clinics are likely to respond similarly to factors affecting care delivery. Selecting based on size is also important to ensure adequate sample size within each comparison clinic.
Within each state, a group of comparison clinics was selected that matched the PBHCI clinics in that state with respect to the characteristics listed above. Table 2.3 summarizes the results of the selection process with respect to the number of clinics selected as comparisons for each group of PBHCI clinics. In State 1, we selected two groups of comparison clinics, one set of five clinics for the Cohort 1 clinic and another set of five clinics for the Cohort 5 clinics. In State 2, we selected one group of comparison clinics for both the Cohort 1 and Cohort 3 clinics, since the two cohorts were close in time, separated by only one year. Consumers with at least one claim from a clinic and with a diagnosis of SMI were counted as members of that clinic's caseload for that year. Any consumers who received services from both a PBHCI clinic and a comparison clinic were considered PBHCI clinic consumers.
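One way to operationalize matching on the four claims-based characteristics above is a scaled distance between clinic profiles. The sketch below is a hypothetical illustration: the clinic profiles, scaling constants, and nearest-neighbor rule are assumptions for demonstration, not the study's actual selection procedure.

```python
# Clinic profile: (visits per person, share of claims with a BH primary
# diagnosis, share with a schizophrenia diagnosis, caseload size).
pbhci_clinics = {"P1": (12.0, 0.95, 0.40, 800)}
candidates = {
    "C1": (11.0, 0.93, 0.38, 750),
    "C2": (3.0, 0.50, 0.05, 200),  # profile of a general medical clinic
}

def distance(a, b):
    # Rescale each characteristic so no single one dominates the sum.
    scales = (10.0, 1.0, 1.0, 1000.0)
    return sum(((x - y) / s) ** 2 for x, y, s in zip(a, b, scales))

target = pbhci_clinics["P1"]
best = min(candidates, key=lambda name: distance(candidates[name], target))
print(best)  # "C1" is the closer match on all four characteristics
```

Under this rule, a clinic with few visits per person and a low share of BH diagnoses (like "C2") would never be selected as a comparison for a CMHC-like PBHCI grantee, which is the intent of matching on these characteristics.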
As expected, the numbers of individuals identified as members of the clinic caseloads in the Medicaid claims were larger than the numbers of individuals enrolled in the PBHCI programs (Table 2.4). This was expected because the PBHCI clinics targeted only a subset of their consumers for enrollment and because they were not successful in enrolling all of the consumers they targeted.
Three types of outcomes were examined: measures of emergency department and inpatient utilization, costs of care, and quality indicators.
Several considerations went into the selection of utilization and quality measures. (See Appendix A for a more comprehensive list of measures considered for study inclusion and sources of measures.) First, we gave priority to measures that have been approved by the National Quality Forum (NQF), which serves as a clearinghouse for carefully specified and vetted measures of the quality of health care. We also considered measures developed by the New York State Office of Mental Health's Psychiatric Services and Clinical Knowledge Enhancement System (PSYCKES), for which detailed specifications for Medicaid data are available (PSYCKES Medicaid, undated). Second, measures were selected to take advantage of the distinct strengths of the claims data by measuring care provided outside of the PBHCI clinic. The selected measures vary to some extent in this regard; for example, PBHCI grantees were more likely to provide diabetes monitoring directly, while relying on referrals for cancer screenings.
Utilization of Emergency Department and Inpatient Services
The utilization measures used in the study are as follows.
Emergency department visit for a BH condition (BH emergency department visit).
Emergency department visit for a PH condition (PH emergency department visit).
Inpatient stay for a BH condition (not calculated for State 1) (BH inpatient stay).
Inpatient stay for a PH condition (PH inpatient stay).
Three or more emergency department visits for a BH condition (three or more BH emergency department visits).
Four or more emergency department visits or inpatient stays for a PH condition (four or more PH emergency department/inpatient).
Four or more emergency department visits or inpatient stays for any condition (four or more any emergency department/inpatient).
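Each utilization measure above reduces to a binary flag per consumer-year once claims have been tallied. A minimal sketch, assuming hypothetical per-consumer annual visit counts (the field names are invented for illustration):

```python
# Binary utilization indicators from annual visit counts, following the
# measure definitions above. Field names are assumptions for illustration.

def utilization_indicators(person):
    return {
        "bh_ed_visit": person["bh_ed"] >= 1,
        "ph_ed_visit": person["ph_ed"] >= 1,
        "3plus_bh_ed_visits": person["bh_ed"] >= 3,
        "4plus_ph_ed_ip": person["ph_ed"] + person["ph_ip"] >= 4,
        "4plus_any_ed_ip": (person["bh_ed"] + person["ph_ed"] +
                            person["bh_ip"] + person["ph_ip"]) >= 4,
    }

# A consumer with 1 BH ED visit, 2 PH ED visits, and 2 PH inpatient stays
flags = utilization_indicators({"bh_ed": 1, "ph_ed": 2, "bh_ip": 0, "ph_ip": 2})
```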
As noted, because of the lack of data in the State 1 state Medicaid data set on inpatient stays with a primary diagnosis of a BH condition, some measures could not be calculated for the State 1 cohorts.
Costs of care were calculated directly from the claims using information on actual payments by Medicaid. These costs exclude copayments paid by recipients of care or payments made by other agencies or programs such as Medicare. Four cost outcomes were defined: (1) total costs; (2) costs for outpatient services; (3) costs for emergency department services; and (4) costs for inpatient services.
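The four cost outcomes amount to summing Medicaid payments within service categories. A minimal sketch, with invented claim settings and payment amounts (the `cost_outcomes` helper and its field names are assumptions, not the study's actual processing code):

```python
# Aggregate Medicaid payments into the four cost outcomes defined above.
# Each claim is (setting, amount_paid_by_medicaid); values are illustrative
# and, as in the study, exclude copayments and non-Medicaid payers.

def cost_outcomes(claims):
    totals = {"outpatient": 0.0, "emergency_department": 0.0, "inpatient": 0.0}
    for setting, paid in claims:
        totals[setting] += paid
    totals["total"] = sum(totals.values())  # total = sum of the three components
    return totals

costs = cost_outcomes([
    ("outpatient", 100.0),
    ("emergency_department", 250.0),
    ("outpatient", 50.0),
])
```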
The quality measures were selected from among well-described claims-based measures to assess receipt of preventive or integrated care services. The measures do not assess overall quality of care delivered at the PBHCI or comparison clinics. Rather, they are meant to reflect the potential impact of PBHCI on care that consumers would have received at outside general medical clinics. To receive these services, consumers would need not only to be referred to a provider but also to follow through with the referral and receive the service from that provider. Because these measures assess care that depends on follow-through from both consumers and external providers, performance on these measures constitutes a stringent test of the impact of PBHCI. The following quality measures were examined:
Diabetes monitoring: the proportion of individuals with a diagnosis of diabetes who have had at least one claim for an HbA1c test during the year.
Flu vaccine: the proportion of all individuals in the caseload with a claim for a flu vaccination during the year.
Breast cancer screening: the proportion of women aged 50-74 years with a claim for a mammogram during the year.
Cervical cancer screening: the proportion of women aged 24-64 years with a claim for cervical cancer screening during the year.
Colorectal cancer screening: the proportion of individuals aged 51-75 years with a claim for a colorectal cancer screening procedure during the year.
Any outpatient PH visit: the proportion of all individuals with a claim for one or more outpatient claims with a primary diagnosis of a PH condition.
Follow-up after hospital discharge (for mental illness): the proportion of individuals discharged from an inpatient stay (for mental illness) with a BH visit in the subsequent 30 days.
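As an illustration, the diabetes-monitoring measure can be computed from a clinic caseload as the share of consumers with a diabetes diagnosis who have at least one HbA1c claim during the year. The record fields and the `diabetes_monitoring_rate` helper below are assumptions for exposition:

```python
# Claims-based diabetes monitoring rate: among consumers with a diabetes
# diagnosis, the proportion with at least one HbA1c test claim in the year.

def diabetes_monitoring_rate(caseload):
    denominator = [p for p in caseload if p["has_diabetes_dx"]]
    if not denominator:
        return None  # measure undefined when no one has a diabetes diagnosis
    numerator = sum(1 for p in denominator if p["hba1c_claims"] >= 1)
    return numerator / len(denominator)

caseload = [
    {"has_diabetes_dx": True,  "hba1c_claims": 2},   # counted, monitored
    {"has_diabetes_dx": True,  "hba1c_claims": 0},   # counted, not monitored
    {"has_diabetes_dx": False, "hba1c_claims": 0},   # not in denominator
]
rate = diabetes_monitoring_rate(caseload)
```

The other measures follow the same numerator/denominator pattern with different eligibility rules (e.g., age and sex restrictions for the cancer screenings).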
Our primary approach is to estimate the impact of the PBHCI program using a DD model, which compares the temporal trend across the pre-PBHCI and PBHCI periods between the PBHCI and comparison consumers. DD estimation has become increasingly popular in the empirical literature on the effects of policy interventions (Abadie, 2005; Bertrand, Duflo, and Mullainathan, 2001; Card and Krueger, 2000; Conley and Taber, 2005). Its appeal comes from its simplicity and its potential to mitigate two sources of bias: permanent differences between the PBHCI consumers and the comparison group, and secular trends unrelated to the PBHCI intervention that would confound a simple pre-post comparison (Imbens and Wooldridge, 2007). The DD model is recommended by CMS for longitudinal evaluations (Howell, Conway, and Rajkumar, 2015).
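The DD logic can be illustrated with a simple numeric sketch. The rates below are invented: the estimated program effect is the pre-to-post change among PBHCI consumers minus the change among comparison consumers over the same period.

```python
# Difference-in-differences with invented outcome rates: under the
# parallel-trends assumption, subtracting the comparison group's change
# nets out secular trends unrelated to the intervention.

pre_pbhci, post_pbhci = 0.30, 0.24   # e.g., share with frequent PH ED/IP use
pre_comp, post_comp = 0.28, 0.27

dd_effect = (post_pbhci - pre_pbhci) - (post_comp - pre_comp)
```

Here the PBHCI group improved by 6 percentage points while the comparison group improved by 1, so the DD estimate attributes a 5-point reduction to the program.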
We examined three types of outcome measures: utilization of emergency department and inpatient hospital services, costs of care, and health care quality. The utilization and quality measures are binary indicators--for example, whether or not an individual had four or more emergency department visits or inpatient stays for a PH condition. The cost outcomes include both binary indicators of whether or not an individual used a type of service (e.g., an inpatient stay) and continuous measures of total costs of care (e.g., the total cost for inpatient stays among individuals with an inpatient stay). Cost is calculated as a continuous variable indicating the amount paid for services by Medicaid.
For dichotomous variables, we estimated a series of logistic models of the form (M1):

    logit[P(Y_ijt = 1)] = β0 + β1(PC_ijt) + β2(PBHCI_ij) + β3(T_t) + β4(T_t × PBHCI_ij)   (M1)

where:

Y_ijt is the value of the study outcome for the ith consumer in the jth state at time t (t = 0 for the pre-PBHCI period and t = 1 for the PBHCI period);

PC is a vector of consumer factors;

PBHCI is a dichotomous variable indicating the PBHCI versus comparison group;

T is a dichotomous variable indicating the pre-PBHCI versus PBHCI period; and

T × PBHCI is the interaction between the PBHCI program and time, whose coefficient, β4, indicates the DD effect of PBHCI program implementation.
Since the individuals are grouped within clinic caseloads, we specified the model with standard errors clustered at the clinic level to account for within-group correlations.
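In the saturated version of (M1) with no consumer covariates, exp(β4) equals the ratio of the PBHCI-versus-comparison odds ratio in the PBHCI period to the same odds ratio in the pre-period. This hypothetical sketch computes that quantity directly from invented cell probabilities:

```python
# The interaction odds ratio exp(beta4) from a saturated logistic DD model,
# computed directly from (invented) outcome probabilities in the four
# group-by-period cells.

def odds(p):
    return p / (1 - p)

prob = {
    ("comparison", "pre"): 0.20, ("comparison", "pbhci"): 0.22,
    ("pbhci", "pre"): 0.25,      ("pbhci", "pbhci"): 0.20,
}

or_pre = odds(prob[("pbhci", "pre")]) / odds(prob[("comparison", "pre")])
or_post = odds(prob[("pbhci", "pbhci")]) / odds(prob[("comparison", "pbhci")])
dd_odds_ratio = or_post / or_pre   # exp(beta4); < 1: PBHCI reduced the outcome
```

This mirrors how the odds ratios in Tables 3.2 and 3.4 should be read: values below one indicate that PBHCI reduced the outcome relative to the comparison trend.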
To examine health care costs, we note an important difference across types of costs. While every consumer in our study sample has annual outpatient costs and total annual costs, only a small proportion of them incurred annual costs for emergency department or inpatient care. To analyze annual outpatient costs or total annual costs, we used generalized linear models with a Gaussian distribution and log link, following the recommendations of Manning and Mullahy (2001), as shown in (M2):

    Cost_ijt = exp[β0 + β1(PC_ijt) + β2(PBHCI_ij) + β3(T_t) + β4(T_t × PBHCI_ij)] + ε_ijt   (M2)

where ε_ijt is the error term and the other variables are defined as in (M1). As in (M1), (M2) was specified with standard errors clustered at the clinic level.
For either annual emergency department costs or annual inpatient costs, we estimated a two-part model, with the first part examining the probability of using emergency department (or inpatient) care and the second part examining the emergency department (or inpatient) costs among users. The first part is similar to (M1), while the second part is specified as a generalized linear model with a Poisson distribution and log link, as recommended by Manning and Mullahy (2001).
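The two parts combine multiplicatively: the expected cost per caseload member is the fitted probability of any use times the fitted mean cost among users. A minimal numeric sketch, with invented fitted values:

```python
# Two-part model logic for ED (or inpatient) costs: part 1 gives the
# probability of any use, part 2 the mean cost among users, and their
# product is the expected cost per caseload member. Values are invented.

p_any_ed_use = 0.25        # part 1: fitted probability of any ED use
cost_per_ed_user = 1200.0  # part 2: fitted mean annual ED cost among users

expected_ed_cost = p_any_ed_use * cost_per_ed_user  # per consumer-year
```

This decomposition is why Table 3.3 reports separate "any use" and "cost per user" effects: PBHCI can move total costs through either pathway.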
The primary analysis was conducted using samples including all individuals seen at each clinic--that is, the entire clinic caseload for each year of the study. This approach allows for movement of individual consumers in and out of each caseload over time for both the PBHCI and comparison clinics. In most cases, the movement in and out of clinic caseloads was quite large: of the total number of consumers seen in the clinics during the pre-PBHCI and first post-PBHCI years, only about 50 percent were seen in both years. While defining the sample in this way provides the best test of the impact of PBHCI, which was implemented at the clinic level and targeted all consumers seen at the clinic, the approach also has limitations. In particular, many of the consumers seen in the PBHCI period were new consumers with limited PBHCI exposure. In addition, allowing movement into the clinic caseload introduced the possibility of confounding in the DD model, because the new consumers may have differed from the prior consumers. To address this limitation, we also conducted analyses focusing on the subset of continuous consumers--that is, those who were seen in the same clinic during both the pre-PBHCI and the post-PBHCI years.
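The continuously treated subsample is simply the intersection of each clinic's pre-PBHCI and implementation-year caseloads. A sketch with invented consumer IDs:

```python
# Restricting to continuously treated consumers: those seen in both the
# pre-PBHCI year and the first implementation year. IDs are invented.

pre_year_caseload = {"c1", "c2", "c3", "c4"}
first_pbhci_year_caseload = {"c2", "c3", "c5"}

continuous = pre_year_caseload & first_pbhci_year_caseload
retention = len(continuous) / len(pre_year_caseload)  # share of pre-year caseload retained
```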
FIGURE 2.1. Overlap Between PBHCI Grantee Clinical Activities and Available Billing Data
TABLE 2.1. Number of PBHCI Clinics in the States Chosen for Analysis, by Cohort

| Cohort | Funding Year | Clinical Service Start | States 1, 2, and 3 |
|---|---|---|---|
| 2 and 3 | 2010 | 2011 | 3 |
TABLE 2.2. Years of Data Included in Analyses of Each State

| State | Data Source | Time Period |
|---|---|---|
TABLE 2.3. Numbers and Sample Sizes During the Pre-PBHCI Year for the Comparison Clinics Selected for Each PBHCI Cohort Within Each State

| State | Data Source | PBHCI Cohort | Comparison Clinics | Pre-PBHCI Year Sample |
|---|---|---|---|---|
TABLE 2.4. Comparison of Consumer-Years(a) Enrolled in PBHCI with Consumer-Years Identified in Clinic Caseloads in Medicaid Claims Data, for PBHCI Implementation Years Included in Analyses

| Cohort | Year(s) Included | PBHCI(b) | Medicaid | Ratio of PBHCI to Medicaid |
|---|---|---|---|---|
| State 1, Cohort 1 | 2010-2013 | 1,079 | 2,379 | 0.45 |
| State 1, Cohort 5 | 2013 | 526 | 1,693 | 0.31 |
| State 2, Cohort 1 | 2010-2012 | 1,067 | 2,447 | 0.44 |
| State 2, Cohort 3 | 2011-2012 | 1,753 | 2,322 | 0.75 |
| State 3, Cohort 3 | 2011 | 168 | 3,216 | 0.05 |
This chapter presents the results of the DD analyses of the impact of the PBHCI programs on the outcomes described earlier. The analyses were conducted by state and, within states, by PBHCI cohort. This chapter reports results in which the data from the PBHCI implementation period are aggregated. Appendix B shows detailed year-by-year breakdowns of the results.
Table 3.1 describes the samples for each of the PBHCI cohorts and comparison groups. The sample sizes include all respondents in the analysis, both those who received services during the pre-PBHCI period and the PBHCI implementation period. The PBHCI sample sizes range from a low of 2,675 for Cohort 1 in State 2 to a high of 5,187 for Cohort 3 in State 3. The sample sizes are generally larger for the comparison groups, with the exception of Cohort 5 in State 1, where the comparison group sample is slightly smaller than the PBHCI sample. The total sample across all cohorts and states is 57,365, 70 percent of whom were seen in comparison group clinics.
There are differences between the PBHCI and comparison group samples across sex, age, and diagnosis, as indicated by the figures in bold in Table 3.1. Differences are relatively minor with respect to sex; they reach statistical significance in two cases (Cohort 1 in State 2 and Cohort 3 in State 3), but the differences are small in magnitude. In contrast, differences in age distribution between PBHCI and the comparison group are found for all cohorts, although the nature of the age differences is not consistent: the PBHCI cohort is more likely to be in the lower age category than the comparison group in three of the five cohorts. The PBHCI and comparison groups were also compared on the proportion of the sample with a diagnosis of schizophrenia (versus another SMI diagnosis). These proportions were nearly identical in two of the five cohorts, substantially higher in the PBHCI sample in two cohorts, and higher in the comparison sample in one cohort.
In four of the five comparisons, the PBHCI clinic caseloads included more consumers who were Black or Hispanic and fewer who were White. The exception, the State 3 Cohort 3 clinics, were very similar in race/ethnic composition. Race/ethnicity data should be interpreted with caution because of the large number who may identify with "other ethnicity," which may include misclassified or missing information. The two State 2 cohorts had nearly identical race/ethnicity distributions for both PBHCI and comparison clinic caseloads.
Results regarding the impact of PBHCI on emergency department and inpatient visits, shown in Table 3.2, are complex, with distinct patterns of utilization for behavioral and PH conditions. Overall, we observed a decrease in utilization that was most pronounced for high levels of emergency department and inpatient use for PH conditions. We discuss the results in Table 3.2 by type of care, starting with emergency department utilization.
No significant effects of PBHCI on the likelihood of having at least one emergency department visit for a BH condition were found for any of the PBHCI cohorts. In contrast, there were effects of PBHCI on emergency department visits for PH conditions for three of the cohorts: in two, there was a significant reduction in the likelihood of an emergency department visit, and in one, there was a significant increase.
Evidence regarding the impact of PBHCI on the likelihood of having one or more inpatient stays for a BH condition is also mixed. Of the three cohorts for which estimates were possible, PBHCI reduced the likelihood of an inpatient stay for a BH condition in two cases and increased it in one. The results are more consistent for inpatient stays for PH conditions: the odds ratios are lower than one, indicating a reduction in the likelihood of an inpatient stay, in four of the five cases, but reach statistical significance in only one, that for State 1 Cohort 5.
Results for the three high-use measures, one for BH emergency department, one for PH emergency department or inpatient, and one for any emergency department or inpatient, suggest a distinct effect of PBHCI on frequent use of emergency department and inpatient for PH conditions. The first measure is an indicator of having three or more emergency department visits for a BH condition. The effect of PBHCI on this measure is significant in only one case, State 2 Cohort 1, where PBHCI increased the likelihood of high use. Results regarding a similar measure, which includes inpatient visits for BH conditions, are very similar and not shown in the table.
The second measure is an indicator of four or more emergency department or inpatient visits for PH conditions. The odds ratios for this measure are less than one for all five cohorts, reaching significance in three of the cohorts. The third measure is a composite indicator of having four or more emergency department or inpatient visits for any type of condition. The odds ratios for the effect of PBHCI are all lower than one, reaching statistical significance in three of the five cohorts, State 1 Cohort 1, State 2 Cohort 1, and State 3 Cohort 3.
Cost of Medicaid Reimbursements
Results regarding the impact of PBHCI on costs of care to Medicaid are shown in Table 3.3. The table shows the impact on total costs for each cohort as well as specific components of the total cost, broken down into outpatient costs, emergency department costs, and inpatient costs. All sample members have some outpatient costs because at least one visit to a PBHCI or comparison clinic was required for inclusion in the study. Therefore, for outpatient costs, we estimate the impact of PBHCI on the average (log) cost per person across the entire caseload. However, emergency department and inpatient services are not used by everyone in the sample. For these outcomes, we estimate two effects of PBHCI, an effect on having any costs (versus no costs) and an effect on the average (log) cost per user. We discuss the results starting with the impact of PBHCI on total costs and then examining the impacts on the component costs for each cohort.
The results suggest that PBHCI reduced total costs per person in three of the five cohorts. In the remaining two cohorts, both in State 1, there were no significant effects of PBHCI on total costs per person. The pattern of effects across the cost components was similar for the two State 1 cohorts. In State 1 Cohort 1, there was a significant reduction in the likelihood of having an emergency department visit, which would result in reduced costs, but a countervailing effect of an increase in costs per emergency department user. In State 1 Cohort 5, the impact on use of the emergency department is not significant, but there is a significant increase in costs per user. Because of the absence of claims on BH inpatient stays in the State 1 data set, we have not included inpatient costs in the analysis for that state.
In contrast, the two State 2 cohorts have different patterns of effects of PBHCI on the cost components, indicating that the reduction in costs was achieved through different pathways. In State 2 Cohort 1, the reduction in total costs is driven by a reduction in the average outpatient cost per person. In this cohort, the reduction in outpatient costs is large enough to counterbalance increases in the likelihood of using the emergency department as well as an increase in the cost per emergency department user and an increase in the cost per inpatient user. In State 2 Cohort 3, the reduction in total costs is driven by a reduction in the likelihood of using the emergency department and a reduction in the cost per inpatient user. Finally, in State 3, the reduction in total costs is driven by a reduction in the per user costs for both outpatient and emergency department.
The DD estimates of the impact of PBHCI on quality of care measures are presented in Table 3.4. Only seven odds ratios out of 32 tested reach statistical significance, and there are few general patterns of significant effects that hold across multiple cohorts or states. In State 1, none of the effects reach statistical significance for either cohort. In State 2, there is a significant positive effect of PBHCI on diabetes monitoring in Cohort 3 and a significant positive effect of PBHCI on cervical cancer screening in Cohort 1. The results for State 3 stand out for the large negative effects of PBHCI across multiple measures. As noted in Chapter One, the arrangements that the State 3 clinic made for primary care services were disrupted during the first year of implementation of PBHCI. That disruption may have influenced the quality of preventive care received by the clinic's consumers.
Estimates of the impact of PBHCI on having a follow-up visit after a hospitalization for a mental health condition were positive in all three cohorts for which that measure was examined and reached statistical significance in two of those cases.
Supplemental Analysis of Continuously Treated Individuals
The analysis reported above examines the entire caseload of each clinic in each year. In that analysis, the individual consumers in the samples change from year to year, following the normal patterns of treatment retention in those clinics. There is also interest, for both methodological and substantive reasons, in the subsample of consumers who were treated continuously in both the pre-PBHCI year and the PBHCI implementation period. Methodologically, these consumers represent a constant population in both the PBHCI and comparison clinics and therefore allow for a more rigorous DD test of the impact of PBHCI. Substantively, there is particular interest in consumers with chronic behavioral conditions who are treated in the same specialty BH clinics over an extended period of time.
At the same time, the analysis restricted to consumers treated in both the pre-PBHCI and PBHCI implementation periods has important limitations. First, given the substantial turnover in clinic caseloads from year to year, the continuously treated sample is not representative of the entire target population of the PBHCI program. Table 3.5 shows the proportions of the samples retained in the first year of follow-up and over the entire period for which we have follow-up data: retention ranges from 54 percent to 90 percent in the first year and from 45 percent to 84 percent over the entire period. Second, also as a consequence of the high turnover rate, the sample size for the analysis of continuously treated consumers is substantially reduced, limiting statistical power relative to the full-sample analyses.
For this supplemental analysis, DD models were applied to samples from each cohort restricted to consumers with at least one visit during the pre-PBHCI year and one visit during the first PBHCI implementation year. Only the first year of PBHCI implementation could be examined because of small sample sizes of consumers treated continuously over longer periods of time. Three outcomes were examined to focus attention on the key findings from the main analysis: (1) four or more emergency department or inpatient visits for a PH condition; (2) four or more emergency department or inpatient visits for any condition; and (3) total costs per person.
Results of the supplemental analysis are shown in Table 3.6. For the two utilization measures, the results are similar to those in the main sample. For the two State 1 cohorts, the direction of effect and the magnitudes of the odds ratios are nearly identical, although the effect on four or more emergency department or inpatient visits for any condition becomes nonsignificant. For the State 2 cohorts, the effects of PBHCI are stronger; that is, they suggest larger impacts of PBHCI in reducing high emergency department and inpatient utilization than in the main analysis. For the State 3 cohort, the effects appear slightly weaker, although they are in the same direction, and the impact of PBHCI in reducing frequent emergency department and inpatient use for PH conditions remains statistically significant. None of the impacts of PBHCI on total Medicaid costs per person are statistically significant in the supplemental analysis, although in four of the five cases the direction of effect is the same as in the main analysis.
Results from the models of the impact of PBHCI are summarized across the five cohorts in Table 3.7. In the table, significant effects of PBHCI are marked by a minus sign if PBHCI was associated with a lower prevalence of the outcome or a plus sign if PBHCI was associated with a higher prevalence of the outcome. Three aspects of the summary table are notable. First, there were different effects of PBHCI on emergency department and inpatient utilization across states and across cohorts within states. However, the effect of PBHCI on emergency department and inpatient utilization for PH conditions was, generally, that PBHCI reduced utilization. Moreover, the effects were most consistent for frequent use of these services.
Second, the evidence suggests that PBHCI reduced or did not affect the total per consumer costs to Medicaid. However, the total impact of PBHCI on costs appears to result from direct impacts on different care pathways across the states and cohorts. For instance, in the State 1 cohorts, PBHCI reduced or did not change the likelihood that a consumer would use an emergency department, but it increased the average emergency department cost per user to Medicaid. In State 2 Cohort 1, PBHCI increased costs for emergency department and inpatient utilization, but those increases were more than offset by reductions in outpatient costs to Medicaid. This heterogeneity of effects deserves additional investigation.
Third, few of the quality of care measures for primary care services were affected by PBHCI in either direction, and there does not appear to be a pattern to the effects that were found. One exception is the pattern of reductions in quality indicators for State 3 Cohort 3.
TABLE 3.1. Sizes and Selected Characteristics of the PBHCI and Comparison Group Samples

| Characteristic | State 1, Cohort 1: PBHCI | State 1, Cohort 1: Comparison | State 1, Cohort 5: PBHCI | State 1, Cohort 5: Comparison | State 2, Cohort 1: PBHCI | State 2, Cohort 1: Comparison | State 2, Cohort 3: PBHCI | State 2, Cohort 3: Comparison | State 3, Cohort 3: PBHCI | State 3, Cohort 3: Comparison |
|---|---|---|---|---|---|---|---|---|---|---|
| Percentage with schizophrenia | 60 | 51 | 45 | 45 | 50 | 35 | 33 | 33 | 36 | 41 |

NOTE: Bold font indicates statistically significant differences between PBHCI and comparison samples.
TABLE 3.2. DD Estimates of the Impact of PBHCI on Utilization Measures, OR [95% CI]

| State, Cohort | n | BH ED Visit | PH ED Visit | BH IP Stay | PH IP Stay | 3+ BH ED Visits | 4+ PH ED/IP | 4+ Any ED/IP(a) |
|---|---|---|---|---|---|---|---|---|
| State 1, Cohort 1 | 12,307 | 1.03 [0.91, 1.17] | 0.82 [0.75, 0.91] | ---(b) | 0.79 [0.45, 1.39] | 0.81 [0.64, 1.02] | 0.92 [0.82, 1.03] | 0.89 [0.81, 0.98] |
| State 1, Cohort 5 | 6,353 | 0.87 [0.65, 1.16] | 0.99 [0.75, 1.32] | ---(b) | 0.60 [0.42, 0.87] | 1.12 [0.71, 1.76] | 0.95 [0.72, 1.24] | 0.91 [0.72, 1.17] |
| State 2, Cohort 1 | 13,574 | 0.85 [0.72, 1.01] | 1.18 [1.05, 1.31] | 0.82 [0.70, 0.96] | 1.02 [0.88, 1.17] | 2.67 [1.95, 3.65] | 0.78 [0.65, 0.94] | 0.83 [0.69, 0.99] |
| State 2, Cohort 3 | 14,296 | 1.05 [0.95, 1.15] | 0.96 [0.94, 0.98] | 1.45 [1.19, 1.76] | 0.98 [0.86, 1.10] | 1.05 [0.77, 1.45] | 0.88 [0.78, 0.99] | 0.95 [0.89, 1.03] |
| State 3, Cohort 3 | 10,834 | 0.71 [0.43, 1.16] | 0.81 [0.65, 1.00] | 0.68 [0.52, 0.90] | 0.85 [0.66, 1.09] | 0.51 [0.19, 1.42] | 0.52 [0.38, 0.72] | 0.60 [0.44, 0.83] |

NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05. a. Estimates of 4 or more for any ED/IP for State 1 do not include BH IP stays. b. Fewer than 5 events in the PBHCI clinic caseloads during the pre-PBHCI year.
TABLE 3.3. DD Estimates of the Impact of PBHCI on Medicaid Costs

| State, Cohort | Outpatient: Cost Per Person | ED: Any Use | ED: Cost Per User | IP: Any Use | IP: Cost Per User | Total Cost(a) Per Person |
|---|---|---|---|---|---|---|
| State 1, Cohort 1 | 0.23 [-0.28, 0.75] | 0.87 [0.78, 0.97] | 0.53 [0.34, 0.71] | --- | --- | 0.15 [-0.37, 0.67] |
| State 1, Cohort 5 | 0.01 [-0.19, 0.20] | 0.94 [0.71, 1.23] | 0.70 [0.48, 0.91] | --- | --- | -0.02 [-0.31, 0.28] |
| State 2, Cohort 1 | -0.31 [-0.40, -0.23] | 1.35 [1.19, 1.52] | 0.11 [-0.01, 0.23] | 0.97 [0.83, 1.13] | 0.40 [0.29, 0.50] | -0.24 [-0.31, -0.17] |
| State 2, Cohort 3 | -0.05 [-0.16, 0.07] | 0.95 [0.92, 0.98] | -0.02 [-0.10, 0.07] | 1.09 [0.94, 1.26] | -0.51 [-0.82, -0.21] | -0.10 [-0.16, -0.04] |
| State 3, Cohort 3 | -0.05 [-0.09, -0.02] | 0.88 [0.71, 1.10] | -0.65 [-1.04, -0.27] | 1.11 [0.87, 1.41] | -0.13 [-0.37, 0.11] | -0.06 [-0.08, -0.05] |

NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05.
TABLE 3.4. Impacts of PBHCI on Quality of Care Measures Based on DD Model

| State, Cohort | Diabetes Monitoring | Flu Vaccination | Breast Cancer Screening | Cervical Cancer Screening |
|---|---|---|---|---|
| State 1, Cohort 1 | n=1,765; 1.15 [0.87, 1.50] | n=12,307; 0.79 [0.58, 1.08] | n=4,361; 1.07 [0.76, 1.50] | n=5,993; 1.18 [0.82, 1.70] |
| State 1, Cohort 5 | n=820; 1.12 [0.70, 1.79] | n=6,353; 0.72 [0.43, 1.23] | n=2,384; 0.89 [0.50, 1.60] | n=3,268; 0.71 [0.47, 1.07] |
| State 2, Cohort 1 | n=1,658; 1.04 [0.81, 1.33] | ---(a) | n=3,970; 1.02 [0.89, 1.18] | n=7,678; 1.31 [1.24, 1.38] |
| State 2, Cohort 3 | n=1,588; 1.16 [1.09, 1.24] | n=13,848; 1.25 [0.91, 1.70] | n=3,939; 1.01 [0.93, 1.10] | n=8,285; 1.10 [0.96, 1.26] |
| State 3, Cohort 3 | n=2,006; 0.33 [0.24, 0.45] | n=10,376; 0.24 [0.09, 0.64] | n=4,136; 0.28 [0.09, 0.83] | n=5,488; 0.35 [0.21, 0.59] |

| State, Cohort | Colorectal Cancer Screening | Any Outpatient PH Visit | Follow-up After Hospitalization |
|---|---|---|---|
| State 1, Cohort 1 | n=4,324; 1.00 [0.85, 1.18] | n=12,307; 1.14 [1.04, 1.26] | --- |
| State 1, Cohort 5 | n=2,306; 0.98 [0.51, 1.89] | n=6,353; 0.73 [0.23, 2.34] | --- |
| State 2, Cohort 1 | n=3,476; 0.84 [0.67, 1.06] | n=13,574; 1.16 [0.97, 1.39] | n=876; 2.83 [1.53, 5.24] |
| State 2, Cohort 3 | n=3,294; 0.76 [0.55, 1.05] | n=14,296; 1.10 [0.95, 1.28] | n=594; 1.57 [0.59, 4.17] |
| State 3, Cohort 3 | n=5,241; 0.66 [0.38, 1.13] | n=10,834; 0.87 [0.69, 1.11] | n=712; 2.25 [1.58, 3.21] |

NOTE: Cells show the measure-specific n and the OR [95% CI]. Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05. a. Fewer than 5 events in the PBHCI clinic caseloads during the pre-PBHCI year.
TABLE 3.5. Continuously Treated Sample as a Proportion of the Pre-PBHCI Sample

| State (Cohort) | Years of Follow-up | 1 Year of Follow-up (%) | Complete Sample (%) |
|---|---|---|---|
| State 1 (1) | 4 | 80 | 45 |
| State 1 (5) | 1 | 77 | 77 |
| State 2 (1) | 3 | 54 | 39 |
| State 2 (3) | 2 | 90 | 84 |
| State 3 (3) | 1 | 59 | 59 |
TABLE 3.6. Analysis in Samples Restricted to Continuously Treated Consumers

| State, Cohort | n | 4+ PH ED/IP | 4+ Any ED/IP | Total Cost Per Person |
|---|---|---|---|---|
| State 1, Cohort 1 | 3,122 | 1.0 [0.8, 1.2] | 0.9 [0.8, 1.0] | 0.16 [-0.05, 0.37] |
| State 1, Cohort 5 | 4,556 | 0.9 [0.7, 1.0] | 0.9 [0.8, 1.0] | 0.15 [-0.09, 0.39] |
| State 2, Cohort 1 | 2,720 | 0.6 [0.5, 0.7] | 0.6 [0.5, 0.8] | -0.04 [-0.12, 0.04] |
| State 2, Cohort 3 | 7,534 | 0.8 [0.7, 0.9] | 0.9 [0.8, 1.0] | 0.00 [-0.10, 0.11] |
| State 3, Cohort 3 | 5,640 | 0.7 [0.6, 0.8] | 0.8 [0.7, 1.1] | -0.05 [-0.10, 0.00] |
TABLE 3.7. Summary of DD Results Across States and Cohorts

| Outcomes | State 1, Cohort 1 | State 1, Cohort 5 | State 2, Cohort 1 | State 2, Cohort 3 | State 3, Cohort 3 |
|---|---|---|---|---|---|
| BH ED visit | | | | | |
| PH ED visit | – | | + | – | |
| BH IP stay | N/A | N/A | – | + | – |
| PH IP stay | | – | | | |
| 3+ BH ED visits | | | +* | | |
| 4+ PH ED/IP | | | – | – | – |
| 4+ Any ED/IP | – | | – | | – |
| ED cost per user | + | + | | | – |
| IP cost per user | N/A | N/A | + | – | |
| Breast cancer screening | | | | | – |
| Cervical cancer screening | | | + | | – |
| Colorectal cancer screening | | | | | |
| Any outpatient PH visit | | | | | |
| Follow-up after hospitalization for mental illness | N/A | N/A | + | | + |

* Unstable estimate due to small sample size in the pre-PBHCI year.
– = significant negative effect of PBHCI (i.e., PBHCI reduced the outcome).
+ = significant positive effect of PBHCI (i.e., PBHCI increased the outcome).
The prior PBHCI evaluation examined the impact of the program on care delivery and participant health status. However, it was limited to indicators collected directly from PBHCI grantees and three control clinics. The goal of this study was to extend the evaluation using a research strategy that was not possible at the time of the prior evaluation: analysis of Medicaid claims. The advantage of claims as a data source is that they include all health services that were reimbursed by Medicaid, including services provided by other clinics, emergency departments, and inpatient facilities. Despite some limitations in their coverage, claims are the best source of information on the Medicaid-covered health care services that individuals receive. An analysis of all health care services is particularly important for understanding the impact of PBHCI, since the program is designed to have systemic effects--that is, to produce shifts in the types and locations of care, not simply to fund direct services. Claims provide a unique opportunity to observe the effects of the program on care provided outside of the PBHCI clinics, including costly and overutilized services such as visits to the emergency department.
The claims have the additional advantage of including information on a large population that did not participate in the PBHCI program. These data can be used to account for secular trends that affect service use independent of PBHCI using a DD approach, as recommended for longitudinal policy evaluation studies (Dimick and Ryan, 2014; Howell, Conway, and Rajkumar, 2015).
For this report, we used Medicaid claims data to examine the impact of PBHCI on utilization of emergency department and inpatient services, total Medicaid costs per patient, and quality of care. The findings, which we discuss in greater detail in the next section, are heterogeneous across the cohorts, but they include evidence that PBHCI can be successful in producing positive changes in utilization patterns. In particular, there is evidence that, in some of the cohorts, PBHCI reduced frequent utilization of emergency department and inpatient services and reduced total per person costs of care to Medicaid. While there was considerable variation in these effects across cohorts, there were no cohorts in which PBHCI significantly increased total per person costs of care to Medicaid. Moreover, even among the cohorts in which PBHCI reduced per person costs of care to Medicaid, there was variation in the specific costs components that were affected.
Utilization of Emergency Department and Inpatient Services
Medicaid claims data provide particularly valuable information on utilization of emergency department and inpatient services because the claims for these services are submitted by external providers who are unaffected and unfunded by the PBHCI program. The findings for these measures are complex, and they vary across states and cohorts within states. However, the pattern of results suggests that, in some cohorts, PBHCI was successful in one of the core aims of improving primary care services: shifting care for PH conditions away from emergency department and inpatient services. In three of the five cohorts, PBHCI reduced the likelihood that a consumer would have an emergency department visit, and in three cohorts, PBHCI reduced the likelihood that a consumer would have four or more emergency department or inpatient visits in a year. The results indicate that PBHCI may have some benefit for frequent users of emergency department and inpatient services. For instance, even in the one cohort in which PBHCI increased the likelihood that a consumer would have an emergency department visit, the program reduced the likelihood of being a frequent user.
The mechanisms of the impact of PBHCI on emergency department and inpatient utilization deserve more in-depth exploration in future studies. Using a mixture of qualitative and quantitative methods, studies could examine whether PBHCI services replace emergency department visits among PBHCI enrollees or whether they have a clinical impact that reduces the need for emergency care by attending to chronic disease problems, particularly among enrollees with a history of frequent emergency department utilization. There may be important lessons regarding the effective organization of clinical services that could be generalized from clinics that are successful in this area. Studies could also clarify why PBHCI clinics were less successful in improving quality of care for PH conditions than they were in reducing frequent emergency department and inpatient utilization for those conditions.
One potential explanation for this pattern is that improvements in these two areas involve different clinical activities with distinct functions. Making primary care services available in a mental health clinic may reduce emergency department and inpatient utilization by providing an alternative venue for addressing relatively minor problems. Consumers who can access primary care through their mental health clinic for acute ambulatory issues at an earlier stage (e.g., fever, cough) may not need more intensive services later, and those receiving care in an ambulatory setting rather than an emergency department may also be less likely to end up with an avoidable inpatient stay. PBHCI may also, in some cases, change the culture of the mental health clinic with respect to attending to PH issues; opening up discussions with patients about their general health may reduce anxiety and foster more appropriate use of health care services. Improving quality of care for PH conditions, by contrast, requires coordination with primary care providers, follow-through on the part of consumers, and provision of quality care by external providers. These barriers to quality PH care may be more challenging for PBHCI clinics to overcome.
Cost of Care
The overall impacts of PBHCI on Medicaid costs were either neutral or cost saving. However, the impacts of the program varied across states and cohorts, so it is not possible to describe a "typical" effect of the grants on costs. In one cohort, a decrease in outpatient costs was large enough to counterbalance increases in emergency department and inpatient costs, producing a net reduction in total costs to Medicaid. In another cohort, a reduction in total costs was achieved through small reductions in each category of costs. In one cohort, we found that PBHCI consumers were less likely to have one or more visits to an emergency department, but the cost per user of emergency department services was higher. This pattern is consistent with the suggestion that the PBHCI clinics successfully substituted for emergency department visits for mild illnesses or for receipt of other services potentially available through emergency departments, such as food, shelter, or psychosocial crisis intervention. In Krupski et al.'s (2016) evaluation of PBHCI costs in two Seattle-area clinics, the clinic that had been offering integrated care for several years before receiving the PBHCI grant showed reduced inpatient hospitalization and associated overall costs, while the newer integrated clinic showed no such effect, suggesting that program maturity--including the amount of time that consumers have been receiving integrated medical care--may play a role in determining if and when cost savings from inpatient care are realized. At this time, however, we do not have clear mechanisms that explain the range of patterns of PBHCI effects on costs observed here.
One additional point regarding the timing of the impact of PBHCI on costs of care should be noted. In this study, we were able to examine PBHCI impacts over 1-4 years. This captures only those impacts on cost that occur within a relatively short period of exposure to PBHCI; impacts on health care costs could potentially extend across a much longer time span.
Quality of Care

The prior PBHCI evaluation found that the program was associated with improvements in consumer access to primary care. The claims analysis provides a different and more detailed perspective on PBHCI effects on quality of care because it expands the scope of observation to include all services for which claims were submitted. Since PBHCI primary care services provided at BH locations were not intended to be comprehensive, we would expect that, for some services, only the claims data would reveal the impact of PBHCI. In particular, services that were beyond the scope of most PBHCI programs, such as mammography, would only be recorded in the claims.
However, there is also uncertainty in the claims that stems from the fact that PBHCI clinics may not have been reliably billing for all of the PH services that they provided. Grantee quarterly reports describe difficulties in establishing claims submission processes in some clinics that lasted as long as several years, although most clinics were submitting some claims within the first year of program implementation. Anecdotal information from the first evaluation also suggested that grantees may have used grant funds (instead of billing) to cover the costs of some services provided; for example, if they expected claims to be denied (e.g., because of caps on allotted services, such as bloodwork) or if reimbursement rates were perceived as inadequate to compensate for the service provided.
For this reason, the results regarding quality of care for PH conditions should be interpreted with considerable caution. In particular, there may be substantial missing data in the subset of clinics where services, such as diabetes monitoring, were provided but no claims were submitted. At the same time, the measures are more valid for services that lie outside of the scope of services that were typically covered by the PBHCI grants, such as screening for breast or colon cancer. For PBHCI clinics with colocated primary care, these services would typically have been provided by an outside clinic that would have billed Medicaid directly. PBHCI would have had an impact on these measures if the program was successful in referring enrolled consumers to the appropriate providers and if the consumers followed up with those referrals and received the services. Since success on these measures involves actions of the consumers themselves as well as external providers, the measures set a high bar as assessments of the impact of PBHCI.
The absence of consistent effects of PBHCI on the quality of care for PH conditions should be interpreted in this light. Together, the measures comprise a diverse but not comprehensive assessment. They include measures of common preventive procedures, such as flu shots and cancer screening, as well as monitoring of pre-existing diabetes, one of the most important chronic physical illnesses in the population of adults with SMI. In addition, no impact of PBHCI was found on the likelihood of having at least one outpatient PH visit over the course of a year. The lack of impacts on these quality-of-care measures may indicate that the PBHCI programs did not substantially expand delivery of evidence-based preventive services and care for chronic conditions to their SMI populations. Further study would be needed to assess the challenges encountered in attempting to achieve that goal.
Limitations

In interpreting the results presented in Chapter Three, it is important to be aware of several limitations of this study. First, we did not have access to person-level identifying information that would allow us to directly identify PBHCI enrollees. Rather, the study is based on the presumption that all consumers who received care from an NPI associated with the PBHCI program were exposed to the program. For this reason, the PBHCI caseloads in this study include consumers who were not actually enrolled in the PBHCI program. While this is a limitation with respect to classification of PBHCI consumers, the method does have some advantages, as noted in the introduction. Specifically, there is likely to be within-clinic selection into PBHCI, so that enrollees in a clinic's PBHCI program are systematically different from the clinic's other, nonenrolled consumers. This within-clinic selection process would be difficult to reproduce when selecting comparison clinic caseloads, so estimating the impact of PBHCI with enrollees identified at the individual level would be subject to even greater bias. However, it is also clear that our method will produce an underestimate of the effect of PBHCI on the group that was enrolled--that is, the treatment effect among the treated.
Note that not all individuals enrolled in PBHCI would be reflected in Medicaid claims data, since programs could also target consumers who were uninsured, commercially insured, or Medicare beneficiaries. Based on data from the first three cohorts of grantees in State 1, State 2, and State 3, the proportion of clinics' PBHCI caseloads consisting of Medicaid-only beneficiaries ranged from 25 percent to 63 percent (mean=52 percent; median=60 percent), and the proportion consisting of individuals dually eligible for Medicaid and Medicare ranged from 7 percent to 55 percent (mean=29 percent; median=27 percent). Since health care covered by Medicare would not be recorded in the Medicaid claims, our data underrepresent health care quality and costs for persons dually eligible for Medicare and Medicaid.
Second, there are limitations to the claims data with respect to coverage of the services actually received by enrollees. Data from quarterly reports and site visits conducted for the previous evaluation indicate that PBHCI clinics typically did not immediately begin submitting bills to Medicaid for the primary care services they provided to enrollees. In these cases, the costs of care would have been borne by the clinic, perhaps covered by grant funds, rather than by Medicaid, and the services would not be recorded in the claims data sets. The types of services that are most likely to be missing are those provided directly by the clinic. Among the outcomes examined here, the service most likely to be omitted would be diabetes monitoring. Outpatient and total costs would also be underestimated. Outcomes related to emergency room use and hospital stays would not have been affected.
Medicaid claims data can also incompletely reflect provided services when the beneficiary is enrolled in a Medicaid managed care plan. For care provided to managed care enrollees, providers submit encounter records to managed care organizations, rather than claims to Medicaid. The managed care organizations are expected to report the encounter information to the states, and the states are expected to report that information to CMS. However, the data that are ultimately reported to CMS and included in the ResDAC data sets, such as those used in this study, are often incomplete (Nysenbaum, Bouchery, and Malsberger, 2014). Incomplete reporting of encounter data could bias our estimates of quality, utilization, and costs downward, but would only impact the findings regarding the impact of PBHCI if they affected PBHCI and comparison clinics differently. Since the comparison clinics were selected from within the same states as the PBHCI clinics, a differential impact of encounter reporting is unlikely. According to a recent report on encounter data in the ResDAC data sets, reporting met completeness standards for State 2 but not for State 3 during the period covered in this study (Byrd and Dodd, 2015). Encounter data for State 1 are included in the data set received directly from the state.
Third, the DD model is susceptible to bias from differential selection into the caseloads over time. This is a particular concern for the main analysis because we did not restrict the samples to individuals treated in both the pre-PBHCI and the PBHCI implementation periods. Rather, we compared the full caseloads of clinics during the two time periods, allowing individuals to move in and out of the sample over time; turnover in clinic caseloads is high for both PBHCI and comparison clinics. This method could introduce bias into the DD estimate of the impact of PBHCI. For instance, a PBHCI clinic that developed a reputation for effectively treating complex cases may have been more likely than the comparison clinics to attract consumers with complex PH problems. If that occurred, then the estimates reported earlier in this chapter would underestimate positive program effects. The supplemental analysis that we conducted using the sample restricted to individuals who received services in both the pre-PBHCI and the PBHCI intervention periods addresses this limitation. However, the supplemental analysis of continuously treated individuals introduces other limitations, as discussed in Chapter Three.
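The DD design compares the change over time in PBHCI caseload outcomes against the change in comparison caseload outcomes, with the group-by-period interaction serving as the program-effect estimate. The following is a minimal illustrative sketch using synthetic data; the variable names (pbhci, post, outcome) are hypothetical, and the study's actual specification (covariates, cost models, clustering) is more elaborate.

```python
# Minimal difference-in-differences sketch on synthetic data.
# Column names (pbhci, post, outcome) are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "pbhci": rng.integers(0, 2, n),  # 1 = consumer in a PBHCI clinic caseload
    "post": rng.integers(0, 2, n),   # 1 = PBHCI implementation period
})
# Simulate an outcome with a known program effect of -0.5 in the post period,
# plus a fixed group difference and a secular time trend.
df["outcome"] = (
    2.0
    + 0.3 * df["pbhci"]
    + 0.2 * df["post"]
    - 0.5 * df["pbhci"] * df["post"]
    + rng.normal(0, 1, n)
)

# The coefficient on the pbhci:post interaction is the DD estimate:
# it nets out both the baseline group difference and the common time trend.
model = smf.ols("outcome ~ pbhci * post", data=df).fit()
print(model.params["pbhci:post"])  # recovers approximately -0.5
```

The key identifying assumption is that, absent the program, PBHCI and comparison caseloads would have followed parallel trends; the differential-selection concern discussed above is precisely a violation of that assumption.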
Fourth, the study was limited to three state-level studies, while PBHCI was implemented in 32 states over this period; a total of eight grantees were included out of 86 nationwide. Given that Medicaid is a state-level program with wide variation in implementation, and given the heterogeneity of findings across the three states studied, it is important to avoid generalizing the results of this study to the program as a whole. While this study demonstrates that the program has been successful in reaching some of its systemic goals in some states, we are not in a position to draw conclusions about the overall effect of the program on a national basis.
Conclusions

This report extends the previous RAND evaluation of PBHCI by examining, for the first time, the impact of the program on care provided beyond the four walls of the PBHCI clinics themselves. This level of analysis is important because integration of behavioral and PH care services for adults with SMI is ultimately a system-wide challenge that requires shifting patterns of care across multiple locations and multiple provider types. Although the results are mixed across the different outcomes examined and the different cohorts of PBHCI grantees studied, they suggest that the program can be successful in two of its primary aims: reducing frequent use of emergency department and inpatient services for PH conditions and reducing total costs of care.
This pattern of findings provides evidence regarding the mechanisms through which PBHCI affected utilization and costs of care. First, PBHCI may affect care by directly substituting one type of care for another: PH care services provided at the PBHCI clinic may substitute for care that otherwise would have been sought at an emergency department. If consumers were less likely to visit emergency departments for PH care, they would also be less likely to be admitted for an inpatient stay. Second, PBHCI may affect care by improving routine care for chronic physical illnesses, for which consumers often seek care in emergency departments when controllable conditions have not been appropriately managed. Improvements in routine care may improve consumers' health status and thereby reduce their need for emergency department visits and inpatient stays. Given that the results of this study showed no PBHCI-related improvement in quality of care for PH conditions, we conclude that the positive impacts on utilization and cost occurred through the first mechanism and not the second. Future studies of PBHCI, however, could be designed to directly examine these two alternative pathways.
The finding that PBHCI clinics can substitute for some of the high-cost care otherwise received in emergency departments may have implications for the design of future cohorts of PBHCI grants. Specifically, this role of the program might be further strengthened by providing additional services that are sometimes delivered in an emergency department (e.g., stitches for small wounds), extending hours of access, and directly targeting PH care services to consumers with a history of frequent emergency department visits. At the same time, the limited impact of PBHCI on quality of primary care services could be addressed through investments in more rigorous care coordination, such as comprehensive electronic disease registries (i.e., to make sure that consumers attend medical appointments, as needed), and through supports to providers for the management of multiple comorbidities (e.g., additional supports for consumers with comorbid substance use disorders) that could impede the delivery of quality PH care.
Claims data from three states suggest that PBHCI can contribute to progress toward two of the three aims of health reform: improving the experience of care by creating access to appropriate ambulatory care providers, and reducing the costs of care. The results also suggest hypotheses regarding the mechanisms through which PBHCI affects care that can be examined in future studies, as well as areas of focus for strengthening the design of future PBHCI cohorts.
References

Abadie, Alberto, "Semiparametric Difference-in-Differences Estimators," Review of Economic Studies, Vol. 72, No. 1, January 2005, pp. 1-19.
Alakeson, Vidhya, Richard G. Frank, and Ruth E. Katz, "Specialty Care Medical Homes for People with Severe, Persistent Mental Disorders," Health Affairs, Vol. 29, No. 5, May 2010, pp. 867-873.
Bertrand, Marianne, Esther Duflo, and Sendhil Mullainathan, "How Much Should We Trust Difference-in-Differences Estimates?" mimeograph, Massachusetts Institute of Technology, July 2001.
Butler, Mary, Robert L. Kane, Donna McAlpine, Roger G. Kathol, Steven S. Fu, Hildi Hagedorn, and Timothy J. Wilt, Integration of Mental Health/Substance Abuse and Primary Care, Rockville, MD: Agency for Healthcare Research and Quality, AHRQ Publication No. 09-E003, October 2008. As of August 15, 2016: http://www.ahrq.gov/research/findings/evidence-based-reports/mhsapc-evi….
Byrd, Vivian L.H., and Allison Hedley Dodd, Assessing the Usability of Encounter Data for Enrollees in Comprehensive Managed Care 2010-2011, Princeton, NJ: Mathematica Policy Research, brief 22, August 2015. As of August 15, 2016: https://ideas.repec.org/p/mpr/mprres/db420e68311c4b299a84db2056a302c5.html.
Card, David, and Alan B. Krueger, "Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania: Reply," American Economic Review, Vol. 90, No. 5, December 2000, pp. 1397-1420.
Chang, Chin-Kuo, Richard D. Hayes, Gayan Perera, Mathew T.M. Broadbent, Andrea C. Fernandes, William E. Lee, Mathew Hotopf, and Robert Stewart, "Life Expectancy at Birth for People with Serious Mental Illness and Other Major Disorders from a Secondary Mental Health Care Case Register in London," PLoS ONE, Vol. 6, No. 5, 2011. As of August 15, 2016: http://dx.doi.org/10.1371/journal.pone.0019590.
Colton, Craig W., and Ronald W. Manderscheid, "Congruencies in Increased Mortality Rates, Years of Potential Life Lost, and Causes of Death Among Public Mental Health Clients in Eight States," Preventing Chronic Disease, Vol. 3, No. 2, April 2006.
Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders, Board on Health Care Services, Institute of Medicine, Improving the Quality of Health Care for Mental and Substance-Use Conditions, Washington, DC: National Academies of Sciences, 2006.
Conley, Timothy G., and Christopher R. Taber, Inference with "Difference in Differences" with a Small Number of Policy Changes, Cambridge, MA: National Bureau of Economic Research, NBER Technical Working Paper No. 312, July 2005.
Department of Mental Health and MO Healthnet, Progress Report: Missouri CMHC Healthcare Homes, November 1, 2013. As of August 15, 2016: http://dmh.mo.gov/docs/mentalillness/prnov13.pdf.
Dimick, Justin B., and Andrew M. Ryan, "Methods for Evaluating Changes in Health Care Policy: The Difference-in-Differences Approach," Journal of the American Medical Association, Vol. 312, No. 22, December 2014, pp. 2401-2402.
Druss, Benjamin G., "Improving Medical Care for Persons with Serious Mental Illness: Challenges and Solutions," Journal of Clinical Psychiatry, Vol. 68, Supplement 4, 2007, pp. 40-44.
Druss, Benjamin G., and Barbara J. Mauer, "Health Care Reform and Care at the Behavioral Health--Primary Care Interface," Psychiatric Services, Vol. 61, No. 11, November 2010, pp. 1087-1092.
Druss, Benjamin G., Robert M. Rohrbaugh, Carolyn M. Levinson, and Robert A. Rosenheck, "Integrated Medical Care for Patients with Serious Psychiatric Illness: A Randomized Trial," Archives of General Psychiatry, Vol. 58, No. 9, September 2001, pp. 861-868.
Horvitz-Lennon, Marcela, Amy M. Kilbourne, and Harold Alan Pincus, "From Silos to Bridges: Meeting the General Health Care Needs of Adults with Severe Mental Illnesses," Health Affairs, Vol. 25, No. 3, May 2006, pp. 659-669.
Howell, Benjamin L., Patrick H. Conway, and Rahul Rajkumar, "Guiding Principles for Center for Medicare and Medicaid Innovation Model Evaluations," Journal of the American Medical Association, Vol. 313, No. 23, June 16, 2015, pp. 2317-2318.
Imbens, Guido, and Jeffrey Wooldridge, "Estimation of Average Treatment Effects Under Unconfoundedness," lecture notes 1 for the course "What's New in Econometrics?" held at the National Bureau of Economic Research, summer 2007.
Jones, Danson R., Cathaleene Macias, Paul J. Barreira, William H. Fisher, William A. Hargreaves, and Courtenay M. Harding, "Prevalence, Severity, and Co-occurrence of Chronic Physical Health Problems of Persons with Serious Mental Illness," Psychiatric Services, Vol. 55, No. 11, November 2004, pp. 1250-1257.
Kasper, Judy, Molly O'Malley Watts, and Barbara Lyons, Chronic Disease and Co-Morbidity Among Dual Eligibles: Implications for Patterns of Medicaid and Medicare Service Use and Spending, Washington, DC: Kaiser Commission on Medicaid and the Uninsured, publication #8081, July 2010. As of August 15, 2016: http://kaiserfamilyfoundation.files.wordpress.com/2013/01/8081.pdf.
Katon, Wayne J., "Clinical and Health Services Relationships Between Major Depression, Depressive Symptoms, and General Medical Illness," Biological Psychiatry, Vol. 54, No. 3, 2003, pp. 216-226.
Katon, Wayne J., Michael Schoenbaum, Ming-Yu Fan, Christopher M. Callahan, John Williams, Enid Hunkeler, Linda Harpole, Xiao-Hua Andrew Zhou, Christopher Langston, and Jürgen Unützer, "Cost-Effectiveness of Improving Primary Care Treatment of Late-Life Depression," Archives of General Psychiatry, Vol. 62, No. 12, December 2005, pp. 1313-1320.
Katon, Wayne J., and Jürgen Unützer, "Health Reform and the Affordable Care Act: The Importance of Mental Health Treatment to Achieving the Triple Aim," Journal of Psychosomatic Research, Vol. 74, No. 6, 2013, pp. 533-537.
Kronick, Richard G., Melanie Bella, and Todd P. Gilmer, The Faces of Medicaid III: Refining the Portrait of People with Multiple Chronic Conditions, Hamilton, NJ: Center for Health Care Strategies, October 2009. As of August 15, 2016: http://www.chcs.org/usr_doc/Faces_of_Medicaid_III.pdf.
Krupski, Antoinette, Imara I. West, Deborah M. Scharf, James Hopfenbeck, Graydon Andrus, Jutta M. Joesch, and Mark Snowden, "Integrating Primary Care into Community Mental Health Centers: Impact on Utilization and Costs of Health Care," Psychiatric Services, 2016.
Lawrence, David, and Stephen Kisely, "Review: Inequalities in Healthcare Provision for People with Severe Mental Illness," Journal of Psychopharmacology, Vol. 24, No. 4, November 2010, pp. 61-68.
Manning, Willard G., and John Mullahy, "Estimating Log Models: To Transform or Not to Transform?" Journal of Health Economics, Vol. 20, No. 4, 2001, pp. 461-494.
McGinty, Emma Elizabeth, Yiyi Zhang, Eliseo Guallar, Daniel E. Ford, Donald Steinwachs, Lisa B. Dixon, Nancy L. Keating, and Gail L. Daumit, "Cancer Incidence in a Sample of Maryland Residents with Serious Mental Illness," Psychiatric Services, Vol. 63, No. 7, July 2012, pp. 714-717.
Melek, Stephen P., Douglas T. Norris, and Jordan Paulus, Economic Impact of Integrated Medical-Behavioral Healthcare Implications for Psychiatry, Denver, CO: Milliman American Psychiatric Association Report, April 2014.
Newcomer, John W., "Metabolic Syndrome and Mental Illness," American Journal of Managed Care, Vol. 13, No. 7, 2007, pp. S170-177.
Nolte, Ellen, and Emma Pitchforth, What Is the Evidence on the Economic Impacts of Integrated Care? Copenhagen, Denmark: WHO Region Office for Europe, policy summary No. 11, 2014. As of August 15, 2016: http://www.euro.who.int/__data/assets/pdf_file/0019/251434/What-is-the-evidence-on-the-economic-impacts-of-integrated-care.pdf.
Nysenbaum, Jessica Beth, Ellen Bouchery, and Rosalie Malsberger, "Availability and Usability of Behavioral Health Organization Encounter Data in MAX 2009," Medicare and Medicaid Research Review, Vol. 4, No. 2, 2014.
Parks, Joe, Dale Svendsen, Patricia Singer, Mary Ellen Foti, eds., and Barbara Mauer, Morbidity and Mortality in People with Serious Mental Illness, Alexandria, VA: National Association of State Mental Health Program Directors, October 2006. As of August 15, 2016: http://www.nasmhpd.org/sites/default/files/Mortality%20and%20Morbidity%20Final%20Report%208.18.08.pdf.
Pincus, Harold Alan, Ann E.K. Page, Benjamin Druss, Paul S. Appelbaum, Gary Gottlieb, and Mary Jane England, "Can Psychiatry Cross the Quality Chasm? Improving the Quality of Health Care for Mental and Substance Use Conditions," American Journal of Psychiatry, Vol. 164, No. 5, 2007, pp. 712-719.
President's New Freedom Commission on Mental Health, Achieving the Promise: Transforming Mental Health Care in America, Rockville, MD, July 22, 2003. As of August 15, 2016: http://govinfo.library.unt.edu/mentalhealthcommission/reports/FinalReport/downloads/FinalReport.pdf.
PSYCKES Medicaid, "Quality Concerns," web page, undated. As of August 30, 2016: http://www.omh.ny.gov/omhweb/psyckes_medicaid/quality_concerns/.
Russell, Louise B., "Preventing Chronic Disease: An Important Investment, but Don't Count on Cost Savings," Health Affairs, Vol. 28, No. 1, January/February 2009, pp. 42-45.
Saha, Sukanta, David Chant, and John McGrath, "Is Mortality Risk in Schizophrenia Rising? A Systematic Review," Schizophrenia Bulletin, Vol. 33, No. 2, March 2007, pp. 245-246.
"SAMHSA Grant Announcements," SAMHSA website, December 22, 2014. As of September 6, 2016: http://www.samhsa.gov/grants/grant-announcements/sm-15-005.
Scharf, Deborah M., Nicole K. Eberhart, Nicole Schmidt Hackbarth, Marcela Horvitz-Lennon, Robin Beckman, Bing Han, Susan L. Lovejoy, Harold Alan Pincus, and M. Audrey Burnam, Evaluation of the SAMHSA Primary and Behavioral Health Care Integration (PBHCI) Grant Program: Final Report (Task 13), Santa Monica, CA: RAND Corporation, RR-546-DHHS, 2014. As of August 15, 2016: http://www.rand.org/pubs/research_reports/RR546.html.
Smith, Vernon K., Kathleen Gifford, Eileen Ellis, Robin Rudowitz, and Laura Snyder, Medicaid Today, Preparing for Tomorrow: A Look at State Medicaid Program Spending, Enrollment, and Policy Trends, Washington, DC: Kaiser Commission on Medicaid and the Uninsured, publication #8380, October 2012. As of August 15, 2016: https://kaiserfamilyfoundation.files.wordpress.com/2013/01/8380.pdf.
Substance Abuse and Mental Health Services Administration, Behavioral Health, United States, 2012, Rockville, MD, 2012.
Unützer, Jürgen, Wayne J. Katon, Ming-Yu Fan, Michael C. Schoenbaum, Elizabeth H. B. Lin, Richard D. Della Penna, and Diane Powers, "Long-Term Cost Effects of Collaborative Care for Late-Life Depression," American Journal of Managed Care, Vol. 14, No. 2, February 2008, pp. 95-100.
Walker, Elizabeth Reisinger, Robin E. McGee, and Benjamin G. Druss, "Mortality in Mental Disorders and Global Disease Burden Implications: A Systematic Review and Meta-Analysis," JAMA Psychiatry, Vol. 72, No. 4, April 2015.
TABLE A.1. Utilization and Quality Measures Considered for Study, Drawn from New York State's PSYCKES or NQF

| Domain | Measure | Population | PSYCKES | NQF |
| --- | --- | --- | --- | --- |
| Primary care | Outpatient medical visit | All SMI | Y | N |
| | Follow-up visit after discharge | All SMI | N | Y |
| Diabetes care | Diabetes screening (HbA1c) | All SMI | Y | Y |
| | Diabetes monitoring (HbA1c) | SMI with diabetes | Y | Y |
| | Eye exam for SMI w/diabetes | SMI with diabetes | Y | Y |
| Asthma care | Use of appropriate medications for people with asthma | SMI with asthma | N | Y |
| Cancer screening | Mammogram | SMI women aged 50-74 years | N | Y |
| | Screening for cervical cancer | SMI women aged 24-64 years | N | Y |
| | Screening for colorectal cancer | SMI aged 51-75 years | N | Y |
| Prevention | Flu vaccinations | All SMI | N | N |
| Hypertension | Initiation of high blood pressure treatment | All SMI | N | N |
| Behavioral health | Adherence to antipsychotics | Individuals with schizophrenia | N | Y |
| High utilization | 4+ PH ED/IP | All SMI | Y | N |
| | 4+ Any ED/IP | All SMI | Y | N |
| | 3+ BH ED visits | All SMI | Y | N |
| Avoidable hospitalization | Hospitalization for dehydration | All SMI | Y | N |
| | Hospitalization for asthma | SMI with asthma | Y | N |
| | Hospitalization for diabetes | SMI with diabetes | Y | N |
|TABLE B.1. Utilization and Quality Measures in the PBHCI and Comparison Clinics During the Pre-PBHCI and Post-PBHCI Period, State 1, Cohort 1|
|BH ED Visit||491||79||16.1||1,470||243||16.5||546||101||18.5||1,842||351||19.1||568||111||19.5||1,965||363||18.5|
|PH ED Visit||491||191||38.9||1,470||588||40.0||546||213||39.0||1,842||795||43.2||568||216||38.0||1,965||892||45.4|
|PH IP Stay||491||8||1.6||1,470||20||1.4||546||8||1.5||1,842||24||1.3||568||8||1.4||1,965||41||2.1|
|BH IP Stay||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||1,842||1||0.1||N/A||N/A||N/A||N/A||N/A||N/A|
|3+ BH ED Visits||491||18||3.7||1,470||47||3.2||546||26||4.8||1,842||111||6.0||568||40||7.0||1,965||124||6.3|
|4+ Any ED/IP||491||118||24.0||1,470||368||25.0||546||134||24.5||1,842||501||27.2||568||147||25.9||1,965||591||30.1|
|Breast cancer screening||170||32||18.8||538||125||23.2||189||29||15.3||678||161||23.7||204||10||4.9||688||23||3.3|
|Cervical cancer screening||204||16||7.8||711||69||9.7||243||25||10.3||911||96||10.5||268||22||8.2||964||79||8.2|
|Colorectal cancer screening||221||12||5.4||564||45||8.0||227||20||8.8||666||61||9.2||228||4||1.8||662||47||7.1|
|BH ED Visit||659||126||19.1||2,180||430||19.7||606||143||23.6||1,980||432||21.8|
|PH ED Visit||659||247||37.5||2,180||1,014||46.5||606||278||45.9||1,980||934||47.2|
|PH IP Stay||659||8||1.2||2,180||31||1.4||606||12||2.0||1,980||27||1.4|
|BH IP Stay||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A|
|3+ BH ED Visits||659||26||3.9||2,180||121||5.6||606||55||9.1||1,980||154||7.8|
|4+ Any ED/IP||659||156||23.7||2,180||645||29.6||606||205||33.8||1,980||658||33.2|
|Breast cancer screening||243||12||4.9||758||25||3.3||211||12||5.7||682||45||6.6|
|Cervical cancer screening||321||21||6.5||1,079||76||7.0||310||16||5.2||982||50||5.1|
|Colorectal cancer screening||256||11||4.3||695||56||8.1||226||14||6.2||579||41||7.1|
|TABLE B.2. DD Estimates of the Impact of PBHCI on Utilization and Quality Measures, State 1, Cohort 1|
|BH ED visit||12,307||1.0||0.9||1.2||0.5||0.6||1.0||0.8||1.1||1.1||0.9||1.4||1.0||0.8||1.1||1.1||0.9||1.4|
|PH ED visit||12,307||0.8||0.7||0.9||-4.1||0.0||0.9||0.7||1.0||0.8||0.6||0.9||0.7||0.6||0.8||1.0||0.9||1.0|
|BH IP stay||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A|
|PH IP stay||12,307||0.8||0.4||1.4||–0.8||0.4||0.9||0.6||1.5||0.5||0.3||0.9||0.7||0.3||1.4||1.1||0.6||2.3|
|3+ BH ED visits||12,307||0.8||0.6||1.0||–1.8||0.1||0.6||0.5||0.9||1.0||0.8||1.2||0.6||0.4||0.8||1.0||0.8||1.3|
|4+ any ED/IP||12,307||0.9||0.8||1.0||–2.4||0.0||0.9||0.8||1.0||0.8||0.7||1.0||0.8||0.7||0.8||1.0||0.9||1.1|
|Breast cancer screening||4,361||1.1||0.8||1.5||0.4||0.7||0.8||0.6||1.1||2.0||1.7||2.3||2.0||0.8||5.4||1.1||0.7||1.9|
|Cervical cancer screening||5,993||1.2||0.8||1.7||0.9||0.4||1.2||0.9||1.6||1.2||0.8||2.0||1.2||0.9||1.5||1.2||0.8||1.9|
|Colorectal cancer screening||4,324||1.0||0.9||1.2||0.1||0.9||1.5||1.2||1.9||0.4||0.3||0.5||0.8||0.5||1.3||1.4||1.1||1.7|
|NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05.|
|TABLE B.3. DD Estimates of the Impact of PBHCI on Medicaid Costs, State 1, Cohort 1|
|Cost Category||Aggregate Effect||2010||2011||2012||2013|
|Estimate||Standard Error||Z||P(Z)||95% CI||Estimate||95% CI||Estimate||95% CI||Estimate||95% CI||Estimate||95% CI|
|NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05.|
|TABLE B.4. DD Results for Cost Measures, State 1, Cohort 1|
|Cost Category||Any Use||Cost Per User|
|NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05. N/A=We did not have BH inpatient data for State 1.|
|TABLE B.5. Utilization and Quality Measures in the PBHCI and Comparison Clinics During the Pre-PBHCI and Post-PBHCI Period, State 2, Cohort 1|
|BH ED Visit||228||34||14.9||2,293||280||12.2||228||34||14.9||2,293||280||12.2||266||32||12.0||3,025||372||12.3|
|PH ED Visit||228||130||57.0||2,293||1,427||62.2||228||130||57.0||2,293||1,427||62.2||266||157||59.0||3,025||1,734||57.3|
|PH IP Stay||228||26||11.4||2,293||277||12.1||228||26||11.4||2,293||277||12.1||266||34||12.8||3,025||336||11.1|
|BH IP Stay||228||25||11.0||2,293||235||10.2||228||25||11.0||2,293||235||10.2||266||25||9.4||3,025||252||8.3|
|3+ BH ED Visits||228||2||0.9||2,293||58||2.5||228||2||0.9||2,293||58||2.5||266||8||3.0||3,025||81||2.7|
|4+ Any ED/IP||228||88||38.6||2,293||917||40.0||228||88||38.6||2,293||917||40.0||266||84||31.6||3,025||1,072||35.4|
|Breast cancer screening||61||16||26.2||619||207||33.4||61||16||26.2||619||207||33.4||84||24||28.6||886||244||27.5|
|Cervical cancer screening||113||23||20.4||1,281||392||30.6||113||23||20.4||1,281||392||30.6||149||41||27.5||1,714||451||26.3|
|Colorectal cancer screening||58||5||8.6||533||45||8.4||58||5||8.6||533||45||8.4||71||3||4.2||778||82||10.5|
|Measure||2011: PBHCI (N, n, %)||2011: Comparison (N, n, %)||2012: PBHCI (N, n, %)||2012: Comparison (N, n, %)||Post-PBHCI Sample N|
|BH ED Visit||954||117||12.3||2,775||307||11.1||1,227||146||11.9||2,806||263||9.4||2,447|
|PH ED Visit||954||542||56.8||2,775||1,636||59.0||1,227||713||58.1||2,806||1,699||60.5||2,447|
|PH IP Stay||954||82||8.6||2,775||279||10.1||1,227||134||10.9||2,806||303||10.8||2,447|
|BH IP Stay||954||76||8.0||2,775||207||7.5||1,227||66||5.4||2,806||180||6.4||2,447|
|3+ BH ED Visits||954||22||2.3||2,775||74||2.7||1,227||31||2.5||2,806||67||2.4||2,447|
|4+ Any ED/IP||954||283||29.7||2,775||1,001||36.1||1,227||395||32.2||2,806||1,038||37.0||2,447|
|Breast cancer screening||315||46||14.6||793||133||16.8||385||59||15.3||827||140||16.9||784|
|Cervical cancer screening||522||92||17.6||1,573||376||23.9||681||116||17.0||1,645||403||24.5||1,352|
|Colorectal cancer screening||281||21||7.5||693||60||8.7||348||32||9.2||714||60||8.4||700|
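As a hedged illustration of the difference-in-differences (DD) logic behind the estimates in the tables that follow, the raw, unadjusted DD for one outcome can be computed directly from the Table B.5 counts. The report's DD estimates are regression-adjusted odds ratios, so these raw figures will not match them exactly; the assumption that the first column block in each period holds the PBHCI clinics and the second the comparison clinics is ours.

```python
# Illustrative only: unadjusted DD for the "BH ED Visit" outcome, using the
# State 2, Cohort 1 counts reported in Table B.5. Assumes (our labeling) that
# the first (N, n, %) block in each period is the PBHCI clinics and the
# second is the comparison clinics.

def rate(n, N):
    """Share of consumers with at least one visit."""
    return n / N

def odds(n, N):
    """Odds of the outcome: p / (1 - p) = n / (N - n)."""
    return n / (N - n)

# (n with a BH ED visit, N of consumers), from Table B.5
pre_pbhci = (34, 228)     # 14.9% of 228 PBHCI consumers, pre-PBHCI
pre_comp = (280, 2293)    # 12.2% of 2,293 comparison consumers, pre-PBHCI
post_pbhci = (32, 266)    # 12.0% of 266 PBHCI consumers, post-PBHCI
post_comp = (372, 3025)   # 12.3% of 3,025 comparison consumers, post-PBHCI

# DD on the percentage-point scale: change at PBHCI clinics minus change
# at comparison clinics.
dd_pp = (rate(*post_pbhci) - rate(*pre_pbhci)) - (rate(*post_comp) - rate(*pre_comp))

# DD on the odds-ratio scale, the metric the DD results tables report
# (after regression adjustment).
dd_or = (odds(*post_pbhci) / odds(*pre_pbhci)) / (odds(*post_comp) / odds(*pre_comp))

print(f"Unadjusted DD: {dd_pp:+.3f} ({dd_pp * 100:+.1f} percentage points)")
print(f"Unadjusted DD odds ratio: {dd_or:.2f}")
```

With these counts, the unadjusted DD comes out to roughly -3.0 percentage points (odds ratio about 0.77), in the same direction as, though not equal to, the adjusted aggregate odds ratio of 0.9 reported for this outcome in Table B.6.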
|TABLE B.6. State 2, Cohort 1 DD Results for Utilization and Quality Measures|
|Measure||N||Aggregate Effect||2010||2011||2012|
|OR||95% CI||Z||P(Z)||OR||95% CI||OR||95% CI||OR||95% CI|
|BH ED visit||13,574||0.9||0.7||1.1||–1.4||0.176||0.8||0.7||0.9||0.9||0.7||1.1||1.0||0.8||1.3|
|PH ED visit||13,574||1.2||1.1||1.3||3.9||0.000||1.3||1.1||1.5||1.2||1.1||1.3||1.2||1.1||1.3|
|BH IP stay||13,574||0.8||0.7||1.0||–1.8||0.073||1.1||0.9||1.2||1.0||0.8||1.1||0.7||0.5||1.1|
|PH IP stay||13,574||1.0||0.9||1.2||0.5||0.634||1.2||1.1||1.4||0.9||0.8||1.0||1.1||0.9||1.5|
|3+ BH ED visits||13,574||2.8||2.0||3.8||6.2||0.000||3.5||2.4||5.0||2.4||1.6||3.5||2.9||2.3||3.6|
|4+ any ED/IP||13,574||0.9||0.7||1.0||–1.7||0.098||0.9||0.7||1.1||0.8||0.7||1.0||0.9||0.8||1.0|
|Breast cancer screening||3,970||1.0||0.9||1.2||0.6||0.524||1.5||1.4||1.6||1.2||0.9||1.5||1.2||1.0||1.4|
|Cervical cancer screening||7,678||1.3||1.2||1.4||10.5||0.000||1.8||1.6||2.1||1.4||1.3||1.4||1.2||1.1||1.4|
|Colorectal cancer screening||3,476||0.8||0.7||1.0||–1.8||0.066||0.4||0.2||0.5||0.8||0.6||1.1||1.1||0.8||1.4|
|NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05.|
|TABLE B.7. State 2, Cohort 1 DD Results for Cost Measures|
|Cost Category||Aggregate Effect||2010||2011||2012|
|Estimate||Standard Error||Z||P(Z)||95% CI||Estimate||95% CI||Estimate||95% CI||Estimate||95% CI|
|NOTE: Results in bold font indicate statistically significant differences between PBHCI and comparison at p=0.05.|
|TABLE B.8. DD Results for Cost Measures, State 2, Cohort 1|
|Cost Category||Any Use||Cost Per User|
|TABLE B.9. Utilization and Quality Measures in the PBHCI and Comparison Clinics During the Pre-PBHCI and Post-PBHCI Period, State 2, Cohort 3|
|BH ED Visit||890||127||14.3||3,279||302||9.2||890||127||14.27||3,279||302||9.21|
|PH ED Visit||890||592||66.5||3,279||1,871||57.1||890||592||66.52||3,279||1,871||57.06|
|PH IP Stay||890||107||12.0||3,279||365||11.1||890||107||12.02||3,279||365||11.13|
|BH IP Stay||890||53||6.0||3,279||195||5.9||890||53||5.96||3,279||195||5.95|
|3+ BH ED Visits||890||35||3.9||3,279||58||1.8||890||35||3.93||3,279||58||1.77|
|4+ Any ED/IP||890||401||45.1||3,279||1,080||32.9||890||401||45.06||3,279||1,080||32.94|
|Breast cancer screening||226||39||17.3||911||145||15.9||226||39||17.26||911||145||15.92|
|Cervical cancer screening||491||118||24.0||1,917||466||24.3||491||118||24.03||1,917||466||24.31|
|Colorectal cancer screening||200||22||11.0||766||70||9.1||200||22||11.00||766||70||9.14|
|BH ED Visit||1,146||174||15.18||3,859||344||8.91||1,176||174||14.80||3,946||381||9.66|
|PH ED Visit||1,146||770||67.19||3,859||2,259||58.54||1,176||800||68.03||3,946||2,335||59.17|
|PH IP Stay||1,146||117||10.21||3,859||364||9.43||1,176||133||11.31||3,946||423||10.72|
|BH IP Stay||1,146||61||5.32||3,859||195||5.05||1,176||85||7.23||3,946||151||3.83|
|3+ BH ED Visits||1,146||42||3.66||3,859||53||1.37||1,176||43||3.66||3,946||71||1.80|
|4+ Any ED/IP||1,146||523||45.64||3,859||1,313||34.02||1,176||542||46.09||3,946||1,370||34.72|
|Breast cancer screening||284||50||17.61||1,096||170||15.51||491||118||24.03||1,917||466||24.31|
|Cervical cancer screening||643||163||25.35||2,273||540||23.76||660||171||25.91||2,301||549||23.86|
|Colorectal cancer screening||259||20||7.72||881||68||7.72||275||20||7.27||946||78||8.25|
|TABLE B.10. Utilization and Quality Measures, State 2, Cohort 3|
|Measure||N||Aggregate Effect||2011||2012|
|OR||95% CI||Z||P(Z)||OR||95% CI||OR||95% CI|
|BH ED visit||14,296||1.0||1.0||1.1||1.0||0.310||1.1||1.0||1.2||1.0||0.9||1.1|
|PH ED visit||14,296||1.0||0.9||1.0||–3.1||0.002||1.0||0.9||1.0||1.0||0.9||1.0|
|BH IP stay||14,296||1.5||1.2||1.8||3.7||0.000||1.1||0.9||1.2||2.0||1.4||2.7|
|PH IP stay||14,296||1.0||0.9||1.1||–0.4||0.693||1.0||0.9||1.2||1.0||0.8||1.1|
|3+ BH ED visits||14,296||1.0||0.8||1.4||0.2||0.810||1.2||0.9||1.6||0.9||0.6||1.3|
|4+ any ED/IP||14,296||1.0||0.9||1.0||–1.2||0.238||1.0||0.9||1.0||0.9||0.9||1.0|
|Breast cancer screening||3,939||1.0||0.9||1.1||0.3||0.781||1.0||1.0||1.2||1.0||0.9||1.1|
|Cervical cancer screening||8,285||1.1||1.0||1.3||1.5||0.141||1.1||1.0||1.3||1.1||1.0||1.3|
|Colorectal cancer screening||3,327||0.8||0.6||1.0||–1.7||0.087||0.8||0.6||1.1||0.7||0.5||1.0|
|TABLE B.11. DD Results for Cost Measures, State 2, Cohort 3|
|Cost Category||Aggregate Effect||2011||2012|
|Estimate||Standard Error||Z||P(Z)||95% CI||Estimate||95% CI||Estimate||95% CI|
|TABLE B.12. DD Results for Cost Measures, State 2, Cohort 3|
|Cost Category||Any Use||Cost Per User|
PBHCI grants have been awarded to one group (or cohort) of clinics per year since 2009, except for 2010, when grants were awarded to both cohorts 2 and 3.
See previous footnote.
RAND collected data on the estimated distribution of health insurance status among PBHCI enrollees (from the first three cohorts of grantees) using a program-level web survey fielded in 2013. No estimates were provided by one of the State 1 grantees. Survey methods are described in detail in Scharf et al., 2014; however, insurance status results were not previously published.