Linking State-Level Health Expenditure and Utilization Data to Identify Sources of Variation in Health Service Prices, Utilization, and Expenditures
Len M. Nichols, Ph.D.*
Final Report for DHHS Contract No. HHS-100-95-0021
Department of Health and Human Services
Office of the Assistant Secretary for Planning and Evaluation, Health Division
Ariel Winter and George Greenberg, Project Officers
February 15, 1999
* Principal Research Associate, The Urban Institute. An earlier version of this paper was presented at the American Public Health Association meetings November 18, 1998, in Washington, D.C. William White provided many helpful comments at that time. I am grateful to many ASPE staff, including George Greenberg, Ariel Winter, John Drabek, Holly Harvey, and Ted Anagnoson (now returned to California State University, Los Angeles) for their continued advice and support. I am also grateful to many other researchers for helpful conversations and comments on earlier drafts, including Jim Cultice and Evelyn Moses of HRSA; Katie Levit, Hans Dutt, and Ted Sekscenski of HCFA (now CMS); Barry Friedman of AHCPR; and Charles Roehrig and Beth Jones of Vector Research. Finally, I am indebted to both Joseph Llobrera and Adam Badawi (now returned to UC-Berkeley) for their consistently outstanding research assistance. I remain solely responsible for any remaining errors and ambiguities. The opinions and judgments expressed herein are mine alone and do not reflect those of the Urban Institute, its trustees or sponsors, nor those of the Department of Health and Human Services.
Executive Summary
I. Introduction and Purpose of Study
II. Overview of an Estimation Strategy and Potential Application of Model
III. Health Services Candidates for Expenditure Decomposition
IV. Variables and Estimation Techniques
V. Estimation Techniques
VI. Discussion of Empirical Results
VII. Summary and Conclusions
References
Tables
The debate over comprehensive health reform in 1993-94 revealed many strengths and weaknesses in the U.S. health care system. Simultaneously, the policy development process underlying that national debate exposed the limits of our collective analytic capacity to explain the implications of expected or proposed changes in our health system. Among the more salient gaps between what policy makers wanted (and continue to want) and what the research community could (can) deliver were data and information representative of and specific to individual states. The vast majority of federal data collection efforts were and remain organized to yield nationally representative data and estimates. Budget constraints typically preclude adding enough sample to support state-specific estimates. Thus, the research community's ability to address individual states' questions about how this or that reform will affect their citizens was and remains severely limited.
This "Linking" project was conceived in response to a question that came from ASPE staff. The question doubtlessly emerged from living through that 1993-94 policy development process while anticipating ever tighter budget constraints: "Can you think of any creative ways in which existing data could be used to enhance our analytic capacity to deliver state-specific answers to health policy questions?"
As it turned out, experience had informed me of two separate DHHS data collection efforts that, upon reflection, could contribute to a state-centered analytic framework for health policy analysis. HCFA (now CMS) had recently regularized its production of state-specific health expenditure estimates, by service, and had produced a time series back to 1980. And in 1994, HRSA, in response to anticipated duties under comprehensive reform, had commissioned a model that was designed to produce population-based, state-specific utilization estimates for a similar group of health services. While thinking about these data, I merely noted that expenditures divided by use (or quantity) equals price.
The first year of the Linking project assessed the feasibility of combining the HCFA expenditure data with the HRSA utilization data to yield state- and service-specific prices that are analytically meaningful. By "analytically meaningful" I mean prices that are aggregated enough to be meaningful to policy makers and disaggregated enough to still be empirically related to known determinants, like similarly aggregated measures of provider supply and market demand components.
The most difficult part of deriving meaningful prices from these different data sources is adjusting existing estimates to ensure congruent definitions of particular services. For example, published HCFA estimates for community hospital expenditures include federal and non-federal hospital revenue as well as both inpatient and outpatient revenue. Ideal estimates of analytically meaningful community hospital prices -- per day, per admission, or per outpatient visit -- would separate federal from non-federal hospitals as well as outpatient from inpatient revenues, since these are all likely to be engendered in markets that are quite different from each other. The first year's report describes strategies for overcoming these and other difficulties inherent in the ways the data are collected. It also indicates for which health services analytic price derivation is possible and for which it is problematic.
In the second year of the Linking project I used published and unpublished HCFA data along with the HRSA model data and AHA data in the public domain to derive state-specific prices for three services: community hospital inpatient beddays, community hospital inpatient admissions, and community hospital outpatient visits (including emergency room visits). I then estimated multivariate models, for the period 1991-93, that explained the variance in these services' prices, utilization, and expenditures. Explanatory variables included state-level measures of provider supply, population demographics, economic structure, insurance coverage, and state policies. This report describes these derivations and analyses as well as some implications of the resultant findings.
A number of methodological issues are raised by any effort this ambitious and complex. The most important center around the proper treatment of time trends and unmeasured state-specific idiosyncrasies. A variety of empirical techniques that address these issues are employed, most of which serve as robustness tests of the implications of the basic model. In general, some salient findings hold across the various models, but I would suggest that additional years of data should be analyzed before concrete policy implications should be drawn from the framework that I have developed.
Perhaps the major finding is that the empirical models at the state level performed reasonably well. Over 80% of the variance in the inpatient dependent variables (prices, utilization, and expenditures) is explained, as well as over 50% of the variance in these variables for outpatient services. Most of the statistically significant individual effects on the dependent variables have intuitive interpretations. By estimating price, utilization, and expenditure equations at the same time, specific avenues by which certain variables affect expenditures, through price or through use, are revealed.
For example, states with more community hospital beds per capita appear to have lower prices per inpatient day and more days per capita, but they do NOT have higher inpatient expenditures. By contrast, higher physician supply does increase inpatient expenditures through higher prices, but not through higher utilization, either in admissions or days per capita. Physician supply also increases outpatient expenditures, though this path lies through a larger number of outpatient visits per capita and not through higher prices. The physician supply results withstood extensive analysis of potential bias from endogeneity, a sidebar analysis made relevant by the importance of the long-standing physician-induced demand debate.
State policies were generally not very important in the empirical models, though technical reasons prevented many of them from being tested definitively. More years of data should help resolve most of these problems. Somewhat surprisingly, HMO market share had virtually no statistically significant effect on any variable of interest. At least two interpretations of this finding are possible. Perhaps HMO market share is not highly correlated with managed care market share more broadly defined (but harder to measure). Or perhaps the name of the organizational type is less important than how the local delivery system is organized in particular areas, i.e., many organizations may behave like HMOs, or many providers may have already come to match HMO-like delivery patterns in today's more competitive health service market environment.
The broad conclusion from this work is that it does appear possible to derive meaningful health service prices at the state level. Thus this work shows the potential for researchers to build models that might be used to explore the fundamental relations among demographic-based utilization data, health delivery system staffing levels, and health service expenditures at the state level. This kind of model could be useful for both federal and state policy analysts, both to analyze market trends and to simulate the effects of proposed policies. For example, the model envisioned could estimate the effect of physician supply on hospital inpatient prices or outpatient utilization, and on overall hospital expenditures, in, say, Alabama. This could be relevant to state or federal officials contemplating either expected market trends or graduate medical education support (or other policies) that might be expected to affect local physician supply in the long run. This kind of inference is the point, after all, of state-based health policy analysis.
For a brief and shining moment during the first two years of the Clinton Administration, the nation debated comprehensive health reform. The President's proposal shared with many others a focus on the individual states' specific and different health care systems and needs. Inspired by that policy development process of 1993-94, DHHS has come to focus more on producing state-specific estimates in a number of ongoing data collection and analytical areas. This project draws on two of these DHHS efforts and explores the possibility of constructing a new model that might serve the policy development process well in the future.
The Health Resources and Services Administration (HRSA) has long been the nation's primary source of health professional supply data and forecasted requirements. In 1994, and in response to its anticipated role under a revised health care system, HRSA commissioned and helped develop a General Services Demand Model (GSDM) that could generate demographically based, state-specific estimates of health utilization, by service type. HRSA and GSDM were primarily interested in these utilization estimates as inputs into state-specific provider requirements forecasts, an application of a methodology that HRSA had long applied at the national level. State-specific forecasts of provider requirements are clearly relevant to many state policies, such as medical school admissions and nursing education funding.
The Health Care Financing Administration (HCFA, now the Centers for Medicare and Medicaid Services, or CMS) is the official source of our national health accounts (NHA), which are measures of expenditures by health service and payor type for the U.S. as a whole. These data are immensely useful for policy purposes, as they track how much we spend on what in pursuit of health. Most germane to this project, HCFA has now regularized (every other year) the compilation of state health expenditure accounts (SHEA), state-specific estimates of health spending for the same health services as are tracked at the national level, with only a little less disaggregation than the national data permit. Estimates for the years 1980-1993 have recently been completed, and the 1991 estimates have also been adjusted for patient border crossing. These data have proven to be useful for policy makers at both the federal and the state levels who seek to understand trends in expenditure and service delivery patterns.
This project was motivated by a simple question: are the underlying data of the GSDM and of the SHEA congruent enough to permit the derivation of health service prices that are analytically meaningful for policy analysis? By "analytically meaningful" I mean prices that are aggregated enough to be meaningful to policy makers and disaggregated enough to still be empirically related to known determinants, like similarly aggregated measures of provider supply and market demand components.
If such prices could be constructed from linking these data, then researchers could estimate and build models that might be used to explore the fundamental relations among demographic-based utilization data, health delivery system staffing levels, and health service expenditures at the state level. This kind of model could be useful for both federal and state policy analysts. For example, the model envisioned could estimate the effect of a 20% increase in HMO market share or of a 10% decrease in the physicians per 1000 population ratio on utilization, prices, and expenditures of inpatient community hospital services in any particular state or for the nation as a whole.
How might such questions be answered? From SHEA, we know health expenditures on any given service in a particular state. Call this estimate X, where state- and service-specific expenditure subscripts are suppressed for simplicity. From conventional economics, we know that expenditures equal price (P) times quantity (Q), i.e., X = PQ. GSDM was designed to produce Qs for many of the same services that SHEA estimates in every state. This project is about assessing the analytic value of using GSDM's Qs to decompose SHEA's Xs into service-specific P and Q components, since P = X/Q.
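The arithmetic at the heart of the project can be sketched in a few lines. All figures below are hypothetical placeholders, not actual SHEA or GSDM estimates:

```python
# A minimal sketch of the X = P * Q identity used throughout this report.
# All figures are hypothetical placeholders, not actual SHEA or GSDM estimates.

def derive_price(expenditure, quantity):
    """Implicit average price for one state and service: P = X / Q."""
    if quantity <= 0:
        raise ValueError("quantity must be positive")
    return expenditure / quantity

# Hypothetical state: $2.4 billion in inpatient spending over 3.2 million beddays.
x = 2_400_000_000  # SHEA-style expenditure estimate X, in dollars
q = 3_200_000      # GSDM/AHA-style utilization estimate Q, in beddays

print(f"Implicit price per bedday: ${derive_price(x, q):,.2f}")  # $750.00
```

The substance of the project, of course, lies not in the division itself but in making the X and Q definitions congruent enough that the resulting P is meaningful.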
If the decomposition of a specific service's pair of Q and P is indeed feasible, then one could estimate price, utilization, or expenditures at the state level as a function of the usual postulated determinants of health service demand and supply (e.g., per capita income, physicians per capita, etc.). The estimated coefficients would allow inferences to be made about the marginal effects of changes in the determinants. These inferences could be useful for simulation exercises designed to explore the implications of market trend scenarios or the effects of specific policies. The unit of analysis in this estimation would be the state in a particular year. Eventually, many years of data could be pooled together to permit researchers to control for unmeasurable state-specific effects and perhaps for time-specific effects as well.
To illustrate the estimation and the application of the resulting model, we suppress state and time subscripts in what follows. Let Qd be the quantity demanded of a specific health service, Qs the quantity supplied of the same health service, and note that services cannot be stored, so that Qd = Qs in every year. Then one could write the demand equation as:
(1) Qd = f(P,DEMOG,INSCOV,ECON,MANDATES),
where DEMOG = a vector of sociodemographic characteristics, such as population racial composition, age composition, etc., INSCOV = a vector of health insurance coverage characteristics, such as percent covered by Medicaid, percent covered by employer-sponsored insurance (ESI), percent uninsured, etc., ECON is a set of variables that describe the state's economic structure, e.g., median income, the percent of workers employed by service and retail firms, the percent of workers in small firms, etc., and MANDATES is a set of insurance coverage mandates, e.g., mental health, inpatient treatment for alcohol and substance abuse, etc.
The supply equation might be:
(2) Qs = g(P,MEDSUPPLY,POLICY)
where MEDSUPPLY = medical service supply variables, such as the number of physicians per 1000 population, the number of community hospital beds per 1000, the percent of the state?s population that is enrolled in an HMO, etc., and POLICY = a vector of state policies that might be expected to affect health service supply, including managed care regulations like the presence of any willing provider or freedom of choice laws.
Because Qd = Qs, Q and P are endogenous to each other at the state level. Since the system of Q and P equations that one could derive from equations (1) and (2) above is over-identified, Q and P may each be estimated separately as reduced form equations with all the determinants of both as explanatory variables, as in equations (3)-(5) below.
(3) Q = q(DEMOG,MEDSUPPLY,ECON,INSCOV,MANDATES,POLICY)
(4) P = p(DEMOG,MEDSUPPLY,ECON,INSCOV,MANDATES,POLICY)
(5) X = PQ = x(DEMOG,MEDSUPPLY,ECON,INSCOV,MANDATES,POLICY)
This reduced form estimation is adequate for our purposes since we are less interested in the structural coefficients of the demand and supply relationships [eqs. (1) and (2)] and more interested in the relationship between our explanatory variables and equilibrium values of the dependent variables. This analysis is designed to help policy formulation, and policy makers are often much more interested in the "bottom line" or net effects than in structural elasticities and similar details. One could, however, use two-stage least squares regression (2SLS) to estimate the overidentified structural forms (1)-(2), and I do this later to address an endogeneity issue that arises from a particular interpretation of some of the reduced form results.
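As a concrete sketch of what estimating a reduced form equation like (3)-(5) involves, the snippet below fits an ordinary least squares regression of a price measure on two state-level regressors via the normal equations. The data rows, the regressor choices (beds and physicians per 1,000), and the function name are all illustrative assumptions, not the report's actual estimation code:

```python
# Stdlib-only sketch of reduced form estimation: OLS via the normal equations.
# The observations and regressors below are hypothetical.

def ols(X, y):
    """Return OLS coefficients b solving (X'X) b = X'y; X includes a constant."""
    k = len(X[0])
    # Augmented normal-equations system [X'X | X'y].
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for i in range(k):  # Gaussian elimination with partial pivoting
        piv = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
    b = [0.0] * k
    for i in reversed(range(k)):  # back-substitution
        b[i] = (A[i][k] - sum(A[i][j] * b[j] for j in range(i + 1, k))) / A[i][i]
    return b

# Hypothetical state-year rows: [constant, beds per 1,000, physicians per 1,000]
X = [[1, 3.1, 2.0], [1, 4.5, 1.8], [1, 2.8, 2.6], [1, 3.9, 2.2], [1, 5.0, 1.5]]
y = [710.0, 640.0, 820.0, 690.0, 600.0]  # price per inpatient day, in dollars
coefs = ols(X, y)
print(coefs)  # [constant, beds effect, physicians effect]
```

In practice one would of course use a statistical package with standard errors and diagnostics; the point here is only the shape of the exercise: one reduced form equation per dependent variable, each with the full set of demand- and supply-side regressors.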
The coefficients in these equations represent estimates of marginal effects of each explanatory variable on the equilibrium level of the dependent variable of interest. In other words, we could first estimate the equilibrium effect of, e.g., beds per capita on price per admission, admissions per capita, and total inpatient hospital expenditures. Then we could simulate what would happen to each of these dependent variables if beds per capita decreased by 10%. By using the values of all other variables that are appropriate to a given state, say Alabama, we could use the full equation's coefficients to generate baseline predicted values of the dependent variables and then simulate the percentage change in each value that would be engendered by an x% change in beds per capita, or in any other variable estimated to have had a significant effect on the dependent variable or variables of interest.
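A simulation of the kind just described might look like the following sketch, where the coefficients, regressors, and state values are purely hypothetical:

```python
# Hypothetical simulation: given estimated reduced form coefficients, predict
# a baseline dependent variable for one state, then its value after a 10%
# drop in beds per capita. All numbers below are illustrative assumptions.

def predict(coefs, values):
    """Linear prediction: constant plus the sum of coefficient * regressor."""
    return coefs[0] + sum(c * v for c, v in zip(coefs[1:], values))

coefs = [400.0, -30.0, 120.0]  # assumed: [constant, beds/1,000, physicians/1,000]
beds, docs = 4.0, 2.0          # assumed baseline values for one state

p0 = predict(coefs, [beds, docs])        # baseline price per inpatient day
p1 = predict(coefs, [beds * 0.9, docs])  # after a 10% cut in beds per capita
print(f"baseline ${p0:.0f}; after cut ${p1:.0f} ({100 * (p1 - p0) / p0:+.1f}%)")
```

With these assumed coefficients the simulated bed reduction raises the predicted price, illustrating how the framework translates an estimated marginal effect into a state-specific percentage change.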
This structure and econometric approach would also enable researchers to determine whether specific explanatory variables are relatively more important to price or utilization determination. If anything, the relevance of these simplified, broad, service-specific implicit prices may increase over time as capitation and per diem replace fee-for-service as the norm for provider payments. At a minimum, such estimation would indicate if managed care penetration or other supply reorganization mechanisms are actually affecting aggregate utilization and average service prices at the state level. Given the intense re-shuffling "on the ground" at the present moment in our health delivery system, i.e., the ever-stewing "alphabet soup" of HMO, PPO, PHO, PSO, POS, EPO, overlapping physician networks, changing control and incentive mechanisms for providers of all types, etc., this kind of aggregate model may serve as a useful monitoring device that can indicate whether any or all of the purported changes are having net effects on some basic financial outcomes of primary interest.
* * *
The next section describes how we derived estimates for the health service Ps and Qs we focus on in this paper. As the reader might guess, we selected the most feasible health services given the current structures of GSDM and SHEA. Others could be done with more effort and time. Following a description of our health service Ps and Qs, the remaining sections describe the estimation techniques employed, report the preliminary results, and conclude with some inferences and suggestions for further research in this area.
As might be expected from two distinct data efforts undertaken by different parts of DHHS pursuant to different important and longstanding missions, few utilization categories in GSDM map perfectly to SHEA aggregate expenditure figures. Nevertheless, after a detailed review of both GSDM and SHEA documentation, it appears to be analytically feasible to decompose an aggregate expenditure figure into price and quantity for five health service categories which in total account for approximately 70% of personal health care expenditures in the U.S. In order to accomplish this, some estimates must be re-organized. The remainder of this section will describe the three categories of analysis we focus on in this paper, explain the relevant parts of GSDM and SHEA, address discrepancies between these sources, describe other sources where appropriate, and point out the caveats regarding the use of prices computed within this framework.
Hospital inpatient. Hospital inpatient expenditures account for just over 27% of personal health care expenditures. GSDM forecasts demand for three inpatient service categories: short-term general and community beddays for those under age 65, short-term general and community beddays for those 65 and older, and long-term/psychiatric/other hospital beddays. Inpatient visits to Federal hospitals are spread across these three categories as appropriate. HCFA's NHA has three expenditure categories related to inpatient hospital stays: non-Federal short-term community, non-Federal and non-community, and Federal hospital expenditures. But for the SHEA, HCFA publishes only one total hospital spending estimate, which includes all types of hospitals as well as both inpatient and outpatient expenditures.
Still, two sets of hospital inpatient Ps and Qs in our framework appear feasible: (1) short-term general and community beddays for all age groups; and (2) short term general and community admissions. The X for these services is the same and can be generated with SHEA and federal program data, and the Qs can be estimated from the AHA data alone, so that the Ps can be derived without using GSDM estimates at all. Because GSDM and NHA differ in their classifications of federal and obstetric hospitals, combining GSDM and NHA/SHEA data for inpatient hospital Ps and Qs, while technically feasible, is more complicated than the method we describe below.
Figure 1 depicts the analytic tasks at hand. The SHEA expenditure estimate must be divided into its constituent parts that are most relevant for analysis. The SHEA estimate of total hospital spending in a given state includes the revenue of federal, non-federal community, and non-federal non-community hospitals. Federal program data permit the subtraction of federal expenditures by state, and AHA federal hospital expense data by state could be used as benchmarks for this adjustment. AHA Annual Survey data provide estimates, by state, of expenses for each type of hospital. One way to proceed would be to divide state-specific non-federal hospital expenditures into community and non-community in proportion to each type of hospital's shares of total expenses in each state. This would be inaccurate, however, to the extent that the markups between expenses and revenues differed by type of hospital.
An alternative estimate could be derived from the AHA Panel Survey data, which reports percentage margins between expense and revenue by nine Census regions for community hospitals. These data could be used to inflate community hospital expenses into estimates of community hospital revenue. This estimate, when subtracted from the SHEA hospital spending total less federal expenditures, would yield a residual estimate of non-federal non-community hospital expenditures. Finally, the AHA provides HCFA with state-specific margins for community hospitals, which would be somewhat more accurate than the region-specific margins released with the public use version of the AHA Panel Survey. It is possible that a similar arrangement could be worked out with the AHA for a similar research effort. It should be noted that at the national level, community expenditures are about 93% of total non-federal hospital expenditures. Thus, the variance in these three measures of state-specific community and non-community hospital spending is not likely to be great.
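The expense-to-revenue adjustment and residual decomposition just described amount to simple arithmetic. The sketch below uses made-up dollar figures and an assumed 5% margin:

```python
# Illustrative arithmetic for the decomposition described above: inflate AHA
# community hospital expenses to revenue with a margin, then take non-federal
# non-community spending as the residual. All dollar figures are hypothetical.

state_total = 5_000.0         # SHEA total hospital spending ($ millions)
federal = 300.0               # federal hospital spending, from program data
community_expenses = 4_000.0  # AHA Annual Survey community hospital expenses
margin = 0.05                 # assumed revenue-over-expense margin (Panel Survey)

community_revenue = community_expenses * (1 + margin)      # about 4,200
non_community = state_total - federal - community_revenue  # residual, about 500
print(community_revenue, non_community)
```

The quality of the residual non-community estimate clearly depends on the accuracy of both the federal subtraction and the margin, which is why state-specific margins would be preferable to regional ones.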
For this exploratory study, HCFA graciously agreed to provide state-specific breakdowns of spending estimates in federal, non-federal community, and non-federal non-community hospitals for the years 1991-93. These data allowed us to focus on the bottom half of Figure 1, i.e., decomposing short term community hospital expenditures into inpatient and outpatient spending. Upon further research we concluded that non-community hospitals were too heterogeneous to lump together in an analytic whole. Psychiatric hospitals comprise the largest percentage of noncommunity hospital revenues, but the proportion of non-community hospitals that are psychiatric hospitals varies quite a lot across the various states. For this reason, in this study we concentrated on community hospital beddays and admissions alone.
Another complication is the fact that hospital-based nursing home revenue is embedded in the inpatient expenditure estimates. Thankfully, however, HCFA uses Medicare cost reports to estimate that less than 3% of community hospital revenue, on average, was derived from nursing home services in the time period of our study. Finally, the AHA also uses the Annual Survey to generate state-specific estimates of the shares of community hospital revenue that are derived from inpatient and from outpatient sources. This estimate is very close, at the national level, to HCFA's independent estimate of this same split. The inpatient share, multiplied by our final estimate of non-federal community hospital spending, will yield the X for inpatient community hospital services. Then, simply dividing by the AHA's own estimate of inpatient community beddays, or admissions, by state, i.e., the Qs, will yield the corresponding Ps.
Caveat. Deriving Ps and Qs for inpatient services in these ways can yield an average price per inpatient day and average price per admission for community hospitals in each state. Some analysts might be tempted to compare simple prices across states and infer that some states have more or less competitive hospital markets on the basis of these price differentials alone. The difficulty with making such an intuitive but naive comparison is that it implicitly assumes identical case mixes and input prices for all hospitals. While case mixes may very well be roughly equivalent at the state level, input prices are well known to vary considerably across states. In addition, there are sufficiently large differences in demographics and health insurance coverage across the country that it would be risky to make simple claims on the basis of these strong assumptions. Thus, the most appropriate use of the derived P and Q for hospital inpatient services is not in simple comparisons of levels but rather for the multivariate analysis of the determinants of P and Q across the several states. This analysis will permit the application of the model to the particulars of a given state to estimate the likely effects of specific policy or market phenomena.
Hospital outpatient. For quite some time, roughly since Medicare switched to the PPS inpatient hospital payment system in 1983, community hospital outpatient spending has been growing faster than inpatient, and in 1995 it accounted for 27% of total hospital spending, 30% of community hospital spending, and about 12% of total personal health care expenditures. GSDM has two relevant outpatient utilization categories, short-term hospital outpatient visits and short term hospital emergency room visits (ERs). Both include visits to Federal short-term hospitals. The AHA publishes direct estimates of outpatient visits, as well as of ERs, from the annual survey data.
I derived the NHA/SHEA-based estimate of X for outpatient services, including ERs, as a by-product of the process described above for inpatient spending decomposition. Once total community hospital expenditures within a state are established, the state-specific outpatient share estimate from the AHA is applied to derive total outpatient spending, including ERs. Thus I investigate one outpatient P and Q category: short-term general outpatient visits (including ER).
Caveat. One concern here is that the GSDM and direct AHA utilization estimates are very far apart. For example, the GSDM outpatient visit estimate for the nation in 1994 was 121 million vs. over 292 million hospital outpatient visits reported by the AHA for that same year. At the state level the degree of discrepancy varies slightly, but in most cases the GSDM prediction is roughly half the number of outpatient visits reported in the AHA Annual Survey. There are two very important reasons for this difference. The GSDM benchmarked 1992 outpatient visits to the total visits to these sites estimated by the National Center for Health Statistics (NCHS) from the National Health Interview Survey (NHIS) data. The GSDM modelers concluded that the raw AHA outpatient counts included some double counting from referrals of outpatients to radiology, pathology, and rehabilitative departments within the hospital, all of which are counted as visits in addition to the basic outpatient clinic visit. Thus it is highly likely that an outpatient visit that required an x-ray would be counted as two outpatient visits by AHA annual survey respondents.
For this reason, we are inclined to use the GSDM estimate for outpatient (+ ER) visits, with two provisos. First, GSDM includes outpatient visits to federal hospitals; these visits are difficult to remove from the overall estimate. Second, GSDM forecasts of years after the benchmark year of 1992 essentially use the 1992 ratio of NHIS visits to AHA visits to deflate reported AHA visits. This implicitly assumes that outpatient treatment patterns are the same from year to year, which may be less true as larger and larger shares of all health services are delivered in this setting. Thus, over time as the data base for this research grows, we would compare AHA to NHIS reported estimates every other year to detect any trends in the degree to which the AHA survey methods overstate the true number of person visits to outpatient departments and ERs, and adjust our Q estimates accordingly.
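The deflation of reported AHA visits by the benchmark-year NHIS-to-AHA ratio can be illustrated as follows; the counts are loosely patterned on the national figures quoted above, but the later-year value is hypothetical:

```python
# Sketch of the GSDM-style deflation described above: scale reported AHA
# outpatient visits by the benchmark-year ratio of NHIS to AHA visits, to
# strip double-counted ancillary visits. The later-year count is hypothetical.

nhis_benchmark = 121.0  # benchmark-year NHIS-based visit estimate (millions)
aha_benchmark = 292.0   # benchmark-year AHA-reported visits (millions)
deflator = nhis_benchmark / aha_benchmark  # assumed stable across years

aha_later = 300.0                  # hypothetical later-year AHA-reported visits
q_adjusted = aha_later * deflator  # adjusted utilization estimate Q
print(round(q_adjusted, 1))        # about 124.3 million person visits
```

As noted above, the weak link is the assumption that the benchmark-year ratio holds in later years, which is exactly why periodic re-benchmarking against NHIS would be prudent as the database grows.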
To summarize, we will estimate Ps from known Xs and Qs for community hospital inpatient beddays, community hospital admissions, and community hospital outpatient visits (which include emergency room visits). Tables 3-10 report the values of the price, utilization, and expenditure variables for each state and year. These are the state-level dependent variables whose variances I try to explain with the empirical models that follow. The next section describes the explanatory variables and estimation techniques that were used to derive our reduced form models of these Ps, Qs, and Xs.
IV. Variables and Estimation Techniques

DEPENDENT VARIABLES
All hospital inpatient utilization and capacity data come from the American Hospital Association and have been adjusted to account for discrepancies between financial years and calendar years. Hospital expenditure data were supplied by HCFA. Hospital outpatient visits were taken or derived from HRSA's GSDM.
IPDPRICE Price per community hospital inpatient day. Derived by dividing the total community hospital inpatient spending for a state by the number of community hospital inpatient days for that state.
IPDCAP Community hospital inpatient days per one thousand people. Derived by dividing the total community hospital inpatient days for a state by the state's Census-defined population, and then multiplying this ratio by one thousand.
IPEXPCAP Community hospital inpatient expenditures per one thousand people. Derived by dividing statewide community hospital inpatient expenditures by the state's Census-defined population, and then multiplying this ratio by one thousand.
ADMPRICE Price per community hospital inpatient admission. Derived by dividing total community hospital inpatient expenditures by the number of community hospital inpatient admissions.
ADMCAP Community hospital inpatient admissions per one thousand people. Derived by dividing statewide community hospital inpatient admissions by the state's Census-defined population, and then multiplying this ratio by one thousand.
OPPRICE Price per community hospital outpatient visit. Derived by dividing the total statewide community hospital outpatient expenditures by the total number of community hospital outpatient visits.
OPVISCAP Community hospital outpatient visits per one thousand people. Derived by dividing the total community hospital outpatient visits for a state by the state's Census-defined population, and then multiplying this ratio by one thousand.
OPEXPCAP Community hospital outpatient expenditures per one thousand people. Derived by dividing the total community hospital outpatient expenditures for a state by the state's Census-defined population, and then multiplying this ratio by one thousand.
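The derivations above all share a simple pattern: a price is an expenditure total divided by a utilization total, and a per-capita rate is a total divided by population, times one thousand. A minimal sketch of the constructions, using invented state totals (the numbers are illustrative, not actual data):

```python
# Sketch of the dependent-variable constructions described above.
# All numbers are hypothetical, not actual state data.

def price(total_spending, total_quantity):
    """P = X / Q: e.g., IPDPRICE = inpatient spending / inpatient days."""
    return total_spending / total_quantity

def per_thousand(count, population):
    """Rate per 1,000 people: e.g., IPDCAP, IPEXPCAP."""
    return count / population * 1000.0

# Hypothetical state: $2.0 billion inpatient spending,
# 2.5 million inpatient days, population 4.0 million.
spending, days, pop = 2.0e9, 2.5e6, 4.0e6

ipdprice = price(spending, days)        # dollars per inpatient day
ipdcap = per_thousand(days, pop)        # days per 1,000 people
ipexpcap = per_thousand(spending, pop)  # expenditures per 1,000 people

# Internal consistency: expenditures per 1,000 = price * days per 1,000.
assert abs(ipexpcap - ipdprice * ipdcap) < 1e-6
print(ipdprice, ipdcap, ipexpcap)
```

The same three functions generate the admission-based and outpatient measures by swapping in the corresponding totals.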
EXPLANATORY VARIABLES
Demographics:
PERBLACK The percent of a state's population which is Black, as defined by the Current Population Survey (CPS).
PERASIAN The CPS-defined percent of a state's population which is Asian.
PERHISP The CPS-defined percent of a state's population which is Hispanic (Black Hispanics categorized as Black).
PERYOUNG The CPS-defined percent of a state's population which is under 20.
PEROLD The CPS-defined percent of a state's population which is over 64.
Medical Services Capacity:
BEDCAP The number of community hospital beds per one thousand people. Derived by dividing the total number of community hospital beds by a state's Census-defined population, and then multiplying this ratio by one thousand.
DOCCAP The number of practicing physicians per one thousand people. Derived by dividing the number of active physicians in a state, as reported by the American Medical Association (AMA), by the state's Census-defined population, and then multiplying this ratio by one thousand.
RESBED The number of medical residents per one thousand community hospital inpatient beds. Derived by dividing the number of medical residents in a state, as reported by the AMA, by the number of community hospital inpatient beds, and then multiplying this ratio by one thousand.
SPECGEN The ratio of specialist doctors to generalist doctors. Derived by dividing the number of specialists by the number of primary care physicians, with both numbers coming from the AMA. We defined primary care doctors as family practitioners, general practitioners, pediatricians, OB/GYNs, and general internal medicine practitioners. The remainder of physicians were categorized as specialists.
Economic Variables:
MEDINC Household median income, as defined by the CPS, divided by one thousand.
PSERVRET Percent of the state's employed population working in the service and retail sectors, as defined by the CPS.
PSFRM25 Percent of the state's employed population working in firms with fewer than 25 employees, as defined by the CPS.
Insurance Coverage:
PERESI Percent of the under-65 population covered by employer-sponsored health insurance, as defined by the CPS.
UNINSUR Percent of the state's population without health insurance, as defined by the CPS.
PERMCAID Percent of the state's under-65 population covered by Medicaid, as defined by the CPS.
HMO Percent of the state's insured population, as defined by the CPS, enrolled in HMOs. The HMO enrollment data come from Interstudy.
Border Crossing Adjustment:
NETFLOW The net percent of hospital spending coming from out-of-state residents. Derived by subtracting the 1991 percent of hospital spending purchased out of state by state residents from the 1991 percent of hospital spending purchased by out-of-state residents. These adjustments come from Basu (1996).
Policy Variables:
AWPFOCPR A variable indicating the presence of an any-willing-provider or freedom-of-choice provision, based on Marsteller et al. (1997).
Mandated Benefits:
BC_ALC A dummy variable indicating a mandate to cover inpatient alcoholism treatment, as reported by the Blue Cross/Blue Shield Association.
BC_CHIRO A dummy variable indicating a mandate to cover chiropractic services, as reported by the Blue Cross/Blue Shield Association.
BC_DRUG A dummy variable indicating a mandate to cover inpatient drug treatment, as reported by the Blue Cross/Blue Shield Association.
BC_MENT A dummy variable indicating a mandate to cover mental health services, as reported by the Blue Cross/Blue Shield Association.
YEAR92 A dummy variable for all observations containing data from 1992.
YEAR93 A dummy variable for all observations containing data from 1993.
ALLSTAT A group variable representing the string of 50 state dummy variables (this convention is used by Stata's xtreg procedure to invoke fixed-effects models).
Summary statistics for all variables save ALLSTAT are presented in Table 11.
The empirical models that we estimate are all designed to test for the relation between explanatory variables and our dependent variables in a multivariate context, so that the possibility of spurious correlation is minimized. The basic equation that we estimate can be described as:
(6) Yit = a + b1*DEMOGit + b2*MEDSUPPLYit + b3*ECONit + b4*INSCOVit + b5*MANDATESit + b6*POLICYit + b7*NETFLOW + b8*YEAR92 + b9*YEAR93 + eit,
where Yit may represent Pit, Qit, or Xit, and the bi and DEMOGit, MEDSUPPLYit, etc., are all vectors, as described above. Note the variables all have subscripts for state (i) and time (t). The data set is a panel of 50 states (we exclude the District of Columbia because it is an outlier in so many ways) for 3 years, 1991-1993. Because of complexities that are described below, we estimate this basic model in three different ways.
The simplest model, Model 1, estimates (6) for each dependent variable with a standard ordinary least squares (OLS) procedure. The constant term (a) in this simple model is assumed to be the same for all states and time periods. This establishes a set of naïve benchmark results, reported in detail in Tables 12a-h (one for each dependent variable; I summarize the substantive results of all the models' estimation in Table 17 and discuss some implications later). Unfortunately, Model 1 may yield biased standard errors, since variable values in a given state in year 1 are highly correlated with the same variable's values in year 2, etc. For example, physician supply per 1000 population in California in 1992 is very similar to physician supply per 1000 population in California in 1991 and in 1993. Thus, while Model 1 has 100 more observations than a model estimated on only one year of data, the additional 100 observations may not be truly independent of the first year's 50. This non-independence tends to bias the standard errors downward, perhaps inflating the estimated t-values and inferred significance levels. Thus, Model 1 identifies the maximum set of possibly statistically significant explanatory variable associations with each dependent variable.
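The pooled specification of Model 1, and the within-state correlation that concerns us, can be illustrated with a small synthetic example (all data are invented; the regression is an ordinary least-squares fit via NumPy, not the report's actual estimation):

```python
import numpy as np

# Pooled-OLS sketch (Model 1): stack all state-years and regress Y on X
# with a common intercept. Synthetic data only.
rng = np.random.default_rng(0)
n_states, n_years = 50, 3

x = rng.normal(size=(n_states, 1))        # a state characteristic
x_panel = np.repeat(x, n_years, axis=0)   # identical across years, as with
                                          # slow-moving variables like DOCCAP
y = 2.0 + 1.5 * x_panel[:, 0] + rng.normal(scale=0.1, size=n_states * n_years)

X = np.column_stack([np.ones(n_states * n_years), x_panel[:, 0]])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# beta[0] ~ intercept, beta[1] ~ slope. Because each state's x repeats
# across years, the 150 observations are not independent, so conventional
# OLS standard errors computed from them are biased downward.
print(beta)
```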
There are two straightforward alternatives for addressing this potential bias in the standard errors of coefficient estimates in Model 1. One is to estimate year-specific equations. While this obviously reduces already scarce degrees of freedom, it also guarantees that the estimated significant influences on Y result from cross-sectional variation and not from implicit within-state over-time correlation. This technique involves estimating (7) for each year of data:
(7) Yi = a + b1*DEMOGi + b2*MEDSUPPLYi + b3*ECONi + b4*INSCOVi + b5*MANDATESi + b6*POLICYi + b7*NETFLOW + ei,
which is the same as equation (6) but without time subscripts or the year dummies. The results of these year-specific estimations are labeled Models 2-4 and reported in detail in Tables 13a-h through 15a-h. Once again, note the constant term does not vary across states. (Model 2 is for 1991 and the results are reported in Tables 13a-h, Model 3 is for 1992 and the results are reported in Tables 14a-h, etc.)
The second technique for correcting the observation-independence problem is a fixed-effects model, called Model 5, that can be expressed as equation (8):
(8) Yit = ai + b1*DEMOGit + b2*MEDSUPPLYit + b3*ECONit + b4*INSCOVit + b5*MANDATESit + b6*POLICYit + eit,
which is similar to equation (6) except for the presence of state-specific intercepts (ai) and the absence of the border-crossing adjustment (NETFLOW) and time dummies. The state-specific intercepts are what give the model the name "fixed-effects." The term is meant to convey the idea that any otherwise unmeasured state-specific effects on Y are reflected in this set of intercepts. Since each state's intercept is presumed to be known (estimable) and constant through time, it is described as "fixed." Fixed-effects models have become quite common in work wherein unmeasured heterogeneity is thought to be important empirically (e.g., work effort of individuals in labor market studies, regulatory environments of states in policy studies).
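One standard way to compute the fixed-effects estimator of equation (8) is the "within" transformation: demean each variable by its state average, which eliminates the state-specific intercepts. A minimal sketch on invented data (not the report's estimation code, which used Stata's xtreg):

```python
import numpy as np

# Fixed-effects (within) estimator sketch on synthetic panel data.
rng = np.random.default_rng(1)
n_states, n_years = 50, 3

state_effect = rng.normal(scale=5.0, size=n_states)        # the a_i
# x correlates with the state effect -- exactly the situation in which
# pooled OLS is biased but the within estimator is not.
x = rng.normal(size=(n_states, n_years)) + state_effect[:, None]
y = state_effect[:, None] + 2.0 * x + rng.normal(scale=0.1, size=(n_states, n_years))

# Within transformation: subtracting each state's mean removes a_i.
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)

beta_fe = (x_dm * y_dm).sum() / (x_dm ** 2).sum()
print(beta_fe)  # close to the true slope of 2.0
```

Note that any regressor constant within a state over the sample (like NETFLOW below) is demeaned to zero by this transformation, which is why such variables must be dropped.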
The NETFLOW variable is the result of a large amount of painstaking work at HCFA (now known as CMS) (see Basu, 1996), and resource constraints prevent the production of this estimate for every year. At the present time, a NETFLOW estimate is in the public domain only for 1991. While preliminary evidence gathered by HCFA (now known as CMS) in subsequent work suggests that the NETFLOW percentages, i.e., the border-crossing patterns Americans follow in pursuit of health services, do not vary much over short time periods, I only have one observation of NETFLOW for each state, for 1991. Thus in (8), NETFLOW is perfectly collinear with the state-specific intercepts, and has to be dropped from the equation to permit fixed-effects estimation to take place. Of course, in this case interstate variation in NETFLOW is implicitly embedded in the state-specific intercepts of the fixed-effects approach. I discuss some issues related to the omitted time dummies a bit later in the paper.
Detailed results from the estimation of equation (8) (Model 5) are reported in Tables 16a-h. This estimation approach has the virtue of more degrees of freedom, and the state-specific fixed effects negate the standard-error bias that comes from having multiple observations from the same state. In some ways this model produces the results that are likely to be most credible to analytically sophisticated readers, but they are perhaps most credible when they are consistent with those of the year-specific models, which may suffer from degrees-of-freedom shortcomings but are also unbiased.
I say this because when year-specific dummies were added to (8) (results not shown), they dominated all other variables, i.e., practically all individual coefficients became insignificant in all equations. But in the absence of the time dummies, the explanatory power of equation (8) is quite strong, with highly significant overall F-statistics and with R2s consistently around 80% (see Tables 16a-h). Furthermore, if time were the truly dominant variable here, the individual year-specific equations, Models 2-4, would have no explanatory power, yet they clearly do (their overall Fs are quite significant, see Tables 13a-15h). Finally, as weaker but still corroborative evidence, while the time dummies were significant in Model 1, they did not swamp the significance of the other coefficients in that model, as seen from the number of significant variables in column 1 of Table 17. So, I think it is reasonable to conclude that there is some convincing evidence of cross-sectional relationships between some intuitive explanatory variables and our constructed Ps, Qs, and Xs, even though the explicit effect of time remains a complication to explore further. I will return to this issue in the concluding section on further research.
Tables 12-16 present a rather large number of coefficients and goodness-of-fit statistics to wade through, so for convenience I compile the salient qualitative findings in Table 17. The columns in Table 17 correspond to the models estimated, and the rows are the dependent variables, the Ps, Qs, and Xs, whose variances we are trying to explain. Each cell in Table 17 lists the independent variables that are significant in the particular equation. (I omit significant demographic variables in Table 17, since they are included as control variables, i.e., there are no hypotheses associated with them).
For Models 1 and 5, Table 17 includes elasticity estimates, derived from the coefficients and variable means, in parentheses. For example, Model 5's estimate of price per inpatient day (IPDPRICE) yields an income elasticity estimate of 0.67. This estimate suggests that if a state has 10% higher than average median household income, its average price per inpatient day would be 6.7% higher.
Note that very few estimated elasticities are larger than unity. (An elasticity larger than unity means that a 10% increase in the independent variable will lead to a greater than 10% change in the dependent variable.) Low elasticities are at least partially the result of estimating reduced form equations, which are designed to test for the net effect on the observed values of the dependent variables. So, a finding of no effect in our equations does not necessarily mean that the variable is irrelevant to the supply side, the demand side, or both. But it does mean that the variable's countervailing effects, if present, cancel each other out, or at least mute one another beyond the capacity of state-level models to discern. For many policy makers, the ultimate audience for policy analysis models, this net effect is perhaps the only one that matters, and that is why we focus upon it.
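The conversion from a linear-regression coefficient to an elasticity at the means, as used in Table 17, can be sketched as follows (the coefficient and means below are illustrative values chosen to reproduce the 0.67 example, not the report's actual estimates):

```python
# Elasticity at the means: e = b * (mean of X / mean of Y), where b is
# the linear-regression coefficient on X in the Y equation.

def elasticity_at_means(beta, x_mean, y_mean):
    """Convert a linear coefficient into an elasticity at the sample means."""
    return beta * x_mean / y_mean

# Hypothetical: coefficient of 18.0 on MEDINC (income in $1000s) in an
# IPDPRICE equation, mean income 30 ($30,000), mean price $805 per day.
e = elasticity_at_means(18.0, 30.0, 805.0)
print(round(e, 2))  # 0.67: 10% higher income ~ 6.7% higher price per day
```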
Looking at Table 17 as a whole, we observe that Model 1 does indeed uncover a larger number of significant effects on equilibrium Ps, Qs, and Xs, than the other models. This is expected, since for the reasons described in section V, the standard errors in Model 1 are probably biased downward. The year-specific models (2-4) usually, but not always, produce the fewest significant coefficients, which is likely due to the relatively small number of degrees of freedom. A cross-section effect has to be quite strong to survive in a regression with 50 observations and 22 candidate explanatory variables. The final general observation is that the inpatient equations perform better than the outpatient equations, though even the outpatient equations' explanatory power is strong in Model 5 (R2 > 50%, Fs significant at the .001 level or better), the one whose estimators are unbiased and have minimum variance.
One consistent theme in the results summarized in Table 17 is that medical supply capacity matters. A higher number of community hospital beds per thousand population was associated with lower prices per inpatient day and with more inpatient days and admissions per capita. Interestingly, in Model 5, the countervailing effects of beds cancel and there is apparently no net effect on inpatient expenditures, though the simpler models suggest that beds' utilization-increasing effect outweighs their apparent price-decreasing effect.
Physician supply matters as well, and is apparently unambiguously associated with higher inpatient hospital expenditures. Looking at expenditures through their per-day components, price per inpatient day (IPDPRICE) and number of days (IPDCAP), the mechanism by which physicians ultimately influence expenditures is not crystal clear, especially since the simple model and the year-specific models all suggest that more physicians lead to more inpatient days but have no effect on price per day, while Model 5 reports instead that more physicians do indeed lead to higher prices per day but fewer days per capita. Model 5's inpatient expenditure equation (IPEXPCAP) resolves this conflict by finding that the net effect of higher physician supply is to increase expenditures. All the other models' IPEXPCAP equations agree with this estimate of the net effect of physician supply on inpatient expenditures.
The underlying pattern of the effect of physician supply on inpatient expenditures is perhaps made a bit clearer when analyzing the per admission equations. There, Model 5 (along with Models 1 and 4) finds more physicians associated with higher prices per admission, but no model finds physicians associated with more admissions per capita. This finding is consistent with the per day inference from Model 5 alone: more physicians are associated with higher hospital prices, but not more hospital use. If the primary linkage between physicians and hospital expenditures per capita is through higher hospital prices, both per day and per admission, it is perhaps surprising that the specialist-generalist ratio had no discernible marginal effect in any inpatient equation. One might suspect that this could be due to high collinearity between SPECGEN and DOCCAP, though excluding DOCCAP from runs (not shown) did not yield significant SPECGEN coefficients either.
Turning to the role of physician supply and our outpatient outcomes of interest, we observe that Model 5 finds physician supply increasing overall outpatient expenditures, and the mechanism would appear to be through a larger number of outpatient visits (OPVISCAP). Model 5 discerned no outpatient price effect (OPPRICE), but the simpler models suggested that greater physician supply lowers outpatient prices.
Model 5 also finds that having more beds per capita lowers overall outpatient expenditures (OPEXPCAP), and the mechanism would appear to be by lowering outpatient visits per capita. Here the simpler models tended to find that beds increased outpatient expenditures. Taking Model 5 as the best single candidate for quantitative estimates, the relative elasticities suggest that increasing physicians per capita will increase outpatient expenditures by more, in percentage terms (elasticity = +2.71), than increasing bed supply will decrease them (elasticity = -0.64).
One may be tempted to conclude that policy and mandate variables are not important since none appear significant in Model 5, but this would be premature. Since Model 5 has state-specific dummy variables, it cannot admit variables that are perfectly collinear with them, i.e., variables that are unique to each state and invariant over time. If a state had, for example, laws requiring mental health coverage in each of our three years, 1991-1993, then that dummy variable is perfectly collinear with the state dummy in the fixed-effects model. As it turned out, all the policy and mandate variables except for AWPFOCPR and BC_DRUG had to be excluded from the fixed-effects estimation. These two variables had no significant effects on our Ps, Qs, and Xs, but because of the collinearity issue, we should focus on the year-specific models (2-4) for broader tests of the effects of policy variables.
There we find one consistent result: states with an insurance coverage mandate for mental health services have lower hospital expenditures, apparently through the mechanism of lower inpatient days per capita. Some might interpret this result to indicate that mental health coverage reduces the risk of physical health problems requiring hospitalization. Others might suggest that states which have passed mental health mandates also have other idiosyncratic features that led to essentially spurious correlation between the presence of a mental health mandate and lower inpatient community hospital usage and expenditures. In principle, a fixed-effects model could control for otherwise unmeasurable state-specific effects, but we could not simultaneously include the mental health mandate in the fixed-effects model, for the collinearity reason described above. Thus definitive testing of these competing hypotheses must await data sets with more years of data on each state.
We could obviously devote considerable space to discussing every single finding reported in Table 17, but it may be useful to remember that all these results are in a real sense preliminary. Model 5 is our single best candidate expression for the relations between determinants and outcomes, but given the swamping effect that the time dummy variables had on this model, we are most persuaded by its results when they agree with those of the simpler year-specific models in which time effects are removed by construction. When the results conflict, my instinct says to put relatively more weight on the fixed-effects model results, but we must remember that we had to subsume the implicit effects of patient border crossing and of some policy variables from that model for our current data set. Finally, while three years is adequate to begin to test our basic hypotheses, a longer panel data set, with at least 5 years of data for every state, would allow much more confidence to be placed in the apparent empirical regularities our current work has identified.
A Digression on the Potential Endogeneity of Physician Supply
A large literature on physician-induced demand exists and the issue remains controversial. I began this project with no intention of taking sides within that literature, but the results so far on the effects of physician supply might well be invoked by supporters of the physician-induced demand hypothesis, since higher physician supply per capita was associated with both higher inpatient and higher outpatient expenditures. A natural counter-argument to this interpretation is that physician supply is endogenous, since physicians might reasonably choose to locate where prices and expenditures are known to be high already. Since our data lend themselves to this issue and since it remains so controversial in health policy circles, I decided to test for the endogeneity of physician supply.
I used 2SLS to estimate structural supply and demand equations (like equations (1) and (2) on page 4) and performed Hausman specification tests on the DOCCAP (physicians per 1000 population) variable in each supply equation. Then, for the supply curves in which the endogeneity of DOCCAP could not be rejected, I re-estimated them along with their corresponding demand curves in a 2SLS framework as if there were three endogenous variables, Q, P, and DOCCAP. This entailed deriving predicted DOCCAP (DOCCAP-hat) by estimating DOCCAP as a reduced form on all other explanatory variables in the system of equations, and then using DOCCAP-hat as an instrument, along with similarly predicted P (P-hat) and predicted Q (Q-hat), in the estimation of structural supply and demand functions. In general, we use Q-hat and DOCCAP-hat as explanatory variables in the supply price equations, and P-hat in the demand quantity equations. The results of all this are summarized in Table 18, with the statistical output specific to each equation's estimation in Tables 19a-m.
There are three sets of structural supply and demand curves to consider: inpatient bed days, admissions, and outpatient visits. The first thing to notice from Table 18 is that the estimated inpatient supply curves are downward sloping (the Q-hat variables are negative and significant in their respective supply equations). This is not as surprising as it may at first seem, since hospitals have considerable excess capacity and downward sloping supply curves are consistent with unrealized economies of scale. But the truly surprising result reported in Table 18 (and Table 19h) is that the demand for outpatient visits is apparently upward sloping (P-hat is positively related to OPVISCAP). While upward sloping demand curves are unusual, they are not unheard of when perceived quality and price are related in complicated ways, as may be the case for many health services.
Considering the inpatient bed days supply curve specifically, we note that DOCCAP passed the Hausman test (the coefficient on the reduced-form residual for DOCCAP is insignificant, as reported in the first row of Table 18 and in Table 19c). That is, we could not reject the hypothesis that DOCCAP is exogenous to the supply price of inpatient days. For admissions and outpatient visits, however, DOCCAP failed the Hausman test, and thus one may infer that physician supply is indeed endogenous to admission supply prices and outpatient visit supply prices. Given this finding, we then re-estimated the structural supply and demand equations using predicted DOCCAP (DOCCAP-hat) as an instrument for DOCCAP on the RHS of the supply equations.
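The regression-based variant of the Hausman test used above (regress the suspect regressor on the instruments, then include its reduced-form residual in the structural equation and examine that coefficient) can be sketched on synthetic data; the variable names echo the paper's, but the data and coefficients are invented:

```python
import numpy as np

# Durbin-Wu-Hausman sketch: a clearly nonzero coefficient on the
# reduced-form residual signals that the regressor is endogenous.
rng = np.random.default_rng(2)
n = 150
z = rng.normal(size=n)                    # instrument (exogenous)
u = rng.normal(size=n)                    # unobserved shock
doccap = 0.8 * z + 0.7 * u + rng.normal(scale=0.3, size=n)  # endogenous
price = 1.0 + 2.0 * doccap + u            # u enters both equations

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: reduced form for DOCCAP on the instrument(s).
Z = np.column_stack([np.ones(n), z])
resid = doccap - Z @ ols(Z, doccap)

# Augmented structural equation: price on DOCCAP plus the residual.
X_aug = np.column_stack([np.ones(n), doccap, resid])
coefs = ols(X_aug, price)
print(coefs[2])  # large here because DOCCAP is endogenous by construction
```

In practice one would test the residual's coefficient with its standard error; the sketch only shows the mechanics of the augmented regression.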
Comparisons of Tables 19d and 19j show that for admissions, as Tables 19g and 19l do for outpatient visits, the variables that are significant determinants of supply prices do not change, but the coefficients do, especially for DOCCAP-hat vs. DOCCAP. The ultimate question that is relevant to the induced-demand controversy is: are the net estimates of physician supply on inpatient and outpatient expenditures different when we adjust for the endogeneity of DOCCAP? This can be answered by considering the following. Let
(9) P = a0 + a1*Q-hat + a2*DOCCAP-hat + a3*S + eP
represent the structural form of the supply curve, where a3 is a vector of coefficients and S is a vector of other supply variables (beds per capita, specialist-generalist ratio, etc.). Then let
(10) Q = b0 + b1*P-hat + b2*Z + eQ
be the structural form of the demand curve, again where b2 and Z are vectors. Expenditures are the product of price and quantity,
(11) X = P*Q = P(DOCCAP)*Q(P(DOCCAP)),
and so we can infer that the marginal effect of DOCCAP on expenditures is
(12) dX/dDOCCAP = Q*a2 + P*b1*a2,
and the elasticity would be (dX/dDOCCAP)*(DOCCAP/X) evaluated at the means for Q, P, X, and DOCCAP. Table 20 compares these calculations to the results from our reduced form models.
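The marginal-effect formula above has two channels: physician supply shifts the supply price directly (Q*a2) and, through the price change, shifts quantity demanded (P*b1*a2). A numerical sketch with hypothetical coefficients and means (a2, b1, P, Q, and DOCCAP below are invented, not Table 20's values; the positive b1 mimics the upward-sloping outpatient demand case):

```python
# Net marginal effect of physician supply on expenditures, from the
# structural coefficients. All values are illustrative.

def marginal_effect(a2, b1, P, Q):
    """dX/dDOCCAP = Q*a2 + P*b1*a2 (price channel + quantity channel)."""
    return Q * a2 + P * b1 * a2

def doccap_elasticity(a2, b1, P, Q, doccap):
    """Elasticity of expenditures with respect to DOCCAP, at the means."""
    X = P * Q
    return marginal_effect(a2, b1, P, Q) * doccap / X

# Hypothetical means and coefficients.
a2, b1 = 150.0, 0.2      # supply-price and demand-price responses
P, Q, doc = 800.0, 1000.0, 2.0

print(marginal_effect(a2, b1, P, Q))            # 174000.0
print(round(doccap_elasticity(a2, b1, P, Q, doc), 3))  # 0.435
```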
While the estimated magnitudes of physician supply effects differ between structural and reduced form formulations, qualitatively the results suggest the same conclusion: higher physician supply is associated with higher inpatient and outpatient expenditures, i.e., our results support the physician-induced demand hypothesis. The usefulness of this digression is that we tested for the result's robustness to the possibility that physician supply is endogenous to supply prices. We found that endogeneity cannot be rejected for admissions or outpatient visits, but the basic expenditure-increasing effect of greater physician supply, consistent with some kind of demand-inducing behavior, remains.
An Illustration of How the Estimation Results Might Be Used
We conclude our discussion of the results with an illustration of how the estimation results might be used to generate implications for specific states. We focus on Model 5, inpatient price per day (IPDPRICE), and the independent variable beds per 1000 (BEDCAP). Table 17 indicates that the elasticity of IPDPRICE with respect to BEDCAP was estimated to be -.33, i.e., a 10% increase in beds per 1000 was estimated to decrease price per inpatient day, on average across all states, by 3.3%. But we can use the model to estimate the effect for specific states by evaluating the estimated model with the variable values for particular states. Algebraically, the predicted percentage change is %dYit = (bk*dXk)/Yit, where bk is the estimated coefficient on explanatory variable Xk and dXk is the change being considered.
Thus the percentage change in IPDPRICE (Yit in general) in a particular state can be predicted to result from a certain change in a specific explanatory variable (BEDCAP, or Xk in general) using our model's estimates. Table 21 shows the results of this calculation for Alabama and for California. In each state's case, we used 1992 values and we constructed the change in X to correspond to a 10% decrease based on the particular state's observed value of beds per 1000. Table 21 shows that for a low IPDPRICE state like Alabama, a 10% reduction in beds would be expected to bring about a 4.7% increase in prices per inpatient day, whereas in a high IPDPRICE state like California, reducing beds by 10% is expected to increase prices per inpatient day by 0.4%. The national average effect of a 10% reduction in beds, based on the elasticity from the model evaluated at overall sample means, would be an increase in price per inpatient day of 3.3%. The rather different state-specific effects might be of interest to local policy makers who are asked to consider approving mergers or conversions that would likely lead to hospital closings and reductions in capacity. Similar kinds of calculations could be done for every significant variable in the model, and evaluated for each state individually.
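The state-specific calculation behind Table 21 can be sketched as follows. The coefficient and state values below are invented for illustration (they are not the values underlying Table 21), but they reproduce the qualitative pattern: the same 10% bed reduction raises prices more, in percentage terms, in a low-price, bed-rich state than in a high-price, bed-scarce one.

```python
# State-specific effect: %dY = b_k * dX_k / Y, evaluated at a state's
# own values of X_k and Y. All numbers are illustrative.

def pct_change_in_y(beta_k, x_k, y, pct_change_in_x):
    """Predicted proportional change in Y from a proportional change in X_k."""
    dx = x_k * pct_change_in_x
    return beta_k * dx / y

beta_beds = -60.0  # hypothetical coefficient on BEDCAP in the IPDPRICE equation

# Low-price, high-bed state vs. high-price, low-bed state, 10% bed cut:
low_price_state = pct_change_in_y(beta_beds, x_k=5.0, y=650.0, pct_change_in_x=-0.10)
high_price_state = pct_change_in_y(beta_beds, x_k=3.0, y=1200.0, pct_change_in_x=-0.10)

print(round(low_price_state * 100, 2))   # larger % price increase
print(round(high_price_state * 100, 2))  # smaller % price increase
```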
This project was designed to investigate whether one could, with existing data, construct state-level health service price, utilization and expenditure measures that were aggregated enough to be meaningful to policy makers and disaggregated enough to be empirically related to hypothesized determinants. For community hospital services, both inpatient and outpatient, this paper has presented evidence to suggest that the answer may well be yes. The paper described how the variables can be constructed, reported descriptive statistics and the results of multivariate analyses, interpreted some of the findings, and illustrated how the model might be used to answer specific questions about the likely effects of hypothetical policy changes or market trends in the context of specific states.
While this line of research would appear to be promising, there are limitations to keep in mind. At the present time our panel data set for all states extends only 3 years, and the effect of including year or time dummies in the preferred fixed-effects model has so far proved unsatisfactory. Longer time series, of at least 5 years, should be incorporated into the estimation framework before it can support concrete policy development estimates with a desirable level of confidence. Given the increasing competition in health services, both among inpatient and outpatient settings (e.g., ambulatory surgery centers), and among skilled nursing facilities, hospitals, and nursing homes, additional services should be analyzed to provide a richer picture of relevant prices and expenditures. Two candidates identified in the first year's study, ambulatory physician services and nursing home services, would appear to be feasible given current data availability, though somewhat more difficult than the three services we focused on in this paper.
In addition, some important policy variables are probably measured with error, and thus may not have received fair tests in the preliminary work shown here. For example, we proxy the importance of managed care in a state with the fraction of the population enrolled in an HMO (license holders and respondents to Interstudy's surveys). With the proliferation of different types of managed care organizations today, and the importance of self-insured plans whose enrollees' care is managed but who may not be counted among managed care companies' enrollees, the use of this available HMO variable is possibly suspect, and hinges upon the assumption that HMO enrollment is highly correlated with the extent of actual managed care in an area. This assumption will be tested in future work.
Still, the model and framework may be most useful in the short run as a first-pass reconnaissance flight, potentially indicating areas that merit closer research scrutiny. For example, we found that increased physician supply was associated with increased inpatient and outpatient expenditures, due to an apparent price effect on the inpatient side and a utilization effect on the outpatient side. Research devoted to specific studies at the micro level might pursue the question, "WHY is physician supply linked to inpatient prices and outpatient use?" This may ultimately be much more productive than finally establishing, definitively and expensively, whether physicians do indeed respond to payment incentives in managed care contracts.
In addition, for policy purposes, developing a stylized metric for the impacts of demand and supply determinants on Ps and Qs may aid in forecasting the cost savings that are feasible from certain policy or market trends, or the cost increases that are inevitable given other trends. For example, suppose a properly specified managed care market share variable turns out to be significantly and negatively related to inpatient and outpatient expenditures, primarily through utilization effects on inpatient care and utilization controls on outpatient care. Then this model could simulate how different managed care growth paths might change prices, usage patterns, and expenditures in any state.
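A simulation of that kind amounts to applying estimated price and utilization responses along a hypothesized market-share path and recombining them as expenditure = price × quantity. The sketch below is purely illustrative: the elasticities, baseline price, and baseline use rate are invented placeholders, not estimates from this report.

```python
# Hypothetical responses: proportional change in price (P) and
# utilization (Q) per percentage-point rise in managed care share.
EPS_P = -0.001   # price falls 0.1% per point of share
EPS_Q = -0.004   # utilization falls 0.4% per point of share

def project(base_price, base_use, share_increases):
    """Project P, Q, and expenditure E = P * Q along a growth path."""
    path = []
    for d_share in share_increases:
        p = base_price * (1 + EPS_P * d_share)
        q = base_use * (1 + EPS_Q * d_share)
        path.append({"d_share": d_share, "price": p, "use": q, "spend": p * q})
    return path

# Example: per capita inpatient spending as managed care share
# rises 0, 10, and 20 percentage points above baseline.
projection = project(base_price=8000.0, base_use=0.12,
                     share_increases=[0, 10, 20])
```

With estimated rather than invented elasticities, the same loop could be run state by state to compare growth scenarios.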
Finally, the model illustrated here could indicate whether the effects of particular variables, like beds per 1,000 population, are more likely to work through price effects or through utilization effects, and whether the net effect, in the nation as a whole or in a particular state, is large. Findings of this kind may be useful to all purchasers (as well as to some providers!), public and private alike, as they adopt and amend utilization management theory and practice.
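Because expenditures are the product of price and quantity, effects estimated in log form decompose additively: ln E = ln P + ln Q, so a variable's expenditure effect is the sum of its price effect and its utilization effect. A minimal illustration, using invented coefficients rather than the report's estimates:

```python
import math

# Invented semi-elasticities for one additional bed per 1,000 population
b_price = -0.02   # effect on ln(price): prices fall about 2%
b_use = 0.05      # effect on ln(use per capita): use rises about 5%

# Net effect on ln(expenditure) is simply the sum of the two channels
b_spend = b_price + b_use

# Equivalent statement in levels: the multiplicative factors compound
factor = math.exp(b_price) * math.exp(b_use)   # equals exp(b_spend)
```

Here the utilization channel dominates, so the net expenditure effect is positive even though the price effect is negative; the decomposition makes that kind of offset explicit.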
It may seem obvious but bears repeating: even under the best of data set matching circumstances, all our Ps and Qs are highly aggregated. This is appropriate for policy analysis, especially since the explanatory variables and estimated parameters are the underlying (and measurable) fundamentals of our health system. The relations established between prices, quantities, and demand/supply determinants like insurance coverage rates and provider capacity ratios could someday be interpreted as stylized facts, suggestive of deep and stable relationships (or the absence of the same) within our health care delivery system. This is one of the few model frameworks currently in the public domain, if not the only one, that potentially links provider supply, consumer demand, and health service use, price, and overall expenditures. While the level of aggregation suits policy purposes, however, it must be remembered that far more detailed models are required for something as practical as setting capitation rates. Most real-world health service markets cover much smaller geographic areas than a state, even if policy makers and analysts sometimes speak and act as if the opposite were true.
American Hospital Association. National Hospital Panel Survey. Chicago. 1995.
American Hospital Association. Annual Hospital Statistics. Chicago. 1996.
Basu J. "Border Crossing Adjustment and Personal Health Care Spending by State." Health Care Financing Review 18(1): 215-236, Fall 1996.
Bernstein, J. "Policy Implications of Physician Income Homeostasis," Journal of Health Care Finance 24(4): (Summer 1998), pp. 80-86.
Department of Veteran Affairs, Summary of Medical Programs FY 1996. http://www.va.gov/sumedpr/fy96/sumfy96.htm.
Friedman, B., and A. Elixhauser. "The Changing Distribution of a Major Surgical Procedure Across Hospitals: Were Supply Shifts and Disequilibrium Important?" Health Economics. 4(4), (Jul-Aug. 1995), pp. 301-14.
Greenberg L, Cultice JM. "Forecasting the Need for Physicians in the United States: The Health Resources and Services Administration's Physician Requirements." Health Services Research 31(6): 723-737, February 1997.
Greene WH. Econometric Analysis. Prentice Hall, Englewood Cliffs, NJ, 1993.
Health Resources and Services Administration, Bureau of Health Professions. Ninth Report to Congress. Washington DC. 1993.
Levit KR, Lazenby HC, Cowan CA, Letsch SW. "Health Spending by State: New Estimates for Policy Making." Health Care Financing Review 12(3): 7-26, Fall 1993.
Levit KR, Lazenby HC, Cowan CA, Won DK, Stiller JM, Sivarajan L, Stewart MW. "State Health Expenditure Accounts: Building Blocks for State Health Analysis." Health Care Financing Review 17(1): 201-254, Fall 1995.
Levit KR, Lazenby HC, Braden BR, Cowan CA, McDonnell PA, Sivarajan L, Stiller JM, Won DK, Donham CS, Long AM, Stewart MW. "National Health Expenditures, 1995." Health Care Financing Review 18(1): 175-214, Fall 1996.
Marsteller, Jill A., Randall Bovbjerg, Diana Verrilli, and Len Nichols. "The Resurgence of Selective Contracting Restrictions," Journal of Health Politics, Policy and Law, v. 22 # 5, pp. 1133-1189, October 1997.
Nichols, Len M. "Exploring the Feasibility of Linking Health Expenditure and Utilization Data," Final Report for DHHS Contract No. HHS-100-95-0021, July 1997.
Physician Payment Review Commission. Annual Report to Congress, 1997. Washington DC.
Pindyck, Robert S. and Daniel L. Rubinfeld. Econometric Models and Economic Forecasts. McGraw-Hill (New York: 1991).
Scott, A. and A. Shiell. "Analysing the Effect of Competition on General Practitioners' Behavior Using a Multilevel Modelling Framework," Health Economics 6(6), (Nov-Dec. 1997), pp. 577-88.
Simon, Carol J., David Dranove and William D. White. "The Effect of Managed Care on the Income of Primary Care and Specialty Physicians: Part I." Health Services Research v. 33 # 3, pp. 549-570 (August 1998).
Stiglitz, Joseph E. "The Causes and Consequences of the Dependence of Quality on Price," Journal of Economic Literature 25:1 (March 1987), pp. 1-48.
Vector Research Incorporated, General Services Demand Model 1.0: Technical Summary of Model Development, Ann Arbor MI, 1995.
- Tables 1 and 2 in PDF
- Tables 3-10
- Table 11
- Table 12
- Table 13
- Table 14
- Table 15
- Table 16
- Table 17
- Tables 18-20
- Table 21