The estimate of uninsured children provided annually by the March supplement to the CPS has become the most widely accepted and frequently cited estimate of the uninsured. At this point, only the CPS provides annual estimates with relatively little lag, and only the CPS is able to provide state-level estimates, albeit with considerable imprecision.(20) But what, exactly, does the CPS measure? The renewed interest in the CPS as a source of state-level estimates for CHIP makes it important that we answer this question.(21) The CPS health insurance questions ask about coverage over the course of the previous calendar year, implying that the estimate of uninsurance identifies people who had no insurance at all during that year. Nevertheless, the magnitude of the estimate has moved researchers and policymakers to reinterpret the CPS measure of the uninsured as an indicator of uninsurance at a point in time.(22) How can this interpretation be reconciled with the wording of the questions themselves, and how far can we carry this interpretation in examining the time trend and other covariates of uninsurance? We consider these questions below.
a. In What Sense Does the CPS Measure Uninsurance at a Point in Time?
There is little reason to doubt that the CPS respondents are answering the health insurance questions in the manner that was intended. That is, they are attempting to report whether they ever had each type of coverage in the preceding year. We can say this, in part, because the health insurance questions appear near the end of the survey, after respondents have reported their employment status, sources and amounts of income, and other characteristics for the preceding year. By the time they get to the health insurance questions, respondents have become thoroughly familiar with the concept of “ever in the preceding year.” More importantly, however, there is empirical evidence that reported coverage is more consistent with annual coverage than with coverage at a point in time. Consider Medicaid, for example. Table 4 compares CPS and SIPP estimates of children under 19 who were reported to be covered by Medicaid in 1993 and 1994. The CPS estimates match the SIPP estimates of children ever covered during the year very closely, whereas they exceed the SIPP estimates of children covered at a point in time by 26 to 28 percent.(23)
|CPS|SIPP Annual Ever|SIPP Point in Time|CPS as Percent of SIPP: Annual Ever|CPS as Percent of SIPP: Point in Time|
|NOTES: The SIPP annual estimates refer to the federal fiscal year. The point-in-time estimates refer to September of each year. The CPS estimates refer to the calendar year. Both sets of estimates were obtained by tabulating public use data files. The CPS estimates are from the March 1994 and March 1995 surveys. The SIPP estimates are from the 1992 panel.|
|The SIPP estimates here actually understate what SIPP finds, as these estimates refer to the survivors of the population sampled in early 1992. SIPP also understates births. SIPP point-in-time estimates made with the calendar month weights would be higher, as the calendar month weights are controlled to the full population. Annual-ever estimates cannot be produced for the calendar month samples, however.|
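The gap between annual-ever and point-in-time counts follows directly from turnover in coverage. A toy simulation can make this concrete; the spell probability and spell lengths below are purely illustrative assumptions, not SIPP parameters, and serve only to show why the number of children ever covered during a year necessarily exceeds the number covered in any single month:

```python
import random

random.seed(1)

# Toy simulation under illustrative assumptions (not actual SIPP parameters):
# because children move on and off Medicaid during the year, the count ever
# covered in the year exceeds the count covered in any one reference month.
N = 100_000
ever_covered = 0
covered_in_reference_month = 0
REFERENCE_MONTH = 8  # month index within the year, 0 = first month

for _ in range(N):
    if random.random() < 0.25:          # assumed chance of any spell this year
        length = random.randint(6, 12)  # assumed spell length, in months
        start = random.randint(0, 11)
        months = {(start + i) % 12 for i in range(length)}
        ever_covered += 1
        if REFERENCE_MONTH in months:
            covered_in_reference_month += 1

print(ever_covered > covered_in_reference_month)             # True
print(round(ever_covered / covered_in_reference_month, 1))   # annual-ever/point-in-time ratio
```

With these assumed spell lengths (6 to 12 months), the simulated annual-ever count runs roughly a third above the point-in-time count, the same order of magnitude as the 26 to 28 percent gap in Table 4; shorter assumed spells would widen the gap.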
How, then, can the frequency with which the CPS respondents report no coverage during the year imply rates of uninsurance that are double what we would expect for children uninsured all year and about equal to what we would expect for children uninsured at a point in time? The answer, we suggest, lies in the extent to which coverage for the year is underreported. That is, CPS respondents answering questions in March either forget or otherwise fail to report health insurance coverage that they had in the previous year, and they do so with greater frequency than respondents to other surveys reporting more current coverage. Presumably, coverage that ended early in the year is more likely to be missed than coverage that ended later in the year or continued to the present. Coverage that started late in the year may be susceptible to underreporting as well, with respondents who are uncertain about the starting date having some tendency to project it into the current year. With more than 90 percent of the population having had coverage for at least some part of the year, only a small fraction--about 8 percent of those with any coverage--need to fail to report their coverage to account for the observed result.
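The arithmetic behind this claim can be sketched with illustrative figures; the rates below are assumptions chosen only to match the rough magnitudes discussed in the text, not survey estimates:

```python
# Back-of-the-envelope sketch; all rates are assumed for illustration.
population = 100.0                 # percent of children
uninsured_all_year = 7.0           # assumed true full-year uninsured rate
point_in_time_uninsured = 14.0     # assumed true point-in-time rate (about double)

covered_some_of_year = population - uninsured_all_year   # 93.0 percent
underreport_share = 0.08           # share of the ever-covered reporting no coverage

# Apparent uninsured in a retrospective annual survey: the truly uninsured
# all year, plus covered respondents who failed to report any coverage.
apparent_uninsured = uninsured_all_year + underreport_share * covered_some_of_year
print(round(apparent_uninsured, 1))   # → 14.4, close to the point-in-time rate
```

Under these assumptions, an 8 percent underreporting rate among the ever-covered is enough to double the apparent full-year uninsured rate, landing it at the point-in-time level.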
Is it simply by chance that CPS respondents underreport their coverage in the previous year to such an extent that the number who appear to have had no coverage at all rises to the same level as independent estimates of the number who were without coverage at a point in time? Or is the phenomenon the result of a more systematic process that in some sense ensures the observed outcome? The answer is important because the more the phenomenon is due to chance, the less confident we can be that the CPS estimate of the uninsured will track the true number of uninsured children (or adults) over time. Similarly, the more the resemblance to a point-in-time estimate is due to chance, the less we can rely on the CPS estimate of the uninsured to tell us how uninsurance at a point in time varies by children’s characteristics--including state of residence.
b. Covariates of Uninsurance
Time is a critical covariate of uninsurance. The CPS measure of the uninsured is used by many policy analysts to assess the trend in uninsurance for the population as a whole and for subgroups of that population. But, in truth, how well does the CPS measure track the actual level of uninsurance? There is no definitive source on the uninsured, but both the NHIS and the SIPP provide annual estimates that can be compared with the CPS. Do these estimates show the same trends over time, even though the estimates themselves may differ? The estimates presented in Table 1 are inconclusive in this regard. The CPS time series is clearly less volatile than the NHIS time series, with the latter showing large swings between 1993 and 1994 and between 1994 and 1995. Between 1995 and 1996, the CPS uninsured rate shows an upswing that observers have interpreted as a response to the implementation of welfare reform. The NHIS estimate for 1996 predates this event, as we explained earlier. Because the NHIS was redesigned and its health insurance questions revised in 1997, the continuation of the NHIS time series once the 1997 data are released will shed little if any light on the validity of the CPS series. The SIPP data are too limited to provide a useful point of comparison.(24)
Even if the CPS estimate of the uninsured were a sufficiently reliable proxy for point-in-time uninsurance to provide an accurate indicator of trends, this would give us no assurance that the CPS measure can accurately reflect the relationship between point-in-time uninsurance and covariates other than time. We have already presented evidence that, for reasons related, no doubt, to the measurement of uninsurance as a residual, combined with the peculiar reference period of the survey, the CPS overstates the proportion of infants who are uninsured (see Table 3). How confident can we be that the CPS can provide adequate estimates of the relationship between children’s uninsurance and very complex variables, such as Medicaid eligibility? This is an important question, but one that will require more research to answer.
As a final note, the success of verification questions in the CTS and NSAF is prompting consideration of including such questions in the SIPP and the CPS. In light of our discussion of the CPS measure, we must wonder what the impact would be of introducing a verification question into the CPS. Rather than improving the point-in-time representation of the CPS, might this not move the CPS much closer to estimating the number of people who truly were uninsured throughout the preceding year? Arguably, this would reduce the policy value of the CPS measure because uninsurance throughout the year is too limited a phenomenon to be embraced as our principal measure of uninsurance. Of course, policy analysts could choose not to use the verification question, but this would only make it that much more difficult to assert that the data being reported in the CPS provide a reliable measure of uninsurance at a point in time.