(1)As of May 1999, data from only the first two of the 12 waves have been released. These data cover five calendar months in early 1996.
(2)A description of the CTS is presented in Rosenbach and Lewis (1998).
(3)A brief description of the design of the NSAF is provided by Kenney, Scheuren and Wang (1999), which can be found on the Urban Institute’s web page at http://newfederalism.urban.org/nsaf/design.html.
(4)For the CPS in 1996, the rate of uninsurance among children under 18 is 14.8 percent, which is 0.3 percentage points lower than the rate for children under 19. We assume that a comparable differential between these two alternative definitions of children exists across the other years.
(5)The MEPS instrument includes direct questions about periods of uninsurance in the past. The number reported in Table 1 and cited frequently in AHCPR reports is based on measuring uninsurance as a residual.
(6)Brennan et al. (1999) report that without the verification question the estimated proportion of people under 65 who lacked health insurance would have been “slightly greater than the uninsurance rate published by the Census Bureau.” If this applied to children under 18 as well, the uninsurance rate without the verification question would have been slightly above the 14.8 percent figure that the Census Bureau estimated from the March 1997 CPS.
(7)This last approach is the one used by the SIPP. The estimates in Table 2 of children ever uninsured or children uninsured for an entire year were constructed by aggregating individual monthly results in which uninsurance was measured as a residual.
(8)Technically, a child who was covered for only the first day and last day of a two-month period should be reported as covered for each of the two months. That is, despite a 58-day period of uninsurance, the child would not be identified in the SIPP as uninsured at all if the child (or parent) answers the SIPP questions correctly.
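The monthly classification rule described in this note can be sketched as follows. This is an illustration of the convention, not actual SIPP processing code, and the child and dates are hypothetical.

```python
from datetime import date

def months_covered(covered_days, year, months):
    """Classify each month as covered if the person held insurance on
    at least one day of that month (the convention the SIPP monthly
    questions are meant to capture)."""
    return {m: any(d.year == year and d.month == m for d in covered_days)
            for m in months}

# Hypothetical child covered only on the first day of January and the
# last day of February 1996.
days = {date(1996, 1, 1), date(1996, 2, 29)}
status = months_covered(days, 1996, [1, 2])
# status reports both months as covered, even though the child was
# uninsured for the 58 days in between.
```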
(9)Measuring insurance coverage as it is done in the CPS, SIPP, and NHIS involves, in effect, filling in for every sample household a matrix that includes a column for every distinct type of insurance that the survey takers wish to measure and a row for every household member. Failure to identify every household member who is included under a particular type of coverage may result in one or more members being classified as uninsured. The potential problems with this approach would be less severe if the survey instruments walked through the entire matrix cell by cell, but in the interest of saving valuable time in surveys that serve many purposes, they do not. Without a verification question, then, there is no way to determine whether a household member who appears to be uninsured was simply overlooked under a particular coverage type.
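The matrix logic in this note, and the way uninsurance falls out as a residual, can be made concrete with a small sketch. The household, members, and coverage types below are hypothetical, not drawn from any actual survey instrument.

```python
# Rows are household members, columns are coverage types; a member is
# classified as uninsured only as a residual, i.e. when no cell in the
# member's row has been filled in.
coverage_types = ["employer", "medicaid", "medicare", "other"]

household = {
    "parent": {"employer": True},
    "child": {},  # never named under any plan
}

def classified_uninsured(member_row):
    """A member looks uninsured if no coverage cell is filled in."""
    return not any(member_row.get(c, False) for c in coverage_types)

# If the respondent forgot to name the child under the employer plan,
# the child is misclassified as uninsured; only a verification question
# would catch the omission.
```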
(10)The CPS and SIPP questionnaires use state program names in addition to the more generic “Medicaid,” a fairly common practice in the major surveys.
(11)Hypothetically, someone could qualify for TANF without being eligible for Medicaid, which suggests that imputing Medicaid to all respondents who report TANF but do not report Medicaid may not always be correct. In reality, however, there are probably very few TANF recipients who are not covered by Medicaid.
(12)This is a net figure representing the number of people actually missed by the census less those counted twice.
(13)The Census Bureau does not publish population estimates and projections that have been adjusted for the 1990 census undercount, but it uses adjusted estimates to weight its surveys, and it publishes the estimates of census undercount that are used to derive the sample weights. These estimates of the census undercount are available by age, sex, race, Hispanic origin, and state. Users can add these estimates of the census undercount to the published population estimates and projections in order to obtain undercount-adjusted figures.
(14)The CPS, which provides state-level estimates of unemployment, includes state as a dimension of its post-stratification. The NSAF employed state population controls for 13 states.
(15)Bureau field staff conduct the monthly employment questionnaire before beginning the March supplement. The response rate to the employment questionnaire was 93 percent in March 1997, but 9 percent of these respondents refused or otherwise failed to complete the supplement, producing the indicated total response rate (91 percent of 93 percent).
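The total response rate cited in this note follows from multiplying the two component rates:

```python
# Worked arithmetic for the rates cited above: 93 percent of households
# completed the monthly employment questionnaire, and 91 percent of
# those (100 - 9) went on to complete the March supplement.
employment_rate = 0.93
supplement_rate = 1 - 0.09
total_response_rate = employment_rate * supplement_rate
# total_response_rate is about 0.846, i.e. roughly 85 percent.
```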
(16)We exclude surveys conducted by mail. Except for the decennial census, which uses telephone and in-person methodologies to complete interviews with the more than 35 percent of households that fail to return their questionnaires, mail surveys tend to be very limited in scope. Furthermore, their contribution to research on the uninsured has been minimal at best. We also exclude self-administered questionnaires that are included as part of an in-person interview. These represent a distinctly different mode and one that has proven effective as a means of collecting data on sensitive topics, such as drug use, but they have little relevance to the measurement of health insurance coverage.
(17)Both the NSAF and the CTS were telephone surveys. The NSAF included a complementary sample of nontelephone households. The CTS did so in the 12 intensive sites and relied on statistical adjustments to compensate for households without telephones elsewhere.
(18)The fact that nearly all respondents were introduced to the CPS with in-person interviews may reduce the mode differences between the telephone and in-person interviews.
(19)This is in addition to the 16 percent of reported participants whose Medicaid coverage was logically imputed or edited, as explained earlier.
(20)Estimates from the NHIS and the SIPP typically have not been released until more than two years after the end of the data year. With the move to CAPI and other changes, the NCHS has goals of reducing the lag in releasing NHIS data to as little as six months. There are no such objectives for the SIPP, however, and until an overlapping panel design is restored in 2001 or 2002, the representativeness of the SIPP sample over time presents a serious concern for the measurement of trends.
(21)Effective with the 1997 redesign, the NHIS will be able to provide state-level estimates, but for many if not most states the precision of these estimates will be too low to support policy analysis at the state level. It is likely that the SIPP will move to a fully state-representative design analogous to the CPS but almost certainly not before 2004.
(22)See, for example, the seminal paper by Swartz (1986) and, for a more recent perspective, Bilheimer (1997).
(23)The SIPP estimates of annual coverage are derived from responses in three or four consecutive interviews, so their face validity is high. The SIPP estimates refer to a somewhat smaller universe of children than the CPS estimates, which lowers them by a few percentage points.
(24)Estimates for 1995 would have to come from the 1993 SIPP panel, which shows significantly more poor children than the 1992 panel and is likely to show an upswing in uninsured children relative to the 1992 panel. SIPP data covering the final months of 1996--to pick up the implementation of welfare reform--will not be released for several months.
(25)In the CTS, only people who did not report employer-sponsored coverage were asked the questions on Medicaid coverage. Other surveys indicate that a nontrivial fraction of those who report Medicaid also report another source of coverage at the same time, so it is likely that the incidence of Medicaid enrollment among those respondents who were not asked the question is not negligible. Even with some allowance for this, however, the reported Medicaid enrollment is too low to suggest that better Medicaid reporting accounts for the relatively low estimate of the incidence of uninsurance. The quality of Medicaid reporting in the NSAF has not yet been documented.
(26)The value of a home is not counted in determining Medicaid eligibility or eligibility for the other major means-tested entitlement programs.
(27)Another issue with respect to the reporting of age in the HCFA (now known as CMS) data is discussed in Section E.2.b.
(28)Lewis (1997) found that about 10 percent of the households reporting the receipt of food stamp benefits in January 1992 were simulated to be ineligible. Official estimates placed the error rate in food stamp eligibility determinations at about 3 percent at the time, suggesting that errors in the simulation algorithm or the survey data used by the model accounted for most of the seemingly ineligible participants.
(29)For example, infants who no longer appear eligible but would be eligible if they had enrolled earlier should be included in the count of eligibles if the seemingly ineligible infants are added to the numerator (and denominator).
(30)Both the CPS and the SIPP differentiate between respondents, who include all household members 15 and older, and younger members of the household. Coverage is ascertained separately for each respondent, whether directly or by proxy, but coverage for children is measured with questions that ask who else is included under each respondent’s plan.
(31)Children born on or before September 30, 1983, are the exception. An important question, then, is whether the uninsured children who report parents covered by Medicaid are themselves eligible for Medicaid. If they are eligible, then the likelihood is high that they were in fact covered by Medicaid, and their true status was simply misreported. On the other hand, when they are not eligible, and their parents are neither pregnant nor covered by SSI, then perhaps it is the parents’ coverage that is misreported.
(32)Both populations include the same 9,271,000 uninsured children in September 1993, but for the dynamic population it is inappropriate to divide this number by the total population to obtain a proportion uninsured at a point in time because the dynamic population total includes people who would not have been defined as children in September 1993.
(33)Like all panel surveys, SIPP data exhibit a pronounced “seam” effect. Reported changes occur disproportionately between rather than within the four-month reference periods.
(34)Given the frequency of SIPP interviews, the underrepresentation of births is more likely the result of attrition by new mothers than the underreporting of births. If so, new mothers and any other children they may have are being underrepresented along with the newborns.
(35)Selecting all spells that were active during a 12-month period yields a much less skewed distribution than limiting the universe to spells that were active during a single month. Spells of one month duration could have started in any of 12 months, spells of two months duration could have started in any of 13 months, and spells of 12 months duration could have started in any of 23 months. In this case, then, spells of 12 months in length are represented at only about twice the relative frequency of one-month spells, rather than 12 times.
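The length-bias arithmetic in this note reduces to a simple counting rule, sketched here as an illustration: a spell lasting d months is active at some point during an observation window of w months if it starts in any of (w + d - 1) months.

```python
def possible_start_months(duration, window):
    """Number of start months for which a spell of the given duration
    (in months) overlaps an observation window of the given width."""
    return window + duration - 1

# One-month window: 12-month spells appear 12 times as often as
# one-month spells (12 possible start months versus 1).
ratio_month = possible_start_months(12, 1) / possible_start_months(1, 1)

# Twelve-month window: the ratio falls to 23/12, roughly 2.
ratio_year = possible_start_months(12, 12) / possible_start_months(1, 12)
```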