In the introductory chapter we highlighted the following differences in survey design and methodology as bearing on survey estimates of income: subannual versus retrospective annual income data collection, the breadth and depth of income questions, and strategies for measuring income in the context of a rolling sample. Comparing survey estimates with an eye to these aspects of survey design raised more questions than it answered; our findings suggest important questions for follow-up research.
SIPP is the only survey that collects income at the monthly level, and the annual estimates prepared for this study were built up from monthly amounts. SIPP's approach is clearly effective for program participation, where the SIPP estimates exceed those of the other surveys by a wide margin. Given this, why does SIPP end up with 10 percent fewer dollars of total earned income and total unearned income than the CPS, and even 6 percent less total income than NHIS?
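The distinction between the two collection designs can be made concrete with a minimal sketch. The person identifiers and dollar amounts below are hypothetical placeholders, not actual survey data; the sketch only illustrates summing monthly reports to a person-level annual total (the SIPP-style construction used for this study) alongside a single retrospective annual figure (the design of the other surveys).

```python
# Minimal sketch (hypothetical data): annual income built up from
# monthly reports versus a single retrospective annual amount.
from collections import defaultdict

# Hypothetical monthly earnings reports: (person_id, month, amount).
monthly_reports = (
    [("p1", m, 2500.0) for m in range(1, 13)]   # p1 earns all year
    + [("p2", m, 1800.0) for m in range(1, 7)]  # p2 earns Jan-Jun only
)

# SIPP-style construction: sum monthly amounts to an annual total.
annual_from_monthly = defaultdict(float)
for person, _month, amount in monthly_reports:
    annual_from_monthly[person] += amount

# A retrospective design instead asks for one annual figure
# (hypothetical responses shown here).
retrospective_reports = {"p1": 30000.0, "p2": 10800.0}

for person in sorted(annual_from_monthly):
    print(person, annual_from_monthly[person], retrospective_reports[person])
```

In this idealized example the two constructions agree exactly; the question raised in the text is why, in practice, the monthly build-up yields fewer total dollars than the retrospective reports.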
Given that SIPP employs an entirely different approach to collecting income data than any of the other surveys, we cannot conclude from these results that the SIPP approach is flawed; nor can we conclude that the comparatively low estimates of total income are the result of poor implementation. The cause may be both or neither. Perhaps the SIPP design is more effective among people with erratic income flows and less effective among those with more regular income flows. Alternatively, perhaps the SIPP field staff has focused on getting good data from low-income families, with a weaker emphasis on higher-income families. The lower capture of income could also be a function of the dynamic character of the SIPP sample, which SIPP estimation procedures may not handle properly. With their similar panel designs but different approaches to measuring income, SIPP and MEPS could provide useful comparative data on these alternative approaches were it not for the fact that the MEPS data are post-stratified to the distribution of poverty status in the CPS. At the same time, however, we should not dismiss the possibility that asking retrospective questions of a fixed sample (the design element shared by the other four general population surveys) may impart a bias of its own, but in an upward direction. That is, SIPP's shortfall may be overstated. The four general population surveys that share the retrospective approach yield surprisingly close estimates of total income despite widely varying approaches to measurement.
Understanding why SIPP estimates are so much lower than the other surveys is extremely important as the Census Bureau moves forward with a redesign of SIPP that may change many of the features that are unique to SIPP. It is also an exceedingly challenging question from a methodological perspective.
With just a single question asked at the family level, NHIS was able to capture 95 percent as much total income as the CPS. ACS captured 98 percent as much as the CPS with seven questions, although these were asked of each person. This suggests that large batteries of questions may not generate much additional total income. Instead, their value lies elsewhere, and that value may or may not be relevant to the intended use of income data in a given survey. Detailed questions appear to produce less rounding and, presumably, better accuracy at the family and person level, as well as the source detail that may be needed for simulating program eligibility. It is also apparent that the impact of additional questions is not uniform across the income distribution. Compared to the CPS, NHIS misses proportionately more income in the bottom quintile than in quintiles two through four, and one result is a higher estimated poverty rate after differences in family definition are taken into account (see below).
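The capture comparisons above are simple percent-of-benchmark ratios. As a sketch, the totals below are hypothetical placeholders (not the actual survey aggregates), chosen only so that the resulting percentages match the 95 and 98 percent figures cited in the text.

```python
# Sketch of the capture-ratio comparison: a survey's total income
# expressed as a percentage of the CPS benchmark total.

def capture_ratio(survey_total: float, benchmark_total: float) -> float:
    """Survey total income as a percent of the benchmark total."""
    return 100.0 * survey_total / benchmark_total

# Hypothetical totals (billions of dollars), for illustration only.
cps_total = 6000.0
nhis_like = 5700.0   # single family-level question
acs_like = 5880.0    # seven person-level questions

print(round(capture_ratio(nhis_like, cps_total)))  # 95
print(round(capture_ratio(acs_like, cps_total)))   # 98
```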
A critical issue for income measurement in a rolling sample is whether a rolling or a fixed reference period for income is preferable. Policy applications may favor one over the other, depending on the type of application. For example, a rolling reference period maintains a uniform lag between the income reference period and statuses measured at the time of the interview (such as health insurance coverage or program participation). Equally important, however, is which approach will yield better data. Does the quality of income data for a fixed reference period deteriorate as the interview date moves farther from the reference period? Alternatively, can respondents report income for the past 12 months as accurately as they can for the previous calendar year? Will they fall back on reporting their incomes for the prior calendar year (or show other evidence of diminished quality, such as higher non-response or increased variance)?
Our examination of income reporting by month (ACS) or quarter (NHIS) turned up little evidence that respondents in either survey had difficulty with the income concept in ways that were reflected in reporting patterns over time. Perhaps the low rate of inflation and slow pace of economic change during 2002 and 2003 contributed to our null findings, and a similar assessment conducted with survey data for 2008 might yield different results. For now, our questions about the choice of reference period remain open.