Income Data for Policy Analysis: A Comparative Assessment of Eight Surveys. APPENDIX A: ANNOTATED BIBLIOGRAPHY

12/23/2008

This annotated bibliography was prepared as background to the development of an analysis plan for Assessing the Quality of Income Data across Surveys.  In developing the bibliography we surveyed the literature on topics related to methodological issues in measuring income, validation and benchmarking of income data, estimates of accuracy in reported income, and comparisons of income data across household surveys—particularly the eight surveys included in the project.  Starting with the references cited in the Department of Health and Human Services working paper (Turek 2005) that provided the conceptual foundation for this project, we extended the list of citations by consulting with the members of the project’s Technical Advisory Group and with the Mathematica staff most familiar with the literature on income measurement.  This was particularly helpful in expanding our search to encompass the unpublished or “gray” literature.  Indeed, many of our final entries are drawn from this literature.

We restricted the scope of our search to the past three decades, but in going back as far as the late 1970s we recognized that changes in survey design, content, and processing may have reduced the relevance of particular types of findings.  The earliest reference is from 1977 and the most recent reference is from 2008.

We obtained copies of all potential entries in order to assess their suitability for inclusion and, for those that were selected, to prepare their annotations.  Many of the entries identified potential additional references, which we followed up by obtaining copies and reviewing for relevance.  Occasionally the same or related findings appeared in more than one venue.  To minimize redundancy we sought to include only the most complete or widely accessible version.

The entries that appear in the bibliography were drawn from peer reviewed journal articles, conference proceedings, reports, working papers, and miscellaneous other sources.

The literature that is represented in this bibliography encompasses a range of methodological issues relating to the measurement of income in household surveys.  Specific methodological issues include:

  • Question wording
  • Number of questions
  • Question context
  • Item and unit nonresponse
  • Post-survey editing and processing
  • Weighting
  • Imputation

To assist readers in finding references to these and other topics, an index follows the bibliography.

The purpose of the annotations that accompany the citations is to summarize rather than review.  In preparing the annotations we drew from the authors’ abstracts and conclusions as a starting point.  We supplemented these texts in order to clarify key findings or to expand upon results that were especially germane to this project.  The annotations vary in length, which is a function of the relevance of the material that they describe and how easily the main findings could be communicated.

Survey and other acronyms used in the annotations are spelled out the first time that they appear.  A list of acronyms used in more than one entry follows this introduction.

Finally, the bibliography includes a number of papers from the Survey of Income and Program Participation (SIPP) working paper series.  Most of these papers have no dates, but we understand that the numbering of the papers is sequential, so approximate dates can be inferred from the papers in the series that do have dates.

Acronyms

  • AAPOR – American Association for Public Opinion Research
  • ACS – American Community Survey
  • AFDC – Aid to Families with Dependent Children
  • AGI – Adjusted Gross Income
  • AHS – Annual Housing Survey
  • ASEC – Annual Social and Economic Supplement
  • BEA – Bureau of Economic Analysis
  • C2SS – Census 2000 Supplementary Survey
  • CAPI – Computer-assisted personal interviewing
  • CATI – Computer-assisted telephone interviewing
  • CE – Consumer Expenditure Survey
  • CPI – Consumer Price Index
  • CPS – Current Population Survey
  • GAO – Government Accountability Office
  • HRS – Health and Retirement Study
  • IRA – Individual Retirement Account
  • IRS – Internal Revenue Service
  • ISDP – Income Survey Development Program
  • MCBS – Medicare Current Beneficiary Survey
  • MEPS – Medical Expenditure Panel Survey
  • MSA – Metropolitan Statistical Area
  • NHIS – National Health Interview Survey
  • NIPA – National Income and Product Accounts
  • OASDI – Old-Age, Survivors and Disability Insurance
  • PSID – Panel Study of Income Dynamics
  • SIPP – Survey of Income and Program Participation
  • SMI – Supplementary Medical Insurance (Medicare Part B)
  • SSA – Social Security Administration
  • SSI – Supplemental Security Income
  • SSN – Social Security number
  • TANF – Temporary Assistance for Needy Families


Annotated Bibliography

Alternative Measures of Income and Poverty. U.S. Census Bureau, http://www.census.gov/hhes/www/income/incomestate.html#altmeas.

This website contains historical income data tables from the Decennial Census and the March supplement to the Current Population Survey (CPS).  The site includes more detailed tables from the renamed CPS Annual Social and Economic Supplement from 1995 forward as well as two reports on the effect of government taxes and transfers on income and poverty.  Reports on income inequality and The Changing Shape of the Nation’s Income Distribution, 1947–98 by Arthur F. Jones, Jr. and Daniel H. Weinberg are also accessible from the website.

Atrostic, B.K., and Charlene Kalenkoski. “Item Response Rates, One Indicator of How Well We Measure Income.”  Proceedings of the American Statistical Association, Section on Survey Research Methods [CD-ROM]. Alexandria, VA: American Statistical Association, 2002, pp. 94-99.

The authors of this paper develop a process for defining consistent sets of item nonresponse rates that explicitly account for the survey design.  Item response rates are defined in terms of the group eligible for a set of questions and whether group members answered those questions.  The paper illustrates the definition with a few examples.  The authors draw several key points from their calculations using the March 1990 and March 2000 CPS.  First, response rates to income items were falling, and the amount of imputed income was increasing.  Second, wage and salary income was 102 percent of a benchmark based on the National Income and Product Accounts between 1990 and 1996 (1996 benchmark), interest income was 84 percent of the 1996 benchmark, and dividend income was 60 percent of the 1996 benchmark.  Based on their findings, the authors recommended reporting standard income nonresponse rates and continuing research into ways to reduce nonresponse while identifying characteristics of nonrespondents.

Banthin, Jessica S., and Thomas Selden.  “Income Measurement in the Medical Expenditure Panel Survey.”  Agency for Healthcare Research and Quality Working Paper No. 06005, July 2006, http://gold.ahrq.gov.

Using 2002 data, the paper compares the Medical Expenditure Panel Survey (MEPS) and CPS poverty distributions for selected populations of interest.  It shows that MEPS income data align relatively closely to CPS estimates.  It then compares an experimental poverty status measure based on a single question recently added to the MEPS Round 1 and 3 instrument with the standard poverty status measure based on the more detailed MEPS income questions.  The experimental question, added to the MEPS as of 2003, asks respondents to report their total household income within ranges corresponding to five poverty-level status categories.  A majority (63.1 percent) of individuals with responses for the single income question provided information that matched the information collected from the detailed questions.  However, 26.2 percent underreported family income while 10.7 percent overreported it.  The paper also finds that, compared to the detailed questions, the single income question overestimates the percent of people in poverty covered by private insurance, underestimates the percent with public coverage, and overestimates the percent uninsured, although these differences are not statistically significant.

Bates, Nancy, and Robert Pedace.  “Reported Earnings in the Survey of Income and Program Participation: Building an Instrument to Target Those Likely to Misreport.”  Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 2000, pp. 959-964.

The paper analyzes income misreporting propensities by using the 1992 Survey of Income and Program Participation (SIPP) longitudinal file matched to Social Security Summary Earnings Records.  Specifically, it focuses on wage and salary and self-employment earnings.  The findings suggest that the 1992 SIPP accurately estimated the net number of earnings recipients but underestimated amounts received.  The misreporting pattern reveals that respondents at the lowest end of the income distribution tended to overreport earnings while those at the high end were more likely to underreport earnings.  The authors fit multinomial models to predict misreporting based on demographic characteristics.  Those age 50 and over, males, blacks, Asians, Hispanics, craftspersons, and those with low levels of education were more likely to underreport.  Farmers, members of the military, and the self-employed tended to overreport.

Battaglia, Michael P., David C. Hoaglin, David Izrael, Meena Khare, and Ali Mokdad. “Improving Imputation by Using Partial Income Information and Ecological Variables.” Proceedings of the American Statistical Association, Section on Survey Research Methods [CD-ROM]. Alexandria, VA: American Statistical Association, 2002, pp. 152-157.

This research examines alternative ways of using reported ranges or “partial” income information to impute missing family incomes in the National Immunization Survey, a telephone survey that collects data on children aged 19 to 35 months.  Respondents who do not know their total family income for the prior calendar year or refuse to answer the question are asked a cascading sequence of questions designed to assign family income to one of 15 intervals.  In the 2000 survey, 27.8 percent of respondents did not answer the initial income question.  About half of these completed the follow-up sequence.  Of the rest, about 2 in 5 completed part of the sequence, yielding partial information.  That is, their incomes could be placed within a broader interval than one of the 15.  The authors compared two regression approaches to imputing family income for persons with the most limited partial information or no partial information.  The first approach estimated a separate equation for each partial interval, with three additional equations for don’t knows, refusals, and those who broke off the interview before the income question.  Don’t knows and refusals were allotted separate models because refusals reported higher incomes when they responded to the cascading questions.  The second approach estimated a single equation over all of these groups.  The models were estimated on cases with reported family incomes or, for the don’t knows and refusals, cases that completed the cascading questions. Predictor variables included characteristics of the child, mother, family and household as well as ecological characteristics associated with the telephone exchange (such as median education).  In general, the separate regression models provided more accurate imputations than the overall model. 
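The separate-versus-pooled modeling choice the authors compare can be sketched in miniature.  Nothing below comes from the paper itself: the one-predictor least-squares fit and the group labels are illustrative stand-ins for the authors' multi-predictor regression models.

```python
import statistics

def fit_line(xs, ys):
    """Ordinary least squares with a single predictor; returns (intercept, slope).
    A stand-in for the paper's multi-predictor regression equations."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return my - slope * mx, slope

def impute(x, group, models):
    """Separate-models approach: predict a missing income from the equation
    estimated for the case's own partial-information group (group labels
    here are illustrative)."""
    intercept, slope = models[group]
    return intercept + slope * x

# One equation per nonresponse group (the "separate" approach), estimated on
# cases with reported incomes; refusals get a distinct model because they
# reported higher incomes when they did answer the cascading questions.
models = {
    "dont_know": fit_line([12.0, 14.0, 16.0], [30.0, 40.0, 50.0]),
    "refusal":   fit_line([12.0, 14.0, 16.0], [45.0, 55.0, 65.0]),
}
```

The pooled alternative would simply fit one `fit_line` call over all groups combined; the paper's finding was that the separate equations generally imputed more accurately.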

Bavier, Richard.  “Reconciliation of Income and Consumption Data in Poverty Measurement.”  Journal of Policy Analysis and Management, vol. 27, no. 1, Winter 2008, pp. 40-62.

Researchers are interested in whether consumption data are superior to income data for poverty measurement.  Although the Census Bureau has provided researchers with an experimental series of variables in the CPS that can produce a comprehensive income measure, previous analyses have not fully exploited these variables.  The author examines data from the CPS, the Consumer Expenditure Survey (CE), and the SIPP and shows that federal surveys exhibit no “huge discrepancy,” as some have suggested, between income and expenditures near the bottom of the distribution.  When poverty is measured with a comprehensive income measure that includes the income value of noncash benefits, capital gains and losses, the earned income tax credit, and returns on home equity and subtracts the value of direct taxes, income poverty rates and trends are similar to those of consumption poverty.  Arguments that income is measured with more error than consumption at the bottom of the income distribution are shown to derive from inferior income data.

Beebout, Harold.  Reporting of Transfer Income on the Survey of Income and Education: Initial Corrections of the Microdata for Underreporting.  Mathematica Policy Research, October 14, 1977.

The study attempts to remedy the underreporting of transfer income in the Survey of Income and Education by adjusting for two types of error: (1) fewer individuals reported receipt of an income type than were indicated to have received the income in administrative data; and (2) the number of recipients was acceptable, but they reported too few dollars.  In the first case, the study imputed additional recipients by using either a hot deck or simulation technique.  In the second case, the study made an upward, proportional adjustment to the class of recipients’ benefits to conform to the administrative data.  The author attempted to change as little of the original survey data as possible, to edit the data so that the aggregate amount of each income type was approximately equal to the adjusted administrative data, and to preserve major covariances.  The procedures were intended to provide a better basis for policy analysis than the unadjusted data but were not intended to satisfy any formal statistical criteria.  The results for the total of all work-related transfers indicate that the corrected file has nearly 100 percent of the estimated control income.  The total amount of means-tested transfers on the adjusted file, including food stamps, is 99.5 percent of the estimated control total.
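The second type of correction, a proportional scaling of reported amounts so that the weighted aggregate matches an administrative control total, can be sketched as follows.  The function and data are illustrative, not the study's actual procedure.

```python
def adjust_to_control(amounts, weights, control_total):
    """Proportionally scale reported benefit amounts so the weighted
    aggregate matches an administrative control total (a sketch of the
    second correction described above; names and data are illustrative)."""
    reported_total = sum(a * w for a, w in zip(amounts, weights))
    if reported_total == 0:
        raise ValueError("no reported income to adjust")
    factor = control_total / reported_total
    return [a * factor for a in amounts]

# Example: the survey captures $80 of a $100 administrative control total,
# so every recipient's amount is scaled up by a factor of 1.25.
amounts = [10.0, 30.0, 40.0]   # reported benefits per recipient
weights = [1.0, 1.0, 1.0]      # survey weights
adjusted = adjust_to_control(amounts, weights, control_total=100.0)
```

Because every amount in the class is scaled by the same factor, relative differences among recipients (and hence major covariances) are preserved, which is consistent with the study's stated goal of changing as little of the original survey data as possible.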

Bishaw, Alemayehu, and Sharon Stern.  “Evaluation of Poverty Estimates: A Comparison of the American Community Survey and the Current Population Survey.”  U.S. Census Bureau, June 15, 2006.

At the national level, the CPS Annual Social and Economic Supplement (ASEC) and the American Community Survey (ACS) are relatively consistent in their estimates of poverty.  Differences in counting unrelated persons within a household suggest that estimates of poverty may differ, but the data do not show systematic differences between the surveys.  For selected characteristics, however, the national estimates of poverty rates differed between the two surveys.  The 2003 estimates differed for individuals age 18 to 64 and married-couple families, and the 2002 estimates differed for children under age 18, individuals 65 and older, women, married-couple families, and female-headed households with no husband present.  Statistically, the state poverty rates were the same in the ACS and the CPS ASEC for 36 states.  The ACS estimates were higher than the CPS ASEC in 12 states and lower in 2 states and the District of Columbia.

Bound, John, and Alan B. Krueger.  “The Extent of Measurement Error in Longitudinal Earnings Data:  Do Two Wrongs Make a Right?”  Journal of Labor Economics, vol. 9, no. 1, January 1991, pp. 1-24.

This article reports findings from a study using Social Security earnings data matched to CPS sample records from 1977 and 1978.  The analysis is restricted to heads of households who remained at the same address for two years, were successfully matched to their Social Security earnings records, and received earnings from covered employment in both years. The results suggest that the combination of mean reversion and correlated error in reports of wages in consecutive years has a beneficial impact on estimated change in earnings; fully 75 percent of the variation in the change in CPS earnings represents true earnings variation.  However, the findings also suggest that the simple models that have been used to characterize measurement error in past studies are not appropriate.  The standard assumptions about measurement error as white noise are contradicted by evidence that measurement error is positively autocorrelated and negatively correlated with true earnings.

Bound, John, Charles Brown, Greg J. Duncan, and Willard L. Rodgers.  “Evidence on the Validity of Cross-sectional and Longitudinal Labor Market Data.”  Journal of Labor Economics, vol. 12, no. 3, July 1994, pp. 345-368.

This article reports findings from a Panel Study of Income Dynamics (PSID) validation study based on sample members who were employed by a single, large firm.  Survey reports from two successive waves of the panel were compared to payroll records.  Respondents’ reports of annual earnings were fairly accurate, with a very small mean error in the log of earnings but a substantial standard deviation.  In addition, errors were negatively correlated with true earnings.  This reduces bias when earnings are used as an explanatory variable but adds negative bias when earnings are the dependent variable.  Biases were marginally larger for reported changes in earnings.  Bias in calculated earnings per hour (annual earnings divided by annual hours worked) was more severe.  This was due in part to the error in reported annual hours worked, despite a detailed sequence of questions used to arrive at these hours.  However, correlated error was a bigger factor in the magnitude of the bias in hourly earnings.

Bruun, Maria, and Jeffrey Moore.  “SIPP 2004 Wave 1 Asset Income Item Nonresponse Results and Nonresponse Follow-up Outcomes.”  Statistical Research Division, U.S. Census Bureau, October 3, 2005.

The 2004 Wave 1 SIPP questionnaire asked new and expanded follow-up questions in order to combat nonresponse.  The questions asked respondents to report their income within a multiple-choice range rather than as a dollar amount.  Overall, asset income amount questions had a 40 percent nonresponse rate; miscellaneous asset income had the lowest rate (24 percent), and stocks and mutual funds had the highest (56 percent).  The predominant form of nonresponse to the follow-up question mirrored the form of initial nonresponse.  Overall, the nonresponse follow-up questions reduced nonresponse by about half or more.  Their effectiveness was even greater for those answering “don’t know”: 75 percent of these respondents gave a response to the follow-up.  The paper finds that the follow-up questions should greatly improve the quality of the data, suggesting that their benefits far outweighed the extra burden on respondents.

Canberra Group. Expert Group on Household Income Statistics: Final Report and Recommendations.  Ottawa, 2001, www.lisproject.org.

The report is a guide for data collectors, data analysts, and other users on how to prepare comparable statistics on income distribution.  Within the context of prevailing ideas and best practices, the authors set forth guidelines for understanding the complex nature of income data.  The guidelines reflect how economies are organized and how people conduct their lives.  Where sufficient consensus exists about best practices, the report makes recommendations in the hope that such recommendations will contribute to the availability of more accurate, complete, and internationally comparable income statistics compiled to common standards.  The report includes chapters on the income concept, other conceptual issues, practical considerations, comparing income distributions over time, income dynamics, data presentation, robustness assessment reporting, and issues still to be resolved.

Clark, Sandra Luckett, John Iceland, Thomas Palumbo, Kirby Posey, and Mai Weismantle.  “Comparing Employment, Income, and Poverty: Census 2000 and the Current Population Survey.”  Housing and Household Economic Statistics Division, U.S. Census Bureau, September 2003, www.census.gov/hhes/www/laborfor/final2_b8_nov6.pdf.

The report examines the differences between the 2000 Decennial Census and the CPS with regard to employment, income, and poverty numbers as a consequence of different data collection methods.  Before 1990, unemployment rates were higher in the CPS than in the census.  However, in 2000, unemployment reported in the CPS was 2.1 percentage points lower than the census estimate.  The difference occurred across all demographic variables.  Median family and household income were both $1,000 to $2,000 higher in the census than in the CPS despite the fact that the CPS asked more questions about income from different sources.  The one exception was single male households, for which census income estimates were lower than the CPS estimates.  The poverty rate was moderately higher in the census (12.4 percent) than in the CPS (11.9 percent).  The paper did not find a comprehensive explanation of these income, employment, and poverty differences.

Coder, John.  Comparisons of Alternative Annual Estimates of Wage and Salary Income from SIPP.  Memorandum for Gordon Green, Assistant Division Chief, Population Division, U.S. Census Bureau, March 1988.

The memorandum demonstrates that, with a combination of annual recall reports from the annual round-up module in the SIPP and the annual estimates constructed from subannual amounts, the SIPP wage and salary estimates would exceed the analogous CPS estimates by about 6 percent instead of showing a consistent deficit.

Coder, John, and Lydia Scoon-Rogers.  Evaluating the Quality of Income Data Collected in the Annual Supplement to the March Current Population Survey and the Survey of Income and Program Participation.  Housing and Household Economic Statistics Division, U.S. Census Bureau, July 1996, www.sipp.census.gov/sipp/workpapr/wp215.pdf.

The paper extensively covers differences between income estimates of the 1990 March CPS and the 1990 SIPP.  It also compares the estimates to benchmark estimates.  The paper observes that the SIPP seemed to miss more high-income recipients than did the CPS.  The authors offer alternative explanations for this difference, but there is no hard evidence supporting any particular cause.  The SIPP’s wage and salary estimates are about 5 percent lower than those in the CPS.  One explanation is that the SIPP is more conducive to reporting “take-home” pay than is the CPS.  The SIPP and CPS definitions of self-employment income are markedly different, making comparisons between the two surveys difficult.  The paper also compares the two surveys’ estimates of income from Social Security, railroad retirement, unemployment compensation, workers’ compensation, Supplemental Security Income, public assistance, veterans’ payments, pensions, interest and dividends, rents, royalties, estates and trusts, child support, alimony, and financial assistance as well as “other income.”  Overall, the comparisons show evidence of deterioration in the SIPP estimates between 1984 and 1990: the SIPP maintained an advantage for some sources while falling closer to or below the CPS for others.  Generally, however, the SIPP provides more complete estimates of recipients.

Cohen, S.B., and S.R. Machlin.  “Characteristics of Nonrespondents in the MEPS Household Component.”  Proceedings of the American Statistical Association, Section on Survey Research Methods.  Alexandria, VA: American Statistical Association, 1998, pp. 329-334.

This paper attempts to determine the characteristics of nonrespondents in the MEPS.  Using the National Health Interview Survey (NHIS) as the sample frame, the 1996 MEPS sample consisted of about 9,000 reporting units.  Several groups were more likely to be nonrespondents based on the following factors: telephone availability (no telephone number given on the NHIS), size of dwelling unit (single- or two-person), family income of primary reporting unit (higher income), item nonresponse for employment classification (no response), Metropolitan Statistical Area (MSA) size (large cities), and the dwelling unit–level personal help measure of need (less healthy).  In addition, the race, gender, and experience (less experience) of the interviewers had a significant impact on nonresponse.  The authors conclude that the MEPS data should be weighted to adjust for these differences in nonresponse.

Cohen, S., S. Machlin, and J. Branscome.  “Patterns of Survey Attrition and Reluctant Response in the 1996 MEPS.”  Health Services & Outcomes Research Methodology, vol. 1, no. 2, June 2000, pp. 131-148.

This paper examines MEPS sample members who participated cooperatively in the survey, did not respond, or were reluctant to respond.  The authors find that reluctant responders in the first round of the survey were much more likely to be nonresponders in the second round.  Other characteristics of the round-two nonresponders were membership in a large household, residence in a large metropolitan area, and the presence of elderly members in the household.  In addition, reluctant responders were a distinct group whose members shared similar age, MSA residence, and employment characteristics with those who dropped out of the survey; nonetheless, they shared marital status and reporting unit size characteristics with cooperating respondents.  The authors find that, in the absence of an effort to convert reluctant respondents, the survey’s precision would have dropped, though not substantially (standard errors would have increased by about 6 percent).

Czajka, John L., James Mabli, and Scott Cody.  “Sample Loss and Survey Bias in Estimates of Social Security Beneficiaries:  A Tale of Two Surveys.”  Final Report.  Washington, DC:  Mathematica Policy Research, Inc., February 2008.

This report examines two sources of sample loss that affect the utility of SIPP and CPS data for analysis of Social Security beneficiary populations.  One source is survey nonresponse, which includes both initial nonresponse and attrition.  The other source is the reluctance of respondents to provide their Social Security numbers, which prevents the Census Bureau from matching their survey records to administrative records.  The report documents the growth in sample loss due to nonresponse and nonmatching; provides estimates of match bias and attrition bias; examines discontinuities between consecutive SIPP panels in estimates of beneficiary characteristics as well as poverty rates for the broader population; and examines the comparative strengths of the SIPP and CPS in describing the economic well-being of the population in general and elderly and lower-income persons in particular.  Analysis of SIPP full panel and cross-sectional sample records matched to Internal Revenue Service earnings records and Social Security benefit records provides evidence that the Census Bureau’s full panel weights are highly effective in compensating for bias due to differential attrition.  The authors also found little evidence of match bias in SIPP estimates of a wide range of characteristics when the matched sample was calibrated to the same demographic controls used to weight the SIPP sample.  A more limited evaluation of match bias in the CPS focused on retired workers and obtained results very similar to the SIPP findings.

The authors present evidence that discontinuities in SIPP poverty estimates across panels are due in part to a recent tendency for SIPP panels to obtain high estimates of poverty in the first wave, which then decline sharply in the second wave.  The authors also present evidence that new entrants who are excluded from a panel over time are a distinctive group that is large enough and potentially unique enough to induce marked shifts in poverty when they are represented in full by a new panel.

Across all age groups, but particularly children and the elderly, the SIPP has continued to identify more sources of family income than the CPS.  With respect to income amounts, however, the SIPP has lost ground to the CPS since the initial SIPP panel.  From 1993 on, the most significant losses have occurred in the bottom income quintile, where the SIPP has historically performed best relative to the CPS.  In 1993 the SIPP captured 20 percent more aggregate income from this quintile than did the CPS.  By 2002, however, the SIPP’s advantage had fallen to just 6 percent.  These losses were distributed across most income sources.  Only for SSI, welfare, and pensions did the SIPP maintain or improve its advantage.  A comparison of poverty trends in the two surveys raises a number of concerns about the use of either survey for the measurement of trends in economic well-being.  These concerns are greatest for estimates of poverty among the elderly.

Davern, Michael, Lynn A. Blewett, Boris Bershadsky, and Noreen Arnold. “Missing the Mark?  Examining Imputation Bias in the Current Population Survey’s State Income and Health Insurance Coverage Estimates.”  Journal of Official Statistics, vol. 20, no. 3, 2004, pp. 519-549.

This article examines earned income in the 1990 Decennial Census, the Census 2000 Supplementary Survey (C2SS), and the 1998–2000 CPS data to determine the bias at the state level created by the hot deck imputations used in the surveys.  It also examines CPS state health insurance coverage rates.  For income data, the census imputes income if any of the income-related questions are missing, whereas the CPS imputes only for the missing question.  Through the fitting of a bias model, the results showed little bias at the state level in estimates of income for the 1990 Decennial Census or the C2SS.  The CPS income data, however, showed a bias.  The CPS health insurance coverage estimates were even more biased because the hot deck procedure did not use geographic region.  To correct for this significant bias, the article considers several possible approaches: modeling the bias, changing the hot deck procedure to capture more between-state variation by adding a geographic proximity preference, or using a multiple imputation procedure.

Davern, Michael, Holly Rodin, Timothy J. Beebe, and Kathleen Thiede Call.  “The Effect of Income Question Design in Health Surveys on Family Income, Poverty and Eligibility Estimates.”  Health Services Research, vol. 40, no. 5, October 2005, part I, pp. 1534-1552.

The article uses March CPS supplement data and compares omnibus family income estimates (obtained by one overarching income question) to aggregate family income estimates (obtained by asking several income questions about various sources of income).  The authors find substantially different income estimates depending on the method used.  Only 31 percent of people remained in the same income bracket under both methods.  Underreporting was associated with households of three or more family members and with households that had other sources of income or assistance.  One table in the article shows that the omnibus question inflates the estimated poverty rate by an average of about 1 percentage point.  The article concludes that the omnibus household income question is likely biased and that this bias should be recognized when using such a question for analysis.

Denmead, Gabrielle, and Joan Turek.  “Comparisons of Health Indicators by Income in Three Major Surveys.” Proceedings of the Annual Meeting of the American Statistical Association [CD-ROM]. Alexandria, VA: American Statistical Association, 2005, pp. 1532-1538.

The authors compare relationships between income and comparable measures of health status, insurance coverage, and utilization in three surveys: the NHIS, CPS, and SIPP.  The comparisons use identically defined family income for calendar year 2001.  Study findings include differences among the surveys in the counts and composition of the low-income population and in estimates of health status, the uninsured, uninsured children, Medicaid coverage, and utilization of inpatient and ambulatory care.  The surveys provide different pictures of the needs and target groups for public programs.  The NHIS finds more poor and low-income persons than the CPS and SIPP despite its broader family definition.  The NHIS finds more insurance coverage but less Medicaid coverage on a monthly basis, in total and for children, than does the SIPP.  The CPS finds both less insurance coverage and less Medicaid coverage on an annual basis, in total and for children, than does the SIPP.

Denmead, Gabrielle, Joan Turek, and Michele Adler. “Annual Income and Working-Age Disability: Estimates from the NHIS and CPS.”  Proceedings of the American Statistical Association, Section on Health Policy Statistics [CD-ROM]. Alexandria, VA: American Statistical Association, 2003, pp. 1203-1208.

This paper develops annual income measures that can be used in conjunction with disability data and program participation to address health and disability policy issues.  The analysis uses data from the NHIS and the CPS from the mid-1990s, when the NHIS collected person-level information on monthly income by source.  The authors annualized income reported in the NHIS and conducted validity tests of alternate income measures within a single database, the SIPP.  They found that annualized monthly income came closest to actual annual income in estimating total income and poverty rates but also had the highest rate of false negatives in determining poverty status.  Another difference between the NHIS and the CPS is that the NHIS treats unmarried partners as married, which affects the poverty rate.  Overall, the data from the NHIS matched the CPS fairly well.  The article goes on to analyze the information on income, disability, and participation.

Doyle, Pat.  “The Survey of Income and Program Participation: AAPOR Roundtable: Improving Income Measurement.”  SIPP Working Paper 241, U.S. Census Bureau, no date.

This summary describes an American Association for Public Opinion Research (AAPOR) Annual Conference roundtable discussion of the findings from the first two field experiments of the SIPP Methods Panel project.  Each experiment included a treatment group, which received the experimental instrument, and a control group, which received the SIPP Wave 1 instrument for the panel in the field at the time.  In addition to other changes, the second experiment introduced a different approach to collecting earnings.  Respondents were allowed more flexibility in choosing the best time period for reporting amounts received (that is, hourly, weekly, biweekly, monthly, quarterly, or annually).  For unearned income, the experiment introduced screening procedures for effectively targeting need-tested program questions to households potentially eligible for such programs.  With regard to assets, the experiment took a three-part approach:  (1) determining ownership of Individual Retirement Accounts (IRAs), (2) determining ownership of a set of commonly held asset types, and (3) capturing ownership of the remaining asset types.  Overall, the treatment group experienced significantly lower item nonresponse on income amounts than did the control group, especially for asset amounts.  For earnings, the treatment group achieved a reduction in item nonresponse of over 40 percent.  A comparison of mean income amounts and the proportion of the population with income in the treatment and control groups showed no significant differences.

Doyle, Pat, Betsy Martin, and Jeff Moore.  “The Survey of Income and Program Participation (SIPP) Methods Panel: Improving Income Measurement.”  SIPP Working Paper 234, U.S. Census Bureau, November 13, 2000, http://www.sipp.census.gov/sipp/workpapr/wp234.pdf (an abbreviated version appears in the Proceedings of the American Statistical Association, 2000).

This paper describes experimental research aimed at increasing response and accuracy in the 2000 SIPP.  To test different question designs, the authors randomly assigned 1,000 people to the standard SIPP instrument and 1,000 people to the modified instrument.  The authors reach several conclusions.  Use of nonresponse follow-up improves reporting of income amounts.  The high nonresponse to asset income questions is primarily a function of lack of knowledge, suggesting that follow-up questions requesting more limited information (such as bracketed values) can improve response rates.  For some respondents, a common set of asset types can be used instead of asking about each asset type individually.  An income screener can reduce the number of respondents asked about needs-based programs.  The seam bias problem remains unresolved, however.

Duncan, Greg J. and Daniel H. Hill.  “Assessing the Quality of Household Panel Data:  The Case of the Panel Study of Income Dynamics.”  Journal of Business and Economic Statistics, vol. 7, no. 4, October 1989, pp. 441-451.

Evidence from a number of methodological studies is used to assess the overall quality of data from the PSID.  Despite substantial cumulative attrition, comparisons with the CPS indicate that after 12 years the PSID sample continued to provide good representation of the nonimmigrant population.  The PSID had proportionately fewer low-income families in both 1968 and 1980, but this may reflect the PSID’s more complete capture of income.  In addition, PSID reports of transfer income appear to compare more favorably with program aggregates than reports from the CPS.  The results of a validation study conducted with a subsample of respondents indicate that reports of wages and employment are generally unbiased but contain measurement error that varies from trivial to very large.

Fisher, Patricia J.  “Assessing the Effect of Allocated Data on the Estimated Value of Total Household Income in the Survey of Income and Program Participation (SIPP).”  SIPP Working Paper 244, U.S. Census Bureau, no date.

This paper examines the individual components of total household income as collected in the SIPP and evaluates the proportion imputed (or allocated) for each component.  The author concludes that 28.8 percent of total household monthly income is allocated.  Much of the allocation is carried over from previous waves of data collection rather than allocated with hot deck or cold deck imputation or logical imputation.

Fitzgerald, John, Peter Gottschalk, and Robert Moffitt.  “An Analysis of the Impact of Sample Attrition on the Second Generation of Respondents in the Michigan Panel Study of Income Dynamics.”  The Journal of Human Resources, vol. 33, no. 2, Spring 1998a, pp. 300-344.

The authors study the impact of sample attrition on the second generation of respondents in the PSID.  They conclude that the intergenerational relationship among earnings, education, and welfare participation of parents and their adult children is stronger for the subsample of children who do not attrite by the end of the panel than for the full sample that includes all children who did not attrite before their mid-20s (but may have attrited afterwards).  The differences in intergenerational coefficients are small and seldom statistically different from zero for welfare and earnings.  However, the authors do find evidence of attrition bias in estimates for education.  They assert that attrition may be random with respect to some outcomes but not others.

Fitzgerald, John, Peter Gottschalk, and Robert Moffitt.  “An Analysis of Sample Attrition in Panel Data: The Michigan Panel Study of Income Dynamics.”  The Journal of Human Resources, vol. 33, no. 2, Spring 1998b, pp. 251-299.

The authors study the effect of approximately 50 percent sample loss from cumulative attrition on the unconditional distributions of several socioeconomic variables and on the estimates of several sets of regression coefficients.  Their empirical analysis shows that attrition is highly selective and concentrated among individuals with lower socioeconomic status.  They also show that attrition is concentrated among those with more unstable and lower earnings.  However, cross-sectional comparisons of unconditional moments between the PSID and the CPS show a close correspondence all the way through 1989.  The authors conclude that the selection that occurs is moderated by regression-to-the-mean effects from transitory components that fade over time.  Therefore, despite a high level of attrition, they find no strong evidence of loss of representativeness.

Garner, T., and L. Blanciforti. “Household Income Reporting: An Analysis of U.S. Consumer Expenditure Survey Data.”  Journal of Official Statistics, vol.10, no. 1, 1994, pp. 69-91.

This paper uses data from the 1987 CE to model income response with socioeconomic factors.  The binomial logit model showed significant increases in response associated with age (very young or very old), race (non-black), education (non–college graduate), employment (not self-employed), consumer unit composition (single), and region (West or South).  The expenditure variable was particularly interesting and showed that those reporting higher expenditures were significantly more likely to give complete income information.

Gouskova, Elena and Robert F. Schoeni. “Comparing Estimates of Family Income in the Panel Study of Income Dynamics and the March Current Population Survey, 1968–2005.” http://psidonline.isr.umich.edu/Publications/Papers/Report_on_income_quality_v3.pdf, July 2007.

The PSID has experienced substantial cumulative nonresponse over its 39-year history.  Moreover, the PSID has undergone several methodological changes: (1) conversion from paper-and-pencil telephone interviewing to computer-assisted telephone interviewing (CATI) in 1993, (2) suspension of roughly one-half of the low-income sample in 1997, (3) addition in 1997 of a sample of families who immigrated to the U.S. since 1968, (4) a switch to biennial interviewing in 1999, and (5) a doubling of the length of the interview between 1995 and 1999.  The objective of this study is to reassess the quality of the PSID family income data by comparing estimates of family income between the PSID and the CPS for the survey years 1968 through 2005.  Over this period the family income distributions from the two surveys match fairly closely between the 5th and 95th percentiles.  Overall, the PSID estimates have been somewhat higher than the CPS estimates, but the trends are quite similar.  The two data sets show less agreement in the upper and lower five percentiles of the distribution.

Government Accountability Office. American Community Survey: Key Unresolved Issues.  October 2004, GAO-05-82.

In this report, the Government Accountability Office (GAO) considers whether the ACS can provide an adequate replacement for the census long form as the major source of data for small geographic areas.  GAO reviews both operational and programmatic aspects of the ACS and identifies a number of issues that the Census Bureau will have to address.  One outstanding issue relates to the measurement of income.  GAO reports that when the Census Bureau releases ACS data for each new year, it will present only annual estimates adjusted for inflation and will revise all dollar-denominated data for earlier years.  Dollar-denominated items include income, housing value, rent, and housing-related expenditures.  The Census Bureau also has decided to continue to adjust data collected each month in the ACS to a calendar year basis.  It will use the Consumer Price Index (CPI), a national measure of inflation, for the annual and monthly adjustments for all geographic areas.  GAO raises serious questions about inflation adjustments.  Moreover, GAO finds that the use of a national cost-of-living adjustment does not reflect variations in geographic areas and may not be appropriate when allocating federal funds to states.

Grieger, Lloyd D., Sheldon Danziger, and Robert F. Schoeni. “Estimating and Benchmarking the Trend in Poverty from the Panel Study of Income Dynamics.”  http://psidonline.isr.umich.edu/Publications/Papers/grieger-danz-schoeni.pdf, November 2007.

This paper guides researchers through the process of calculating the poverty rate from the PSID for each year from 1968 to the present and compares the level and trend in PSID poverty rates to those of the March CPS.  The authors explain how to calculate four alternative PSID poverty series, which differ with respect to their income thresholds.  Prior to 1973, the trends in the first two PSID poverty rates differ significantly from the CPS series, with the PSID showing greater declines in poverty.  The third series, available from 1990 forward, is highly correlated with the CPS series from 1989 through 2002, and the fourth series is highly correlated with the CPS series over the entire period, 1967 to 2002.

Heeringa, S.G., D.H. Hill, and D.A. Howell. “Unfolding Brackets for Reducing Item Non-Response in Economic Surveys.”  HRS Working Paper 94-029, Institute for Social Research, University of Michigan, June 1995.

This paper describes and analyzes a new survey methodology for reducing item nonresponse on financial measures.  A respondent who is unable to provide an exact dollar amount may be able to provide a range, but respondents vary in how precisely they can bound the true value.  Giving a respondent a set of fixed brackets is not the most effective way to determine how much the respondent knows.  Systematic “unfolding brackets” provide an alternative approach, whereby the respondent is given a series of choices (for example, “Is it more/less than X dollars?”) to determine the lower and upper bounds that the respondent is able to provide. Unfolding brackets are applicable in both face-to-face and telephone surveys.

The proportion of missing observations for financial variables in national surveys is often in the range of 20 to 25 percent and, in some cases, as high as one-third.  With the unfolding bracket method, the proportion of completely missing data can be cut by two-thirds.  Furthermore, with appropriately chosen bracket breakpoints, it is possible to recover a high proportion of the variance of the underlying measure.  The authors investigate the effects of bracketing on the empirical validity of survey data.  While they find lower empirical validity for data from individuals exposed to brackets early in the survey instrument, this finding appears to result from self-selection rather than from a direct effect of exposure to the methodology.
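The question sequence described above can be sketched as a simple bounding loop.  This is a minimal illustration, not the authors' implementation; the breakpoints, the `ask` callback, and the example amount are hypothetical.

```python
def unfold_brackets(ask, breakpoints):
    """Narrow a dollar amount down to a (lower, upper) range.

    ask(threshold) returns True if the respondent says the amount is
    more than `threshold`, False if it is less, or None if the
    respondent cannot say.
    """
    lower, upper = 0, float("inf")
    for threshold in breakpoints:       # breakpoints in ascending order
        answer = ask(threshold)
        if answer is None:              # respondent cannot bound it further
            break
        if answer:                      # "more than threshold"
            lower = threshold
        else:                           # "less than threshold"
            upper = threshold
            break
    return lower, upper

# Hypothetical respondent who will not report an exact savings balance
# but can answer "more or less than X?"; the true amount is 7,500.
bounds = unfold_brackets(lambda t: 7_500 > t, [1_000, 5_000, 10_000, 50_000])
print(bounds)  # (5000, 10000)
```

Even when the exact amount is never obtained, the resulting bracket supports imputation, which is how partial responses of this kind recover variance in the underlying measure.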

Hendrick, Mark R., Karen E. King, and Julia L. Bienias.  “Research on Characteristics of Survey of Income and Program Participation Nonrespondents Using IRS Data.”  SIPP Working Paper 211, U.S. Census Bureau, no date.

The paper matches individual 1990 Internal Revenue Service (IRS) data to SIPP data to assess the accuracy of SIPP earnings estimates.  Differences between the IRS and SIPP definitions of total income necessitated adjustments, and cases with IRS income of zero were discarded.  The authors use regression models to fit the IRS income and to determine whether the relationship between SIPP and IRS earnings differs for respondents and nonrespondents.  Married respondents had higher earnings than married nonrespondents, while single respondents had lower earnings than single nonrespondents.  The authors also find that the relationship between IRS and SIPP earnings data varies by race.  Overall, the analysis shows that the SIPP overestimates earnings at low earnings levels and underestimates earnings at high earnings levels.  The research also appears to confirm a general underreporting of earnings in the SIPP.

Henry, Eric, and Charles Day.  “A Comparison of Income Concepts: IRS Statistics of Income, Census Current Population Survey, and BLS Consumer Expenditure Survey.”  Proceedings of the Annual Meeting of the American Statistical Association [CD-ROM].  Alexandria, VA:  American Statistical Association, 2005, pp. 1155-1162.

This paper describes the Adjusted Gross Income (AGI) concept used by the IRS and then explains the most important differences between AGI and the definitions used in the CE and CPS.  AGI excludes nontaxable income, which leaves out some sources entirely while discounting other sources.  Differences occur in wages and salaries, self-employment income, Social Security, private and government retirement income, interest, dividends, rental and other property income, unemployment and workers’ compensation, veterans’ benefits, public assistance, Supplemental Security Income, food stamps, regular contributions for support, and other income.

Hess, Jennifer, Jeffrey Moore, Joanne Pascale, Jennifer Rothgeb, and Catherine Keeley.  “The Effects of Person-level versus Household-level Questionnaire Design on Survey Estimates and Data Quality.”  Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 2000, pp. 157-162.

This study attempts to identify the best survey method for gaining information about people in a household.  The traditional method is a person-level approach whereby the interviewer asks the same questions about every person in the household.  A different technique is the household-level approach whereby the interviewer asks questions such as “Does anyone in the household have trouble seeing?”  The study was based on two surveys of 908 households.  The authors found some limited evidence that the household-level approach increases the risk of underreporting for some summary measures such as asset ownership.  However, the reduced risk of underreporting in the person-level survey suggests that the improvement may come at the expense of response reliability.  Item nonresponse and behavior coding results did not suggest that either the household- or person-level version was superior.  Survey interviewers greatly preferred the household-level survey and thought that it was less burdensome than the traditional person-level survey.  The authors suggest that validation data could help determine which survey type is superior.  They also suggest that each approach may be better suited to certain types of information.

Hurd, Michael D.  “Anchoring and Acquiescence Bias in Measuring Assets in Household Surveys.”  Journal of Risk and Uncertainty, vol. 19, 1999, pp. 111-136.

Cognitive psychology has identified and extensively studied several cognitive anomalies that may be important for assessing the economic status of individuals and households.  In particular, the use of unfolding brackets to elicit information about income and assets in household surveys can interact with such cognitive anomalies—acquiescence bias and anchoring—to cause bias in the estimates of the distribution of income and assets.  This paper uses data from the Health and Retirement Study (HRS) and the Asset and Health Dynamics Study to examine the use of brackets to elicit information about income and assets.  The author finds that bracketing can produce bias in population estimates of assets.

Hurd, Michael, F. Thomas Juster, and James P. Smith.  “Enhancing the Quality of Data on Income: Recent Innovations from the HRS.”  The Journal of Human Resources, vol. 38, no. 3, Summer 2003, pp. 758-772.

The authors evaluated two survey innovations introduced in the HRS that aimed to improve income measurement: (1) the integration of questions on income and wealth and (2) matching the periodicity over which income questions are asked to the typical way such income is received.  Both innovations significantly improved the quality of income reports.  For example, the integration of income questions into the asset module produced a 63 percent across-wave increase in the amount of HRS income derived from financial assets, real estate investments, and farm and business equity.  Similarly, asking respondents to answer in terms of a time interval consistent with how they receive income substantially improved the quality of reports on Social Security income, based on matching respondents across two successive CPS March panels for 1992-93 and 1996-97.

Huynh, Minh, Kalman Rupp, and James Sears, Office of Research, Evaluation and Statistics, Social Security Administration.  “The Assessment of the Survey of Income and Program Participation (SIPP) Benefit Data Using Longitudinal Administrative Records.”  SIPP Working Paper 238, U.S. Census Bureau, no date. http://www.sipp.census.gov/sipp/workpapr/wp238.pdf

This paper uses administrative records data from the Social Security Administration (SSA) to assess the accuracy of SIPP data concerning Old-Age, Survivors and Disability Insurance (OASDI) and Supplemental Security Income (SSI) benefits.  OASDI estimates from the SIPP are consistently and substantially lower than the Monthly Benefit Credited estimates of gross OASDI benefits (6 to 8 percent difference). 

Using aggregate SSA-SIPP comparisons, both the March 1996 and October 1998 SIPP underestimate aggregate SSI receipt (by 4.5 and 1.8 percent, respectively). The authors also look at the individual-level variation beyond these overall measures of SIPP receipt error.  Overall, the accuracy of reporting receipt of “OASDI only” or “neither” is very high.  The percent misreporting in the two categories involving SSI receipt is much higher. The SIPP misclassifies a nontrivial fraction of those receiving SSI (“SSI only” and “concurrent” SSI and OASDI) according to SSA records as receiving “OASDI only.”  Finally, a substantial portion of “SSI only” recipients reports no benefit at all.  The authors also examine benefit amounts conditional on receipt.  In January 1993, a large plurality (42.5 percent of observations) had OASDI benefit amounts that exceeded the Monthly Benefits Paid by $31 to $40.  For each of the other three time points (August 1995, March 1996, and October 1998), less than 2 percent of individuals fell into this category.  The difference is likely attributable to a questionnaire change asking respondents to report the total amount each month after any deductions.  The authors also find that reporting errors for both SSI and OASDI differ dramatically by imputation status, and they provide evidence that errors may be systematically related to sample attrition and interview status (self, proxy, and refusal). They also provide a brief assessment of the effect of the lack of Social Security numbers in a nontrivial fraction of cases and find clear evidence of selectivity.

Juster, F. Thomas, and James P. Smith.  “Improving the Quality of Economic Data: Lessons from the HRS and AHEAD.”  Journal of the American Statistical Association, vol. 92, no. 440, 1997, pp. 1268-1278.

Juster and Smith provide an overview of “follow-up brackets” as applied to collecting respondent-reported data on assets for (1) the HRS of people age 51 to 61 in order to measure economic transitions in health, work, income, and wealth and (2) the Asset and Health Dynamics Among the Oldest Old Survey of people over age 70 in order to study the relationship between physical and cognitive health in old age, living arrangements, and “asset decumulation.”  The authors find that when bracketed amounts are given as follow-up to responses of “don’t know” or “refuse,” the bracketed data are useful for later imputation of the actual amount requested.  They also find that respondents who used the bracket amount path early in the survey were more likely to provide estimated dollar amounts (non-bracket) later in the survey.  Use of follow-up brackets reduces item nonresponse and provides for more appropriate imputation estimates.

Kalton, Graham, and Michael E. Miller.  “The Seam Effect with Social Security Income in the Survey of Income and Program Participation.”  Journal of Official Statistics, vol. 7, 1991, pp. 235–245.

A common finding in SIPP data is that more month-to-month changes in recipiency are reported between data collected in different waves than between data collected within the same wave.  This phenomenon is called the seam effect.  To examine the seam effect further, this paper looks at the 3.5 percent increase in Social Security payments in January 1984.  One-third of the Social Security recipients in the SIPP did not report an increase in Social Security payments for the period.  Using a logistic regression, the authors compare the characteristics of those reporting an increase and those failing to do so.  Those most likely to report the change were in rotation group 1, white, self-reporting, and receiving a January Social Security payment over $413; they had a predicted reporting rate of 75 percent, while those with the opposite characteristics had a predicted reporting rate of 26 percent.  One explanation for the seam effect is that it is a manifestation of the general problem of measuring gross changes in panel surveys.  Another explanation is false consistency; that is, people forget that a change has occurred and repeat the same answer as in the past.

Kapteyn A., P. Michaud, J.P. Smith, and A. Van Soest.  “Effects of Attrition and Non-Response in the Health and Retirement Study.”  RAND Working Paper.  May 1, 2006.

This study attempts to determine how nonresponse and attrition affect the representativeness over time of the HRS sample members born between 1931 and 1941.  The authors find that most baseline characteristics are not correlated with nonresponse except for race, ethnicity, gender, and age, factors for which the HRS weights already adjust.  The authors advise against using complicated weighting schemes other than the HRS-provided weights.  The paper also finds that those who leave the survey but return later differ significantly from those who leave permanently and from those who always complete the survey.  Thus, the authors recommend use of the unbalanced sample (which includes those who dropped out and then returned), because omitting returning respondents could compromise representativeness.  The paper also studies whether respondents who did not provide their pension summary plan description (SPD) or SSA records differ from those who did.  The authors find that many respondent characteristics are associated with both an SSA and an SPD match and that the sample of those providing SSA or SPD information is nonrandom.  Use of the weights helps account for nonresponse and attrition, but some differences remain to be addressed.

Kashihara, D., and T. Ezzati-Rice.  “Characteristics of Survey Attrition in the Household Component of the Medical Expenditure Panel Survey (MEPS).” Proceedings of the American Statistical Association, Section on Survey Research Methods [CD-ROM]. Alexandria, VA: American Statistical Association, 2004, pp. 3758-3765.

This study attempts to determine the factors that make a person likely to drop out of the MEPS panel.  The first analysis looked at Year 1 and those who completed round 1 but then dropped out.  The total attrition rate in this case was about 10 percent.  The significant variables (at the 5 percent significance level) were age, race, education, employment status, region, MSA, health insurance status, number of people in the reporting unit, and whether participants were reluctant respondents.  The second analysis looked at Year 2 and those who completed rounds 2 and 3 but then dropped out.  The significant variables were age, marital status, education, region, self-perceived health status, health care expenditures, office-based doctor visits, first respondent, proxy respondent, number of people in the reporting unit, and whether participants were reluctant respondents.  Health care expenditures and doctor visits were new variables in the Year 2 analysis.

Kim, Yong-Seong and Frank P. Stafford. “The Quality of PSID Income Data in the 1990’s and Beyond.”  http://psidonline.isr.umich.edu/Guide/Quality/q_inc_data.html, December 2000.

This paper reviews changes to the PSID implemented in the 1990s along with prospective future changes and assesses their actual and potential impact on the quality of PSID data.  Operational changes included conversion to computer-assisted telephone interviewing and the introduction of new processing and editing systems.  Sample changes included suspension of more than half of the original low-income sample and the introduction of a new sample of immigrants.  Based on comparisons between the PSID and the CPS, the authors conclude that, despite these changes, a number of potential data seams were avoided and the basic continuity of the income data series has been preserved.

Koenig, Melissa L.  “An Assessment of the Current Population Survey and the Survey of Income and Program Participation Data Using Social Security Administrative Data.”  Federal Committee on Statistical Methodology, 2003 Research Conference papers, pp. 129-137.

This analysis compares survey-reported Social Security and SSI beneficiary information from the CPS and SIPP to the information contained in program administrative records for persons age 65 or older with a Social Security number (SSN) match.  Both surveys estimate aggregate Social Security benefits very well for the matched samples.  (CPS reported benefits are compared to the gross Social Security benefit while SIPP reported benefits are compared to the net Social Security benefit—that is, excluding the Medicare Part B premium.)  The CPS underestimates SSI benefits by 21 percent compared to 8 percent for the SIPP.  The SIPP correctly identifies 99 percent of Social Security beneficiaries and 93 percent of SSI beneficiaries.  The CPS correctly identifies 95 percent of Social Security beneficiaries but only 69 percent of SSI beneficiaries.  However, both surveys incorrectly identify about 40 percent of elderly nonbeneficiaries as Social Security beneficiaries whereas they misclassify less than one percent of SSI nonbeneficiaries.  Imputation affects the level of correspondence between the survey and administrative data.  For respondents with no Social Security or SSI imputations, substituting the actual benefit amounts for the reported amounts changes poverty status for only 4 percent of persons in the CPS and 2 percent in the SIPP.  For those with imputations, poverty status is changed for 10 percent of persons in the CPS and 4 percent in the SIPP.

Kominski, Robert.  “Record Use by Respondents.”  SIPP Working Paper 152, U.S. Census Bureau, 1991. http://www.sipp.census.gov/sipp/workpapr/wp152.pdf

The study seeks to ascertain the basic level of record use by respondents when reporting income.  It relies on Senior Field Representatives (SFRs), who performed routine observations of Wave 1 interviews in the 1990 SIPP panel.  The SFRs used an observation form and noted whether respondents used records in reporting certain income sources: wages and salary, assets, and certain public programs.  Of persons reporting a wage or salary, 31 percent used some type of record.  A similar level of use—28 percent—was reported for assets.  About a third of the sample reported receipt of Social Security, but 43 percent of these respondents did not in any way verify such receipt.  Of those providing verification, one in three verified the source but not the amount.  Only about a third of all Social Security recipients (35 percent) verified both the source and the amount with some type of record.  Of respondents reporting Medicare, 78 percent were able to verify enrollment with a record; two-thirds of those verifying Medicare did so for the source only.  Because the remaining programs were infrequently reported, the authors combined them into one measure.  Of persons reporting participation in one of these programs, 21 percent verified participation.  The analyses also show that the lack of record use is attributable to both interviewer and respondent characteristics.  The fundamental finding is that record use is noticeably low across all elements.

Kominski, Robert.  “The SIPP Event History Calendar:  Aiding Respondents in the Dating of Longitudinal Events.”  Proceedings of the American Statistical Association, Section on Survey Research Methods.  Alexandria, VA:  American Statistical Association, 1990, pp. 553-558.

This paper presents the results of a test of an event history calendar in the SIPP.  Designed to reduce seam bias, the calendar was used to collect selected data on employment, health insurance coverage, program participation, and pension receipt.  The calendar was tested in one region, comprising the states of Illinois and Indiana, for the duration of the 1989 panel, which was terminated after just three of the planned nine waves.  The calendar, displaying all 32 months that were to be covered by the panel, was completed by the interviewer after each interview and presented to the respondent as a reference tool during the next interview.  Used in this way, the calendar served as a form of dependent interviewing by allowing respondents to see their households’ responses from prior waves.  Some reduction in seam bias was observed for several of the items collected with the aid of the calendar.  The calendar also facilitated longitudinal editing and correction of the data.  There was no evidence to suggest that the calendar was rejected by either respondents or the field staff.

Lamas, Enrique, Thomas Palumbo, and Judith Eargle.  “The Effect of the SIPP Redesign on Employment and Earnings Data.”  SIPP Working Paper 217, U.S. Census Bureau, no date.

This paper focuses on the difference between the 1993 and 1996 SIPP.  The major change was a switch from paper-and-pencil personal interviewing to computer-assisted personal interviewing (CAPI).  In addition, the questions about income and employment were grouped together differently.  The results show the same percentage of persons working all weeks of a month but a lower percentage with no job who are either looking for employment or on layoff.  Moreover, CAPI shows higher mean and aggregate earnings, perhaps indicating a reduction in the level of underreporting in the SIPP.

Lamas, Enrique, Jan Tin, and Judith Eargle.  “The Effects of Attrition on Income and Poverty Estimates from the Survey of Income and Program Participation (SIPP).”  U.S. Census Bureau. Paper presented at the Conference on Attrition in Longitudinal Surveys, May 4, 1994.

Using several models of income and poverty that take attrition into account, the authors examine the effect of attrition from the SIPP on income and poverty correlates.  They also use simulations to examine the magnitude of potential attrition bias on poverty estimates.  They impute missing information for attritors and calculate poverty estimates for the complete panel.  To obtain an estimate of potential attrition bias, they use simulations for attritors to compare poverty estimates for the full panel to those of panel members with complete information.  The authors conclude that, although attrition had an effect on income and poverty estimates in the SIPP, the observed differences in the poverty estimates from the SIPP and CPS do not appear to result from either attrition or the other methodological differences between the two surveys.  The differences may result from better reporting in the SIPP of income at the lower end of the distribution, especially reporting of means-tested income and other short-term spells of income, but further work in this area is needed.

Liu, Hongji, and Ravi Sharma.  Report on Round 30 Income and Assets Imputation for MCBS Community Residents.  Memorandum from Westat to Frank Eppig, Centers for Medicare and Medicaid Services, June 12, 2002.

This memorandum reviews the procedures implemented to impute for income and assets in the Medicare Current Beneficiary Survey (MCBS) Round 30 Income and Assets Supplement.  The authors imputed the income and assets dollar amounts by using a hot deck imputation procedure and a predictive mean-matching procedure.  Total annual income for 2000 was missing for 25.56 percent of responses.  To assess the degree to which the imputation preserved the observed relationship among income, assets, and homeownership amounts in Round 30 and the previous round, the authors compute Pearson correlations.  The correlation coefficients for 2000 and 1999 income amounts are very similar for observed and completed Round 30 data.
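The hot deck step described here can be sketched in a few lines.  This is a minimal illustration only, assuming hypothetical imputation cells defined by homeownership; it is not the Westat/MCBS implementation, which also incorporates predictive mean matching:

```python
import random

def hot_deck_impute(records, key, cell_vars):
    """Fill missing values of `key` by drawing a donor at random from the
    same imputation cell (cells defined by the `cell_vars` fields)."""
    donors = {}
    for r in records:  # collect observed values by cell
        if r[key] is not None:
            donors.setdefault(tuple(r[v] for v in cell_vars), []).append(r[key])
    for r in records:  # assign each missing value a donor from its own cell
        if r[key] is None:
            pool = donors.get(tuple(r[v] for v in cell_vars))
            if pool:
                r[key] = random.choice(pool)
    return records

# Hypothetical respondents; income is the item to be imputed
people = [
    {"owns_home": True,  "income": 52000},
    {"owns_home": True,  "income": 61000},
    {"owns_home": True,  "income": None},   # receives a donor from the owner cell
    {"owns_home": False, "income": 18000},
    {"owns_home": False, "income": None},   # receives the only renter donor
]
hot_deck_impute(people, "income", ["owns_home"])
```

Predictive mean matching refines this idea by selecting the donor whose model-predicted income is closest to the recipient’s prediction, rather than drawing at random within a cell.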

Loomis, Laura, and Jennifer Rothgeb.  Final Report on Cognitive Interview Research Results and Revisions to the Welfare Reform Benefits Questions for the March 2000 Income Supplement to the CPS. Survey Methodology #2005-02.  Statistical Research Division, U.S. Census Bureau, March 14, 2005.

This report describes the results of cognitive interview research on questions about welfare benefits that were included in both the 1998 and 1999 March Income Supplement of the CPS.  The questions represent the CPS’s first attempt to measure participation in welfare after a new law passed in 1996 instituted the Temporary Assistance to Needy Families (TANF) program.  The report makes recommendations on welfare-reform related questions dealing with receipt of cash assistance, cash diversion assistance, transportation and child care assistance, and participation in work-related training activities.  The authors include the final decisions made by the Housing and Household Economic Statistics Division.

Lynn, Peter, Annette Jackle, Stephen P. Jenkins, and Emanuela Sala.  “The Effects of Dependent Interviewing on Response to Questions on Income Sources.”  Journal of Official Statistics, vol. 22, no. 3, 2006, pp. 357-384.

The term “dependent interviewing” generally refers to structured interviews whereby the choice and/or wording of questions varies across sample members, depending on information maintained by the survey organization about the sample member.  Typically, the information comes from a previous survey, although it may come from administrative data or the sample frame.  Using an experimental design, the authors compare two approaches to dependent interviewing to traditional independent interviewing for a module of questions about sources of income.  The authors compare the three approaches to questioning in terms of the effect on underreporting of income sources and related bivariate statistics.  The study design also permits identification of the characteristics of respondents whose responses are sensitive to interview mode.  The authors conclude that underreporting can be significantly greater with independent interviewing than with either form of dependent interviewing, especially for income sources that are relatively common or relatively easy to forget.  They also find that dependent interviewing is helpful as a recall aid for respondents below retirement age and for registered disabled persons.

Mack, Stephen, and Rita Petroni.  “Overview of SIPP Nonresponse Research.”  Presented at the Fifth International Workshop on Household Survey Non-Response, Ottawa, Canada, September 26–28, 1994.

In providing an overview of various weighting techniques for the SIPP, the authors find no strong evidence that alternative longitudinal weighting intended to address nonresponse levels and bias actually reduces either problem.  The authors use constrained response propensity adjustments for panel nonresponse in an effort to reduce the bias of subsequent waves’ nonresponse.  The results, however, do not demonstrate any reduction of nonresponse bias from this approach.  Finally, the authors build on research suggesting that the use of a Chi-Squared Automatic Interaction Detector algorithm in conjunction with several alternative panel nonresponse adjustments (ranking adjustment, logistic regression, logistic regression/observed, and collapsed cells) offers a possible means of reducing bias in the estimates.  Results show, however, that none of the seven nonresponse adjustments was better than the others at reducing panel nonresponse bias.  Thus, the paper suggests that, while none of the above methods is effective in reducing nonresponse bias between rounds of data collection, the SIPP staff will continue experimenting with different weights in an effort to obtain the highest-quality data.

Marquis, Kent H., and Jeffrey C. Moore.  “SIPP Record Check Results: Implications for Measurement Principles and Practice.”  SIPP Working Paper 126, U.S. Census Bureau, no date. http://sipp.census.gov/sipp/workpapr/wp126.pdf

The SIPP Record Check uses a “full” as opposed to a one-directional design; that is, the evaluation checks both “yes” and “no” reports of program participation and obtains program participation records for eight government transfer programs administered by four states (Florida, New York, Pennsylvania, and Wisconsin) and the federal government.  From each agency, the authors obtained identifying information to match records and monthly benefit amounts in order to measure response error.  They find that misclassification error percentages for monthly reports of program participation and program participation changes are very low for each program.  The net bias in estimates of the mean level of program participation ranges from -3 to -39 percent, indicating that the estimated mean is usually lower than the true mean.  They discuss measures that could reduce measurement error in the SIPP, such as statistical error correction and control and design changes.

Marquis, Kent H., and S. James Press.  “Cognitive Design and Bayesian Modeling of a Census Survey of Income Recall.”  In Federal Committee on Statistical Methodology, 1999 Research Conference, pp. 51-64.  http://www.fcsm.gov/papers/index.html.

This paper investigates ways of combining Bayesian estimation and cognitive psychology to make estimates from data containing response errors.  If respondents can judge the quality of their answers, then the authors’ approach may work well.  However, the paper shows that asking respondents for a range associated with their income proved burdensome for both respondent and interviewer.  Many people had difficulty with the concept of providing a range, even when presented with a practice question.  CATI techniques ensured that each respondent’s best guess fell in the given range.  Still, some respondents’ actual values were on the border of their reported range, and, for the question on interest and dividends, many people did not want to provide a range.  Other people appeared not to be motivated to think hard enough to give reasonable answers.  Overall, more fine-tuning is needed to make the paper’s approach useful.

Martini, Alberto.  “Research Grant Summaries: Why SIPP and CPS Produce Different Poverty Measures among the Elderly.”  Social Security Bulletin, vol. 60, no. 4, 1997, pp. 50-55.

The purpose of this research is to document the divergence between SIPP and CPS poverty measures among the elderly and to explain why such divergence arises, with particular focus on the role played by the reporting of various income sources.  On average across four years (1987, 1988, 1990, and 1991), the SIPP poverty rates for the elderly are about 27 percent lower than in the CPS (9 versus 12 percent).  The author also observes larger SIPP-CPS discrepancies among men than among women and larger discrepancies for married than nonmarried persons and for those living with others versus those living alone.  The SIPP not only finds fewer poor people, it also finds that those counted as poor are on average somewhat better off than their CPS counterparts.  The average income-to-needs ratio is about 78 percent among the SIPP elderly, whereas it is 71 percent in the CPS.  The author notes that the SIPP counts more recipients for all sources of income.  However, with the exception of self-employment income and Social Security benefits, average amounts among SIPP recipients are lower than their CPS counterparts.  The author concludes that differences in the reporting of Social Security benefits seem to account for at least half of the observed poverty rate differential among the elderly in the SIPP and CPS.  The other half of the differential can be explained by a combination of many other factors, of which only some can be precisely identified.  Among them, the author notes the role of differences in the treatment of attrition and family composition, the interaction between income sources, and the role of other aspects of income reporting, such as part-year income and small amounts of income.

Mathiowetz, Nancy A., Charlie Brown, and John Bound.  “Measurement Error in Surveys of the Low-Income Population,” in Studies of Welfare Populations: Data Collection and Research Issues, edited by Michele Ver Ploeg, Robert A. Moffitt, and Constance F. Citro.  Panel on Data and Methods for Measuring the Effects of Changes in Social Welfare Programs, Committee on National Statistics, Division of Behavioral and Social Sciences and Education. Washington, DC: National Academy Press, 2002.

The authors provide an introduction to sources of measurement error and examine two theoretical frameworks (cognitive and social psychological) for understanding the various sources of error.  They review the empirical literature concerning the quality of responses for reports of earnings and transfer income to identify those items most likely to be subject to response error among the welfare population.  The paper concludes with suggestions for attempting to reduce the various sources of error through alternative questionnaire and survey designs.  Such alternatives include the use of filter questions to determine the complexity of the experience and the use of different follow-up questions for those with simple and complex behavior.  For example, the questionnaire might ask the respondent whether the amount of income from a particular income support program varies from month to month, with follow-up questions based on the response.  The authors also suggest that simple, single-focus questions are often more effective than complex, compound questions.  In addition, they suggest reducing cognitive burden by asking income questions in the form of recognition (Did you receive income from X?) rather than relying on free recall.  Similarly, they suggest requesting earnings for the time period about which the respondent is best able to report.  Finally, they suggest that unfolding income brackets may result in less nonresponse to income questions.
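The unfolding-brackets idea—replacing an exact-amount question with a short sequence of yes/no threshold questions—can be sketched as follows.  The cut points here are hypothetical, not taken from the chapter:

```python
def unfolding_brackets(answers, thresholds=(20000, 50000, 100000)):
    """Walk an ascending sequence of "Was it more than $t?" questions.
    Each yes raises the lower bound; the first no fixes the upper bound.
    Returns (low, high); high is None if every answer was yes."""
    lo, hi = 0, None
    for t, more in zip(thresholds, answers):
        if more:
            lo = t
        else:
            hi = t
            break
    return (lo, hi)
```

A respondent unwilling or unable to report an exact amount still yields a usable range; for example, answering yes then no places income between the first and second thresholds.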

McGonagle, Katherine A., and Robert F. Schoeni. “The Panel Study of Income Dynamics: Overview & Summary of Scientific Contributions After Nearly 40 Years.”  http://psidonline.isr.umich.edu/Publications/Papers/montreal.pdf, January 30, 2006.

The authors describe the history of the PSID design as well as the key features of the design and content of the survey.  The PSID sample was originally drawn from two independent samples: an over-sample of approximately 2,000 low-income families from the Survey of Economic Opportunity and a national sample of approximately 3,000 households designed by the Survey Research Center, University of Michigan.  They describe how the sample changed over the years as children leaving their parents’ households were interviewed as their own family units.  In addition, in 1990 the PSID added 2,000 Latino households, and while this sample represented a major group of immigrants, it did not cover all immigrants arriving since 1968, especially Asians.  Due to this shortcoming and insufficient funding, the Latino sample was dropped after 1995.  In 1997 two major changes to the sample were made: 1) a reduction of the core sample and 2) the introduction of a refresher sample of post-1968 immigrant families and their adult children.

The authors conclude with the strengths and weaknesses of the survey.  The strengths include consistently high response rates, the longevity of the data collection, a sample that is nationally representative and genealogically based, content domains that are broad and recurring, and innovative supplements.  There are five main weaknesses of the study.  First, as a result of the longevity of the panel, cumulative attrition is an issue.  Of the 18,192 individuals in the sample in 1968, 5,282 were alive and interviewed in 2001; the remainder either died, were explicitly dropped from the study in 1997, or attrited.  A second limitation is the periodicity of the PSID data collection.  Currently, data are collected every other year, but the data cover the entire two-year period.  The two-year reference period is especially disadvantageous for the collection of income and employment data.  Third, until 1997 the PSID did not interview household members other than the family head and wife.  This limitation was addressed in 1997 and 2002 with the Child Development Supplements.  Moreover, a pilot study was launched in 2005 to interview children who had participated in the supplements and were at least 18 years old but not yet family heads or wives.  A fourth weakness is the limited types of data that can be collected by a telephone interview.  A fifth weakness is that new immigrants to the U.S. are not continuously represented in the sample.  A large number of immigrants have arrived since 1999, and the PSID cannot be used to assess their outcomes.

McGrath, David E.  “Comparison of Data Obtained by Telephone versus Face to Face Response in the U.S. Consumer Expenditures Survey.”  Proceedings of the Annual Meeting of the American Statistical Association [CD-ROM].  Alexandria, VA:  American Statistical Association, 2005, pp. 3368-3375.

The CE was designed to collect data by personal visit.  However, 42 percent of households report by telephone.  The paper examines whether mode of data collection has a significant impact on data quality.  White, non-Hispanic, highly educated people are more likely to report by telephone.  By modeling expenditure data with a logistic regression model, the paper finds that mode of collection does not affect total expenditures.  However, telephone respondents are more likely to refuse income questions, so telephone data are allocated and imputed at significantly higher rates than data collected by personal visit.  In addition, the paper finds that interviewers rather than respondents have the largest impact on whether the CE is completed by telephone or personal visit.

Meyer, Bruce D., Wallace K.C. Mok, and James X. Sullivan.  “The Under-Reporting of Transfers in Household Surveys: Comparisons to Administrative Aggregates.”  Manuscript, March 7, 2007, bdmeyer@uchicago.edu.

Household surveys often underreport benefit receipt for reasons such as imperfect recall, a desire to reduce interview burden, the stigma of program participation, or the sensitivity of income information.  This paper examines survey reports of benefit receipt from unemployment insurance, workers’ compensation, Social Security, Supplemental Security Income, food stamps, the earned income tax credit, and Aid to Families with Dependent Children/TANF.  The authors analyze data from the CPS ASEC, the PSID, and the SIPP and compare the weighted totals reported by households for these programs with those published by government organizations.  The research results show sharp differences across programs and surveys as well as over time.  Surveys differ systematically in their ability to capture benefit receipts.  The SIPP typically has the highest reporting rate for government transfers, followed by the CPS and PSID.  However, unemployment insurance and workers’ compensation are reported at a slightly higher rate in the CPS than in the SIPP.  These differences are informative as to the relative importance of the various reasons for underreporting.  The reporting rates provided by the authors can also be used to adjust estimated program effects on income distribution and estimates of program take-up.

Meyer, Bruce D., and James X. Sullivan.  “Measuring the Well-Being of the Poor Using Income and Consumption.”  The Journal of Human Resources, vol. 38, supplement, 2003, pp. 1180-1220.

This article compares income and consumption as measures of the material well-being of the poor.  After reviewing the conceptual and pragmatic reasons that favor income or consumption, the authors examine relevant findings from earlier research and present an empirical analysis using income and consumption data from the CE and the PSID. Comparisons of percentile distributions of income, expenditures, and consumption as well as average income and expenditures show that in both surveys, reported expenditures exceed reported income among low-educated single mothers and among all families at the low ends of both distributions.  Reported expenditures among families with low reported incomes provide evidence that incomes in this subpopulation are substantially understated.  The authors review evidence from other studies that indicate substantial under-reporting of key components of income in the CPS and SIPP.  Finally, the authors examine other measures of hardship and material well-being by level of income and consumption among low-educated and all single mothers in the CE and PSID.  The findings suggest that reported consumption does a better job than reported income in capturing well-being among disadvantaged families.

Moon, Marilyn, and F. Thomas Juster.  “Economic Status Measures in the Health and Retirement Study.”  The Journal of Human Resources, vol. 30, supplement, 1995, pp. S138-S157.

This paper offers a flavor of the major economic status variables in the HRS, provides some preliminary analysis of the quality of the data, and takes a preliminary look at the interrelationships among economic status measures such as income and wealth and other important variables, including health status, pension rights, and health insurance coverage.  The authors also compare the first wave of HRS income data with data for all households headed by a person between the ages of 51 and 61 from the March 1992 CPS.  They find strong similarities in the amount and distribution of income in the two data sets.  Poverty rates are somewhat lower in the CPS.

Moore, Jeffrey C., and Laura Loomis.  “Using Alternative Question Strategies to Reduce Income Nonresponse.” Proceedings of American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 2000, pp. 947-952.

This paper describes research that builds on the unfolding brackets approach to asking about income and tests a new form of income range reporting, which the authors label “implicit brackets.”  The authors conducted the research as part of the Census Bureau’s Questionnaire Design Experimental Research Survey, which was a split-sample experiment using a paper-and-pencil instrument in a telephone interview with a random digit dial sample.  For the experimental “implicit brackets” treatment, the question format consisted of two parts:  (1) whether annual income for 1998 was more or less than $X, where $X was a minimum amount varying by asset type; and (2) if the answer was “more,” then the respondent was asked, “How much was it to the nearest $Z?”  The second question in effect establishes response brackets of width $Z.  The authors evaluate five asset income sources: checking accounts, savings accounts, certificates of deposit, mutual funds, and stocks.  For all five asset income sources, the item nonresponse rate for the experimental treatment was lower than for the control.  However, all of the improvement came from a reduction in “don’t knows” and not from a reduction in refusals.  The authors also find that the distribution of income responses did not differ by questionnaire treatment.  Finally, they find that the experimental treatment seemed to increase report precision.
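The bracket implied by the two-part format can be derived mechanically.  The sketch below is illustrative, using hypothetical values for $X and $Z rather than the amounts from the actual experiment:

```python
def implicit_bracket(min_amount, round_to, more_than_min, rounded_report=None):
    """Translate the two-part question into an income bracket:
    (1) was the amount more or less than $min_amount?
    (2) if more, an amount reported "to the nearest $round_to"
        implies a bracket of width $round_to around the report."""
    if not more_than_min:
        return (0, min_amount)
    return (rounded_report - round_to / 2, rounded_report + round_to / 2)

# Hypothetical example: savings-account interest with $X = $25, $Z = $100
low_bracket = implicit_bracket(25, 100, more_than_min=False)
high_bracket = implicit_bracket(25, 100, more_than_min=True, rounded_report=300)
```

Unlike explicit unfolding brackets, the respondent never sees the bracket; it is implied by the rounding instruction, which is why the authors call the format “implicit.”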

Moore, Jeffrey C., Kent H. Marquis, and Karen Bogen.  “The SIPP Cognitive Research Evaluation Experiment: Basic Results and Documentation.”  SIPP Working Paper 212,  Statistical Research Division, U.S. Census Bureau, January 11, 1996.

The Census Bureau implemented a test of new procedures designed to reduce measurement error.  One procedure asked household respondents to use their personal income records instead of relying on memory.  The results indicate that the procedures had no effect on reducing either under- or over-reporting of participation in income programs.  However, the new procedures did produce substantial improvement in reporting income amounts.

Moore, Jeffrey C., Linda L. Stinson, and Edward J. Welniak, Jr.  “Income Measurement Error in Surveys: A Review.”  Journal of Official Statistics, vol. 16, no. 4, 2000, pp. 331-361.

This paper reviews what is known about income measurement errors.  It focuses on response error research by comparing individual survey respondents’ reports to external measures of truth obtained from independent record systems.  The paper finds that errors in individual surveys include both bias and random error, with substantially varying propensities for these errors across different income types.  However, the authors cite several papers indicating that 95 percent of reported wages and salaries are accurate and concluding that over- and underreporting tend to cancel out (although amounts are slightly underreported on net).  Research on transfer programs indicates a large and consistent negative bias, while many sources indicate that assets suffer from severe underreporting.  The paper also finds that respondents have trouble understanding income concepts and terms such as “nonwage cash payments” or “total family income.”  Others have trouble retrieving information and constructing “monthly pay,” for example.  Some surveys have found that telling respondents to use records increases accuracy but also places further burden on both respondent and interviewer.  Overall, the paper concludes that several problems need to be solved in order to improve income measurement.

Moyer, M. Eugene.  Counting Persons in Poverty in the Current Population Survey.  August 1998, http://aspe.os.gov/rn/rn20.htm.

The Census Bureau estimated that 36.5 million persons were in poverty in 1996.  However, if an analyst were to estimate from the CPS the number of persons in families whose income is less than the poverty level, the estimate would be higher.  Two reasons explain the difference.  First, by definition, unrelated children under age 15 have no income because the CPS does not ask about their income.  They tend to live with families that are not poor.  The U.S. Department of Health and Human Services estimates that 40 percent of these children are foster children placed with the family, and, while the family is not poor, the children were poor when they were placed with the family and probably will again be poor when they return to their birth parents.  Therefore, the Department has always included them in its count of persons in poverty.  Second, some families contain subfamilies.  If the analyst counts the subfamily as part of the primary family (as the Census Bureau does), the entire family is likely to have income higher than the poverty level, and no one in the family would be counted as being in poverty.

Nelson, Charles.  “What Do We Know about Differences between CPS and ACS Income and Poverty Estimates?”  Housing and Household Economic Statistics Division, U.S. Census Bureau, August 21, 2006.

The author summarizes methodological and conceptual differences between the CPS ASEC and ACS as well as differences in the timing of estimates and then compares national estimates and measures of sampling and nonsampling error.  The methodological differences include mode of data collection, reference period, income question detail, sample size, survey universe, family unit definition, and residence rules.  The differences in timing of estimates can be seen at the national level; CPS results released in August 2006 were based on a somewhat more recent time period than the ACS results.  The comparisons of national estimates show that the ACS and CPS were similar in 2004 in that both surveys indicated a rise in poverty between 2003 and 2004, with no change in real median household income over the period.  In terms of point estimates, the ACS poverty rate (13.3 percent) in 2005 was higher than the CPS national rate of 12.6 percent.  The CPS poverty rate was lower than the ACS rate in five out of six years between 2000 and 2005, and the rates were not statistically different in the sixth year.  The relationship between ACS and CPS median household income has not been consistent; two years showed different estimates, and four years had estimates that were not statistically different.

The author continues by comparing measures of sampling and nonsampling error.  At the state level, the author finds that the standard errors of the ACS poverty rates are significantly smaller than the comparable CPS single- or three-year poverty rate standard errors.  And while the 2004 CPS aggregate total money income estimate of $6.940 trillion was slightly higher than the 2004 ACS aggregate of $6.862 trillion, the author points out three types of income for which the ACS aggregate was higher than the CPS aggregate—self-employment income, public assistance, and retirement income.  The author speculates that the difference could be attributable to respondent reporting error, differences in the questionnaire, and differences in how the estimates are constructed.  The weighted unit response rate for the ACS is around 97 percent, while the CPS ASEC combined response rate is around 80 percent.  Moreover, item nonresponse rates in the CPS ASEC are higher than comparable ACS figures.  Therefore, it would appear that differences in imputation methodology between the two surveys should be considered a potential source of differences between the two estimates.  Coverage error could also be a source of differences.  The ACS coverage rate is 95 percent, and the CPS coverage rate is around 89 percent.

The author concludes with a comparison of state distributions of poverty and income estimates from the CPS and ACS.  In 13 states, the 2004–2005 CPS poverty rate was lower than the 2005 ACS rate.  The CPS rate was higher than the ACS rate in two states, Maryland and New York.  The author concludes from various Chi-squared test results that strong evidence shows that the 2004–2005 CPS and 2005 ACS estimate different geographic distributions of poverty.

Nelson, Charles T., and Patricia Doyle.  “Recommendations for Measuring Income and Program Participation in the Post Welfare Reform Era.”  Proceedings of the American Statistical Association, Government Statistics and Social Statistics Sections. Alexandria, VA: American Statistical Association, 1999, pp. 54-63.

Changes to means-tested benefit systems under welfare reform made it necessary for surveys that collect data on program participation and benefit receipt to modify their questions to avoid losing reported benefits.  A topical module administered in wave 8 of the 1996 SIPP panel collected data to determine how welfare reform was affecting the way that people maintained program eligibility and received benefits.  This paper discusses planned changes to the core content of the SIPP based on early analysis of the wave 8 topical module data and recommends that portions of the wave 8 topical module be added to future SIPP panels to provide a continuous source of information on the changes in forms of benefit receipt brought about by changes in the way that government benefits are delivered.

Olson, Janice A.  “Social Security Benefit Reporting in the Survey of Income and Program Participation and in Social Security Administrative Records.”  SIPP Working Paper 235, U.S. Census Bureau, 2001.  http://www.sipp.census.gov/sipp/workpapr/wp235.pdf.

This paper examines the consistency between Social Security benefit amounts reported in the SIPP and provided in SSA administrative records.  A particular interest, especially for the elderly, is whether the amounts reported in the SIPP include the amount of Supplementary Medical Insurance (SMI) or the Medicare Part B premium. Only 25 percent of the elderly and 42 percent of the nonelderly reported a Social Security benefit amount in the SIPP that was within $1 of the amount in SSA administrative records. About three-quarters of both groups reported an amount within 10 percent of that in the records.  This analysis suggests that beneficiaries under age 65 who were retired workers, aged spouses, and aged widows are the best reporters.  Roughly half of them reported amounts matching the Monthly Benefit Credited in the SSA data, a result consistent with the idea that those newly on the program are more likely to have accurate recall of the benefit amount they receive. In contrast, only about a quarter of disabled workers and of beneficiaries age 65 and over (regardless of type) reported consistent amounts.  In the SIPP, underreporting of Social Security benefit amounts by the amount of the Medicare premium does not appear to be a major problem among elderly or disabled beneficiaries, although disproportionate shares of both groups make such reports.  However, possible measurement error, particularly substantial underreporting by those at the low end of reported benefit amounts (and, to a lesser degree, overreporting at the high end), may be a nontrivial problem, especially among the elderly.
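The consistency measures Olson reports (the shares of reports within $1 and within 10 percent of the administrative amount) are straightforward to compute.  The benefit amounts below are made up for illustration:

```python
def consistency_rates(pairs):
    """pairs: (survey_report, admin_amount) tuples.  Returns the share of
    reports within $1 of the record and the share within 10 percent."""
    n = len(pairs)
    within_dollar = sum(abs(s - a) <= 1 for s, a in pairs) / n
    within_10pct = sum(abs(s - a) <= 0.10 * a for s, a in pairs) / n
    return within_dollar, within_10pct

# Hypothetical monthly Social Security amounts (survey report, SSA record)
rates = consistency_rates([(742, 742), (700, 742), (650, 742), (1100, 1000)])
```

Because the within-$1 criterion is much stricter than the within-10-percent criterion, the two shares can differ substantially, as in the SIPP comparison described above.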

Patil, Vrushali, and J. Neil Russell.  Final Report of the 2000 National Health Interview Survey Welfare Pretest.  Centers for Disease Control and Prevention, National Center for Health Statistics, Division of Health Interview Statistics, September 2000.

This report analyzes the test of various versions of welfare reform–based questions.  The test was needed to evaluate and revise old questions after the 1996 implementation of welfare reform.  The test used a split-ballot questionnaire design to examine the wording of seven questions as well as a split-ballot design for block areas where low-income respondents resided.  Given time constraints, the test did not randomly assign questionnaires.  The authors use logistic regression to analyze information about the questions and find that different wording would increase understanding of the questions for several items.

Paulin, Geoffrey, and David Ferraro.  “Imputing Income in the Consumer Expenditure Survey.”  Monthly Labor Review, vol. 117, no. 12, 1994, pp. 23-31.

This article summarizes methods of adjusting for nonresponse bias in the CE.  In the early part of the twentieth century, account balancing was used to eliminate large gaps between family income and expenditures.  More recently, more complex methods have found application.  For example, a hot deck method assigns missing values from a donor in the same demographic group but has proven problematic because the CE sample size is relatively small.  Another method is model-based and creates a statistical model to impute missing values.  Models can be specified at the member or family level.  Paulin and Ferraro explore whether income can be modeled as a function of expenditures.  Other research, by Chand and Alexander, uses stochastic methods to impute income separately for each member and each source of income.
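As an illustration of the hot deck method described above, the sketch below fills a missing income value with one drawn from a donor in the same demographic group; the record layout, group labels, and dollar figures are hypothetical, and production imputation systems use far richer matching variables:

```python
import random

def hot_deck_impute(records, group_key="group", value_key="income", seed=0):
    """Hypothetical hot-deck sketch: fill each missing value with a value
    reported by a randomly chosen donor from the same demographic group."""
    rng = random.Random(seed)
    # Pool the reported (non-missing) values by demographic group.
    donors = {}
    for r in records:
        if r[value_key] is not None:
            donors.setdefault(r[group_key], []).append(r[value_key])
    imputed = []
    for r in records:
        filled = dict(r)
        if filled[value_key] is None:
            pool = donors.get(filled[group_key])
            if pool:
                filled[value_key] = rng.choice(pool)
            # else: the small-sample problem noted in the article --
            # no donor exists within the group, so the value stays missing.
        imputed.append(filled)
    return imputed
```

The empty-pool branch is exactly the difficulty the article attributes to the CE's relatively small sample: fine demographic cells may contain no donor at all.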

Paulin, Geoffrey D., and Elizabeth M. Sweet. “Modeling Income in the U.S. Consumer Expenditure Survey.”  Journal of Official Statistics, vol. 12, no. 4, 1996, pp. 403-419.

Nonresponse to income questions is common in household surveys.  This study examines wage and salary income data from the 1988–1990 CE.  A large portion (15 percent) of the sampled families are classified as incomplete income reporters, and not all complete income reporters provide a full accounting of all sources of income.  The authors explore different procedures designed to yield a model-based imputation strategy for wage and salary income of two-person consumer units.  The two-member units represent a link between single-member consumer units (with few inherent difficulties for modeling) and more complex multiple-member consumer units (with several inherent difficulties).  Selected variables from each consumer unit are synthesized into a final model that is tested for proper specification.  Results of the final model indicate that imputation increases the means of published CE income data.

Pedace, Roberto, and Nancy Bates.  “Using Administrative Records to Assess Earnings Reporting Error in the Survey of Income and Program Participation.”  Journal of Economic and Social Measurement, vol. 26, 2000, pp. 173-192.

This paper analyzes income misreporting propensities and magnitudes by using the 1992 SIPP longitudinal file matched to Social Security Summary Earnings Records.  Specifically, the authors focus on wage and salary and self-employment earnings.  The paper compares SIPP data to SSA records under the assumption that the SSA data represent the “truth.”  The findings suggest that the 1992 SIPP accurately estimated the net number of earnings recipients but tended to underestimate amounts received.  The mean difference in dollar amounts between the SIPP and SSA records was -$459, although the magnitude and direction of difference varied by income category.  Notably, the SIPP overreports earnings in the lowest income categories but underreports for those with at least $20,000 in earnings.  The authors use a logit model to estimate misreporting.  Misreporting rates were significantly higher among those age 50 to 64, males, Hispanics, blacks, Asians, those without a college education, and those who are married or divorced.  Those who work in farming/forestry/fishing, craft, or military occupations also had significantly larger reporting errors.  The self-employed tended to overreport.

Pleis, John R., and James M. Dahlhamer.  “Family Income Nonresponse in the National Health Interview Survey: 1997–2000.”  Proceedings of the American Statistical Association, Section on Survey Research Methods [CD-ROM]. Alexandria, VA: American Statistical Association, 2003, pp. 3309-3316.

The goal of the paper is to analyze different types of nonresponse in NHIS income data.  Most studies treat “don’t know” and refusals the same way, but the paper finds that different types of people are more likely to refuse to answer or to answer “don’t know.”  Refusers were likely to have higher incomes, whereas “don’t knows” had lower incomes.  Education, marital status, and current employment status are variables that seem to indicate whether a respondent is more likely to refuse than say “don’t know.”  The paper also shows that those with a GED were more similar to those without a high school education than those with a diploma.  The paper argues that type of nonresponse should be considered when imputing income data, perhaps requiring different follow-up questions depending on type of nonresponse.

Pleis, John R., and James M. Dahlhamer.  “Family Income Response Patterns for Varying Levels of Income Detail: An Analysis of the National Health Interview Survey (NHIS).”  Proceedings of the American Statistical Association, Section on Survey Research Methods [CD-ROM]. Alexandria, VA: American Statistical Association, 2004, pp. 4200-4207.

This paper measures how much detail people were willing to give about their incomes and what characteristics affected their willingness to disclose such information.  The categories used in the ordinal regression were no information given, income greater or less than $20,000, income chosen from a list of categories with $5,000 increments, or exact amount given.  The variables that appreciably increased the amount of income detail were age (younger), race (multiracial), employment in the previous year (employed), marital status (married), income sources (more), and adults in the family (fewer).  The data show that nonresponse bias is likely to affect analyses involving total family income.

Pleis, John R., James M. Dahlhamer, and Peter S. Meyer.  “Unfolding the Answers?  Income Nonresponse and Income Brackets in the National Health Interview Survey.”  Proceedings of the American Statistical Association, Section on Survey Research Methods [CD-ROM].  Alexandria, VA:  American Statistical Association, 2006, pp. 3540-3547.

Nonresponse to income-related survey questions is problematic and may lead to biased estimates.  In the NHIS, respondents are first asked to provide the exact dollar amount of the family's income in the previous calendar year; nonresponse to this item is roughly 30 percent.  Previously, follow-up questions based on income intervals had had minimal effect on lowering nonresponse.  This paper analyzes results of a test that used NHIS screened-out households in April–June 2006 to pose alternative income questions using unfolding brackets.  Respondents were randomly assigned to the existing or alternative method.  Alternative methods for asking about the sources of income were also used to assess whether item nonresponse for income could be reduced: instead of asking about each source separately, a flashcard approach asked families only about the income sources of which they initially indicated receipt.  According to results from the 2006 field test, the alternative follow-up income questions (unfolding brackets) performed much better than the follow-up income questions used since the 1997 NHIS.  The path completion rate was approximately 47 percent for the unfolding brackets, compared with 12 percent for the follow-up questions used since 1997.  Based on these favorable results, the unfolding bracket follow-up income questions were incorporated into the NHIS beginning in 2007.

Posey, Kirby G., and Edward Welniak.  Income in the ACS: Comparisons to the 1990 Census.  U.S. Census Bureau, March 25, 1998, http://www.census.gov/acs/www/AdvMeth/Papers/ACS/Paper16.htm.

This paper compares income estimates in the 1996 ACS to income estimates in the 1990 Decennial Census.  The major difference between the two surveys was that the ACS asked about income in the last 12 months while the census asked about income in the previous calendar year.  A split-panel study conducted October through December 1997 showed that significantly less wage and salary income was reported for the last 12 months than for the prior calendar year.  In addition, given that the ACS uses 12 possible reference periods (depending on when respondents answered the survey), it must use inflation adjustment factors to account for the various periods.  Allocation and imputation schemes used in the ACS were largely the same as those used in the census.  The adjusted median income results for the four ACS sites of interest were lower than the census results.  The authors attribute this finding to the recession as well as to the use of national CPI factors in local areas.  A final analysis also finds that median household income estimates from mail returns and CATI matched the census figures much more closely than did the CAPI interviews (which produced lower estimates).

Posey, Kirby G., Edward Welniak, and Charles Nelson.  “Income in the American Community Survey: Comparisons to Census 2000.”  Proceedings of the Annual Meeting of the American Statistical Association [CD-ROM].  Alexandria, VA:  American Statistical Association, 2003, pp. 3352-3359.

The ACS and 2000 Decennial Census both collected information on total income.  However, the two surveys use different reference periods.  The ACS collects data throughout the year on an ongoing basis and asks for a respondent’s income over the “past 12 months.”  The 2000 Decennial Census collected income for 1999 (the last calendar year).  This paper describes a split-panel test conducted over the period October through December 1997 to evaluate the impact of a prior calendar year versus past 12 months reference period.  The only statistically significant differences in median income estimates between the two reference periods occurred in the earnings categories—wages/salary and self-employment.  The questionnaire with the “past 12 months” reference period produced slightly higher response rates for every income source, although only one income source, public assistance, showed a statistically significant difference.  The paper also describes the C2SS, an ACS program designed to demonstrate the feasibility of collecting long form–type information in a census environment.  The C2SS was conducted at the same time as the 2000 Decennial Census but as a separate effort.  Median household income estimates were generally lower in the C2SS/ACS than in the 2000 Decennial Census after adjusting the census’s 1999 dollar values for inflation.  Nationally, median household income was more than 4 percent lower in the C2SS than in the census.  Five states reported median household incomes that were more than 8 percent lower in the C2SS than in the census.  The C2SS’s median household income at the national level matched more closely with CPS estimates.  Surprisingly, of the three major Census Bureau household survey-based estimates of median income at the national level, the outlier is the 2000 Decennial Census estimate, not the C2SS or CPS estimate.  The authors conclude with possible explanations for the differences in income estimates.

Reichert, W. Jennifer, and John C. Kidelberger.  “Reliability of Income Poverty Data from the Current Population Survey Annual Demographic Supplement.” Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 2000, pp. 151-156.

The CPS Supplement is the source for estimating national poverty, and, for the first time, the Census Bureau used reinterviews to determine if people’s responses were consistent with their answers in an earlier interview.  The authors use an index of inconsistency to evaluate response consistency by taking the ratio of response variance to total variance for each question.  The model assumes that people’s responses on each survey are independent of each other.  Many of the questions about participation in poverty-related programs were subject to high variability, suggesting that the questions were unreliable and could confuse respondents.  Questions relating to household income were fairly reliable.
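As a sketch of the statistic described above, one common form of the index of inconsistency for a yes/no item divides the gross difference rate between interview and reinterview by an estimate of its expected maximum (a ratio of response variance to total variance); the exact estimator used in the Census Bureau's reinterview program may differ in detail:

```python
def index_of_inconsistency(original, reinterview):
    """Illustrative index of inconsistency for a yes/no item.

    original, reinterview: parallel lists of 0/1 responses from the
    interview and the reinterview.  The index is the gross difference
    rate divided by an estimate of the maximum expected difference rate;
    by convention, values above 50 indicate high inconsistency.
    """
    n = len(original)
    # Gross difference rate: share of cases whose answers changed.
    gdr = sum(a != b for a, b in zip(original, reinterview)) / n
    p1 = sum(original) / n       # proportion "yes" in the interview
    p2 = sum(reinterview) / n    # proportion "yes" in the reinterview
    denom = p1 * (1 - p2) + p2 * (1 - p1)
    return 100 * gdr / denom if denom else float("nan")
```

With half the sample answering "yes" each time and a quarter of answers changing, the index would be 50, right at the conventional threshold for an unreliable question.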

Rodgers, Willard, Charles Brown, and Greg J. Duncan.  “Errors in Survey Reports of Earnings, Hours Worked, and Hourly Wages.”  Journal of the American Statistical Association, vol. 88, no. 424, December 1993, pp. 1208-1218.

Data collected as part of a validation study for the PSID were analyzed to assess the quality of reporting of earnings, hours worked, and hourly wages for hourly employees of a single manufacturing firm.  In comparing reported values with the firm’s administrative records, the authors found that standard assumptions about measurement error were violated to varying degrees.  Signed errors were (negatively) correlated with true values, and errors in reported earnings and hours worked in different periods were generally (positively) correlated with errors in other periods.  Reporting errors followed an approximately normal distribution, with departures from normality being due primarily to a small number of outliers.  These exerted considerable influence on estimates of relationships between variables.  Overall, these results demonstrate the importance of validation studies as a source of realistic assumptions about measurement error.

Roemer, Marc.  “Using Administrative Earnings Records to Assess Wage Data Quality in the March Current Population Survey and the Survey of Income and Program Participation.”  Longitudinal Employer-Household Dynamics Program, Demographic Surveys Division, U.S. Census Bureau, November 19, 2002.  www.census.gov/hhes/income/papers.html.

The March CPS and SIPP produce different aggregates and distributions of annual wages.  The former reports an excess of high wages and a shortage of low wages; the latter reports the opposite.  Exactly matched Detailed Earnings Records from the SSA allow a comparison of March CPS and SIPP wages by using data independent of the surveys.  The findings show that the March CPS and SIPP represent a worker’s percentile rank better than the dollar amount of wages.  In addition, the March CPS accounts for a higher level of “underground” wages than does the SIPP, and increasingly so in the 1990s.  The March CPS also reports a higher level of self-employment income “misclassified” as wages than does the SIPP, again increasingly so in the 1990s.  These trends explain one-third of the March CPS’s 6 percentage point increase in aggregate wages relative to independent estimates from 1993 to 1995.  Finally, the paper identifies March CPS occupations disproportionately likely to have wages entirely absent from the administrative data or to have self-employment income misclassified as wages.

Roemer, Marc I.  “Assessing the Quality of the March Current Population Survey and the Survey of Income and Program Participation Income Estimates, 1990–1996.”  Income Surveys Branch, Housing and Household Economic Statistics Division, U.S. Census Bureau, June 16, 2000.

This paper establishes a methodology for deriving benchmarks from the National Income and Product Accounts (NIPA) and evaluates CPS and SIPP income estimates by comparing them to these benchmarks.  It also considers possible misestimates by the two surveys and explains the changes in the relationship between the surveys.  Some NIPA figures need adjustment for institutionalized individuals, decedents, those residing overseas, and those in the military without family.  Other adjustments address differences in what is considered income, such as lump-sum payments.  As for earnings, the March CPS estimate increased from 93 to 96 percent of benchmark.  The SIPP earnings estimate decreased from 90 to 88 percent of benchmark.  Among general categories of income, only SIPP pensions have improved relative to the March CPS and perhaps just slightly relative to benchmarks.  However, the paper concludes that redesigning the SIPP for 1996 does not seem to improve its income estimates.  Even though the SIPP identifies more recipients than the March CPS, it has lower income aggregates, posing a challenge to analysts.  In addition, analysis of tax returns matched to the March CPS shows the occurrence of both over- and underreporting, suggesting that comparing aggregate data to benchmarks may be a simplistic method of measuring data quality.

Roemer, Marc. “Reconciling March CPS Money Income with the National Income and Product Accounts: An Evaluation of CPS Quality.” Paper presented at ASA Joint Statistical Meeting, Baltimore, August 10, 1999.

This paper attempts to create income benchmarks to reconcile differences between the March CPS and NIPA definitions of income.  To compare the two, the author adjusts NIPA’s universe to include institutionalized individuals, decedents, those residing overseas, and those in the military.  In addition, the March CPS includes only cash that people can spend, whereas NIPA includes all economic resources, a difference corrected in the paper’s methodology.  The article also explains trends in measurement over time for various income measures.  The gap between the March CPS and NIPA estimates increased in 1996 as compared with previous years, but the overall completeness of the March CPS improved. 

Ruser, John, Adrienne Pilot, and Charles Nelson.  Alternative Measures of Household Income: BEA Personal Income, CPS Money Income, and Beyond.  Presented to the Federal Economic Statistics Advisory Committee, December 14, 2004.  http://www.bea.gov/bea/about/fesac/AlternativemeasuresHHincomeFESAC121404.pdf.

This paper compares personal income and money income and analyzes how they differ.  Bureau of Economic Analysis (BEA) personal income is income received from participation in production, from government and business transfer payments, and from government interest.  BEA estimates personal income largely from administrative sources.  CPS money income is defined as total pre-tax cash income earned by persons, exclusive of certain lump-sum payments and capital gains.  BEA estimates personal income at $8.678 trillion, while the CPS estimates money income at $6.446 trillion.  Nearly two-thirds of the difference is attributable to differences in the income types covered by the two sources and 18 percent to BEA adjustments and underreporting.  The Census Bureau has developed alternative measures that better describe economic well-being and reduce the measured gap between rich and poor.  An unresolved issue is whether some types of income (such as pensions) should be counted when accrued or when disbursed.  Many proposed alternative measures move toward the theoretical concept of income as the maximum amount that can be consumed while keeping real wealth unchanged.

Schenker, Nathaniel, Trivellore E. Raghunathan, Pei-Lu Chiu, Diane M. Makuc, Guangyu Zhang, and Alan J. Cohen.  “Multiple Imputation of Missing Income Data in the National Health Interview Survey.”  Journal of the American Statistical Association, vol. 101, no. 475, September 2006a, pp. 924-933.

The NHIS provides a rich source of data for studying relationships between income and health and for monitoring health and health care for persons at different income levels.  However, nonresponse rates are high for two key items:  total family income in the previous calendar year and personal earnings from employment in the previous calendar year. To handle missing data for family income and personal earnings, the authors perform multiple imputation of these items, along with employment status and the ratio of family income to the federal poverty threshold, for NHIS survey years 1997–2004.  This article describes the approach used in the multiple-imputation project and evaluates the methods by analyzing the multiply imputed data.  The analyses suggest that imputation corrects for biases that occur in estimates based on data without imputation and that multiple imputation usually results in lower estimated standard errors than analyses of data without imputation.

Schenker, Nathaniel, Trivellore E. Raghunathan, Pei-Lu Chiu, Diane M. Makuc, Guangyu Zhang, and Alan J. Cohen.  Multiple Imputation of Family Income and Personal Earnings in the National Health Interview Survey: Methods and Examples, July 30, 2006b, http://www.cdc.gov/nchs/about/major/nhis/2005imputedincome.htm.

The NHIS provides a rich source of data for studying relationships between income and health and for monitoring health and health care for persons at different income levels.  However, nonresponse rates are high for two key items: total family income in the previous calendar year and personal earnings from employment in the previous calendar year.  To handle the problem of missing data for family income and personal earnings, the authors performed multiple imputation of these items for NHIS survey years 1997–2005 and plan to create multiple imputations for 2006 and beyond as data become available.  The multiple imputations used an adaptation of Sequential Regression Multivariate Imputation that handles the hierarchical nature of the data.  Examination of observed data on two-category income (less than $20,000 versus $20,000 or more) suggests that multiple imputation corrects for biases that occur in estimates based on data without imputation (that is, based on complete-cases analysis).  Further, multiple imputation usually results in lower estimated standard errors than do analyses of the data without imputation.  For each survey year, data sets containing the imputed values, along with related documentation, are available from the NHIS Web site (http://www.cdc.gov/nchs/nhis.htm).
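Inference from such multiply imputed data rests on standard combining rules (Rubin's rules), which can be sketched generically as follows; this is an illustration of the general technique, not NCHS code:

```python
import statistics

def combine_multiple_imputations(estimates, variances):
    """Rubin's rules for combining analyses of m multiply imputed data sets.

    estimates: the point estimate from each completed data set.
    variances: the squared standard error from each completed data set.
    Returns the combined estimate and its total variance, which adds
    between-imputation variance (reflecting uncertainty about the missing
    values) to the average within-imputation variance.
    """
    m = len(estimates)
    q_bar = statistics.fmean(estimates)   # combined point estimate
    u_bar = statistics.fmean(variances)   # average within-imputation variance
    b = statistics.variance(estimates)    # between-imputation variance
    t = u_bar + (1 + 1 / m) * b           # total variance of q_bar
    return q_bar, t
```

The between-imputation term is why multiply imputed analyses still tend to report lower standard errors than complete-case analyses: the latter discard cases entirely rather than borrowing strength from observed covariates.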

Schwartz, Lisa, and Geoffrey Paulin.  “Improving Response Rates to Income Questions.” Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 2000, pp. 965-970. 

Income data in the CE Quarterly Interview Survey had a high missing rate of 17.7 percent in 1997.  Brackets, which are categories or ranges offered to respondents who initially refuse to report income, help in eliciting a partial response.  This study investigates the usefulness of bracketing techniques for the CE and compares three bracketing methods: (1) conventional bracketing, which presents the respondent with several researcher-determined data ranges; (2) unfolding bracketing, which asks the respondent a series of yes/no questions designed to narrow the respondent’s income range; and (3) respondent-generated intervals, which ask the respondent to provide the upper and lower limits on his or her income.  Sixty adults participated in mock CE interviews followed by intensive cognitive interviews.  The income item nonresponse rate was 18.1 percent but fell to 9.5 percent with the inclusion of brackets.  The results indicate that the unfolding technique is the least popular with respondents.  Respondents preferred the respondent-generated intervals technique, and the intervals they generated tended to be smaller than those specified by researchers.
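The unfolding technique compared in the study can be sketched as a ladder of yes/no questions; the thresholds and the one-directional flow below are illustrative simplifications, not the wording or cut points tested in the CE:

```python
def unfolding_bracket(answers, thresholds=(20000, 50000, 100000)):
    """Sketch of unfolding brackets: a respondent who will not report an
    exact amount answers a series of yes/no questions ("Is it $20,000 or
    more?"), each narrowing the income range.

    answers: a callable returning True if income is at least the given
    threshold.  Returns the (low, high) range implied by the answers;
    a high of None means the range is unbounded above.
    """
    low, high = 0, None
    for t in thresholds:
        if answers(t):     # income is at least t; raise the floor
            low = t
        else:              # income is below t; we have the ceiling
            high = t
            break
    return low, high
```

Even a refusal converted into a range like (50000, 100000) is a partial response that imputation can exploit, which is why bracketing cuts item nonresponse so sharply.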

Scoon-Rogers, Lydia.  “Evaluating Respondents’ Reporting of Social Security Income in the Survey of Income and Program Participation Using Administrative Data.”  Federal Committee on Statistical Methodology, 2005 Research Conference paper.

This paper looks at reporting error and the impact of excluding the Medicare Part B (or SMI) premium from reported Social Security benefits in the SIPP.  Using Social Security administrative records matched to SIPP records from the 1996 panel, the author finds that adding the SMI deduction to the reported benefit and correcting any additional error reduces the elderly poverty rate by 2.3 percentage points.  Unmatched records have a higher poverty rate than matched cases, suggesting that the uncorrected error among these cases could be even greater.

Sears, James, Kalman Rupp, and Melissa L. Koenig.  “Exploring Social Security Payment History Matched with the Survey of Income and Program Participation: An Assessment.”  Federal Committee on Statistical Methodology, 2003 Research Conference papers, pp. 49-57.

All SIPP panels have been matched to Social Security (OASDI) and SSI benefit history records, providing a valuable resource for assessing the quality of SIPP data.  This paper examines matched data for the 1996 and 2001 SIPP panels.  The match rate for the 1996 panel, 85 percent, is typical of earlier panels, but the match rate for the 2001 panel is only 60 percent.  The recent availability of actual payment record data instead of payment eligibility data further enhances the potential of the administrative data to evaluate and improve the accuracy of the survey data.  At all ages the reporting of OASDI benefit receipt is more accurate than the reporting of SSI receipt, but among the elderly about a third of those with neither benefit report receiving OASDI.  Nearly a fifth of the elderly with SSI fail to report it.  Among those who correctly reported receiving OASDI in the 2001 panel, 53 percent reported benefits that were within $10 of the actual amounts, compared to 62 percent in the 1996 panel.  In addition, large errors grew in frequency, with 22 percent of the 2001 panel versus 16 percent of the 1996 panel misreporting their benefits by $100 or more.  The paper concludes that both survey error and the quality of the SSN match need careful consideration.  With the sharp decline in the match rate, an important issue is whether to base analysis on survey matches only—as SSA analysts have done previously—or to combine matches and nonmatches.

Short, Kathleen S.  “The Relationship between Monthly and Annual Income.”  Housing and Household Economic Statistics Division, U.S. Census Bureau, October 26, 1990.

This paper examines the relationship between income collected for one month and annual income.  At the time, the income reference period for the NHIS was the month before the interview, and the interview collected dollar amounts for several income sources.  However, for many analyses, annual income is the preferred measure because it avoids problems with seasonality, covers a sufficiently long period to establish well-being—such as poverty status—and allows for comparability among various surveys.  Using the SIPP to simulate the NHIS, predictors of annual income were derived from monthly income, as though monthly income were the only available information.  Predicted income was then compared with “actual” annual income to assess the reliability of predicting annual income from a single month’s income.  First, the author examined a naïve estimator of annual income obtained by multiplying a month’s income by 12.  The analysis suggests that this simple estimator may be reasonable for monthly income below some given level but that the relationship between annual and monthly income is not linear.  For example, very large monthly income typically does not result in very large annual income.  The author also fit a regression equation predicting total personal income.  The model included person-month income, monthly dummy variables representing the month in which each amount was received, and demographic indicators.  The predicted values from the model follow much more closely the pattern of annual income calculated for each person than does the simple estimator.

The author also examined the effect of removing outliers on the prediction of annual income and investigated the lower end of the income distribution, concluding that there is some nonlinear relationship between monthly and annual income.  The author observed statistically significant differences in income by month, indicating that months matter in measuring income.  Examination of the lower end of the income distribution suggests that classification of persons as poor based on monthly income overstates the number of persons in poverty by almost 3 percent.  More accurate classification is possible for persons with specific characteristics, such as those not working or those receiving income from government programs.
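The naïve estimator examined in the paper, and the nonlinearity that undermines it at high monthly incomes, can be illustrated with invented figures:

```python
def naive_annual(month_income):
    """The simple estimator examined in the paper: 12 times one month's income."""
    return 12 * month_income

# A stable earner ($3,000 every month) is annualized exactly.
stable = naive_annual(3000)              # true annual income is 36,000

# A one-time $10,000 windfall in the sampled month is counted 12 times,
# illustrating the nonlinearity the author documents: a very large monthly
# income does not typically imply an equally large annual income.
windfall = naive_annual(3000 + 10000)    # true annual income is 46,000
overstatement = windfall - 46000         # annualized figure is 110,000 too high
```

The same asymmetry runs the other way at the bottom of the distribution, which is why classifying poverty from a single month's income overstates the poverty count.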

Smeeding, Timothy M., and Daniel H. Weinberg.  “Toward a Uniform Definition of Household Income.”  Review of Income and Wealth, series 47, no. 1, March 2001, pp. 1-24.

This article attempts to provide a unified framework for aggregating income types to create an income definition that enables researchers to make valid comparisons across nation states.  An examination of several national household income surveys shows that it is nearly impossible to quantify all elements of any new comprehensive income definition in a way that expedites comparisons.  The authors hope that their framework—a combination of national income-based approaches and a microdata perspective—illuminates the differences in current practice and allows researchers to assess the effect of those differences on income distribution measures.  The authors also review theoretical approaches to income definition, present recommendations for constructing a new definition of income, and discuss the feasibility of collecting sufficient data to create comparable international measures.

Susin, Scott.  Discrepancies between Measured Income in the American Housing Survey (AHS) and the Current Population Survey (CPS).  Final Report.  U.S. Census Bureau, March 27, 2003.  www.census.gov/hhes/income/papers.html.

The CPS measurement of income is more detailed than that of the AHS, especially with respect to non-wage income.  The two surveys also use different reference periods, with the CPS asking about the previous calendar year and the AHS, which is conducted late in the year, asking about income for the previous 12 months.  Average household income is 9 percent lower in the AHS than in the CPS.  Family earnings are about the same, while non-wage income is 32 percent lower because of the failure of many respondents to report non-wage income.  The discrepancy has widened over time, especially since 1995.  Underreporting increases with the number of adults in the household, indicating that the CPS’s practice of asking each person about income makes a difference.  The largest sources of underreported income are interest, dividends, Social Security, and pensions.

Those with business income in the AHS report 49 percent more earnings than in the CPS, perhaps reflecting self-employment income reported on the wrong line of the survey.  Reanalysis of a 1991 AHS experiment indicates that, compared to a paper instrument (also administered by telephone), CATI reduces non-wage income by $308 on average and has a particularly large effect on families receiving business income.  Finally, the CPS counts several sources of income not counted by the AHS, including educational and financial assistance, which represents roughly 10 percent of the gap in non-wage income.

Turek, Joan. “Measuring Income on Surveys: Content and Quality, an Overview.”  Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, August 1, 2005.

This report describes the income data collected in six federal surveys.  Three of the surveys are designed and conducted by the Census Bureau: the SIPP, the Annual Social and Demographic Supplement to the CPS (March CPS), and the ACS.  Three surveys are designed by the U.S. Department of Health and Human Services: the MEPS sponsored by the Agency for Healthcare Research and Quality, the NHIS sponsored by the National Center for Health Statistics, and the MCBS sponsored by the Centers for Medicare and Medicaid Services.  Of these surveys, only the SIPP has as its mandate the collection of income data.  The main purpose of the other surveys is to collect employment and/or health information.  Currently, the March CPS provides official estimates of income, poverty, and health insurance status.  The paper also presents available findings on the quality and comparability of the collected data. 

Turek, Joan.  “Poverty Measures from the Current Population Survey (CPS) and the Consumer Expenditure Survey (CE): Why Do They Differ?”  Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, Summer 2001, Joan.Turek@hhs.gov.

Official measures of poverty are obtained from CPS data, although poverty measures can also be constructed from other surveys, such as the CE.  Poverty rates estimated from the CE, however, are significantly higher than those estimated from the CPS.  The author examines the reasons for the differences and calls for caution when using income-based measures obtained from the CE.  Differences in income reporting on the two surveys, particularly underreporting of income in the CE, have a dramatic influence on reported poverty rates.  The author finds significant underreporting of income in the CE, which captured approximately 86 percent of the aggregate income captured by the CPS in 1990 and 83 percent in 1996.  In addition, the CE captured less aggregate income for types of income typically received by people who are better off.  For example, in 1996, the CE captured about 28 percent of the aggregate property income (interest, dividends, rents, royalties, estates, and trusts) reported in the CPS.  As a result, respondent units in the CE were classified as poor when they were not.  Therefore, poverty rates from the CE are inflated as a result of income underreporting.  Similarly, comparisons of CE units that use income as a classifying variable are also misleading.

Turek, Joan, Gabrielle Denmead, and Brian James.  “Poverty Estimates in the ACS and Other Income Surveys: What Is the Impact of Methodology?”  Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, Winter 2004, Joan.Turek@hhs.gov.

For users to take full advantage of ACS data, they need to be aware of methodological differences between the ACS and other surveys.  The authors examine three features of the ACS that differ from the Annual Social and Economic Supplement to the CPS and the Decennial Census Long Form Survey (Long Form): a rolling sample, a rolling reference period, and CPI adjustments to the (rolling) income data (the ACS uses these adjustments to approximate fixed sampling and reference periods).  The authors use the 1996 SIPP panel to construct simulated ACS, CPS, and Long Form estimates for 1998.  They replicate each survey’s sampling, reference period, weighting, and CPI adjustments as accurately as possible.  They change each feature in turn to pinpoint each feature’s respective contribution to differences in estimates but obviously cannot control for other differences attributable to the number of income questions, recall periods, or family relationship measures.  The authors’ tests show that the ACS rolling sample for 1997–1998 yields a higher estimate of poverty than the fixed samples and reference periods for the 1998 CPS and Long Form.  This finding holds even with CPI adjustments and an adjustment for SIPP panel attrition, which partially offsets the measured differential.  Given that the ACS rolling sample is lagged as compared to true calendar year income, the authors’ result could reflect an understatement by the ACS of increases in real income over the lag, although use of a two-year CPS average moderates this effect.

U.S. Census Bureau.  Guidance on Differences in Income and Poverty Estimates from Different Sources.  U.S. Census Bureau, March 12, 2007.  http://www.census.gov/hhes/www/income/newguidance.html.

This Census Bureau Web site offers guidance on income and poverty estimates from different sources and contains a chart of which data source to use for each purpose and geographic level; a fact sheet on the differences between CPS ASEC and ACS data for income and poverty; and a comparison of household income from ACS 2005 and CPS ASEC 2004–2005 averages.  It also contains background information on income and poverty estimates from five Census Bureau national household surveys and programs:  (1) the CPS ASEC, (2) ACS, (3) SIPP, (4) 2000 Decennial Census Long Form, and (5) Small Area Income and Poverty Estimates program.

U.S. Census Bureau.  “Comparability of Current Population Survey Income Data with Other Data.”  Washington, DC: U.S. Census Bureau, March 9, 2005, http://www.census.gov/hhes/www/income/compare1.html.

This article compares CPS data with other data sources.  The CPS is a cross-sectional survey while the SIPP is a longitudinal survey.  Generally, the two surveys define income the same way except for a few types of interest, educational assistance, and lump-sum payments included in the SIPP but not in the CPS.  Self-employment income is also defined and measured differently in the two surveys.  The BEA produces personal income statistics mainly derived from business and government sources.  The aggregates obtained from these sources are more complete than the data collected from household samples.  Farm income data published by the Census Bureau are not directly comparable to data published by the U.S. Department of Agriculture.  The income data published by the Census Bureau are also not directly comparable to tax return data because of IRS filing and reporting requirements and other factors.

U.S. Census Bureau.  “Differences between the Income and Poverty Estimates from the American Community Survey and the Annual Social and Economic Supplement to the Current Population Survey.”  U.S. Census Bureau, August 19, 2004, www.census.gov/hhes/income/factsheet081904.html.

This fact sheet outlines the differences between the ACS and the CPS.  The ACS produces estimates for cities and, eventually, for areas as small as census tracts, while the CPS supports estimates only down to the state level.  The sample size for the ACS is about 3 million households; the CPS sample is about 100,000 households.  The ACS is mandatory; the CPS is voluntary.  The ACS covers the household and group quarters populations, whereas the CPS covers the civilian noninstitutionalized population.  The ACS asks about income for the previous 12-month period, but the CPS asks about calendar year income.  The ACS uses a series of eight questions to ask about income, whereas the CPS asks about more than 50 sources of income.  The ACS adjusts income estimates for inflation; the CPS does not.

U.S. Department of Commerce.  SIPP Quality Profile.  SIPP Working Paper Number 230, 3rd Edition.  U.S. Census Bureau, 1998.

Wage and salary earnings are the main components of income.  The SIPP estimate amounted to 91 percent of the independent NIPA estimate in 1984 and 92 percent in 1990.  The CPS estimate was 97 percent of the NIPA benchmark in both years, although the number of earners estimated from the SIPP was higher than from the CPS in both years.  The paper speculates that the CPS has an advantage over the SIPP because the latter is conducive to reports of “take-home pay.”  The SIPP and CPS self-employment estimates fall far short of the NIPA benchmark, but both are far greater than individual tax returns.  It is difficult to compare the SIPP and CPS estimates because they use different concepts of the “draw” that people take to meet personal expenses.  The SIPP estimates were lower than the CPS estimates.  Evaluation studies have consistently shown that estimates of property income are particularly poor.  Respondents have difficulty with the definition of terms.  Much of the observed difference between the SIPP and CPS estimates results from different methods of data collection.  The SIPP estimates generally exceeded the CPS estimates.  The SIPP produced higher estimates of income from Social Security, railroad retirement, and SSI than the CPS and was close to benchmarks.  Estimates for AFDC and other public assistance were well short of benchmark, although such income is often misclassified as general welfare.  The SIPP estimates were about 84 percent of benchmark for both unemployment income and veterans’ payments.  The SIPP was superior to the CPS in estimating pensions in 1984, but apparently not in 1990.  The SIPP also exceeded CPS estimates in child support and other sources of income.

U.S. Department of Housing and Urban Development.  “American Housing Survey: A Quality Profile.”  Rameswar P. Chakrabarty, assisted by Georgina Torres.  Current Housing Reports H121/95-1.  Washington, DC: HUD, July 1996.

The AHS estimates of total income are lower than the independent estimates calculated from NIPA, the SSA, and the Veterans Administration.  They are also lower than the CPS for every category other than self-employment income.  The CPS is likewise lower than independent estimates but is closer than the AHS.  The CPS may be closer to the independent estimates because of differences in income questions and the timing of both the CPS and AHS.  The CPS asks more detailed and extensive questions about income sources and amount by source than does the AHS.  Moreover, the CPS ASEC is administered during income tax season, when respondents are more aware of non-wage income such as interest, dividends, and the like.

The report also compares AHS and CPS poverty data between 1985 and 1993, noting three procedural differences between the surveys and subsequent impacts on poverty-level reporting.  As of 1989, the AHS uses a set of monthly moving poverty thresholds based on 12 sets of poverty thresholds for the 12 months before the interview.  The thresholds were intended to align the poverty cutoffs more closely with how income data are collected.  However, the result of the procedural change has gone unmeasured.  In 1993, the Census Bureau revised the non-wage income section of the AHS questionnaire in order to capture income sources commonly reported in the CPS but not previously specified in the AHS.  The percent of households reporting non-wage income increased, but median non-wage income dropped between 1991 and 1993 from $7,400 to $6,212.  Moreover, in 1993, the definition of lodgers in the household was expanded to include all persons not related to the householder who paid rent or part of the household’s housing costs.  This question change has led to an increase in the percentage of households reporting rental income.

Vaughan, Denton R.  “Reflections on the Income Estimates from the Initial Panel of the Survey of Income and Program Participation.”  Studies in Income Distribution, Social Security Administration, Office of Research and Statistics, SSA Pub. No. 73‑11776 (17), May 1993.

This report reviews the quality of SIPP cross-sectional income estimates.  The author draws nine principal conclusions.  (1) The SIPP has achieved substantial gains over the CPS in the measurement of public and private transfers.  (2) SIPP measures of wage and salary earnings are broadly similar to those available from the CPS, but evidence suggests that the SIPP (a) identifies more recipients who do not work full-time year-round and (b) presents a more valid representation of the population of full-time year-round wage and salary recipients.  Accordingly, the SIPP improves the representation of the relationship between annual work experience and annual wage and salary earnings.  (3) Clear evidence indicates that SIPP estimates of property income receipt are substantially more complete for the principal sources of property income than comparable CPS estimates.  (4) The SIPP has materially reduced the impact of item nonresponse.  As a result, the percentage of aggregate income attributable to imputation is approximately half that of the CPS.  (5) The SIPP’s subannual wage and salary amounts appear to be slightly biased relative to CPS estimates.  (6) SIPP estimates of unemployment compensation show little if any improvement in completeness over CPS estimates.  (7) While public assistance income is more fully reported in the SIPP than in the CPS, AFDC estimates still appear to be subject to misclassification.  (8) Income from workers’ compensation and associated sources remains underreported.  (9) Property income aggregates remain well below independent estimates, especially interest income.  The author suggests measures for improving SIPP income estimates.

Vaughan, Denton R.  “Errors in Reporting Supplemental Security Income (SSI) Recipiency,” in Reports from the Site Research Test, edited by Jan Olson.  U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation, Washington, DC, 1980.

As with AFDC, this study shows that misclassification was the principal reason for underestimating the prevalence of SSI in the site research test sample of known recipients.  Recipients most frequently confused SSI with Social Security benefits, with misclassification most likely to occur when a person received only SSI.  Such individuals had higher-than-average payments and were most likely to be over age 65.  Given the prevalence of dual recipients among the elderly population and dual recipients’ smaller SSI benefits, surveys affected by the misclassification problem will produce biased estimates of the SSI population by age and will underestimate benefits to a greater extent than they underestimate recipients.  Subsequent research shows a tendency for new entrants on the Social Security Disability Insurance (DI) rolls to misclassify their Social Security benefits as SSI because they often begin receiving benefits under SSI while awaiting Social Security DI benefits.

Vaughan, Denton R.  “Errors in Reporting Supplemental Security Income Recipiency in a Pilot Household Survey.”  Proceedings of the American Statistical Association, Section on Survey Research Methods. Alexandria, VA: American Statistical Association, 1978, pp. 288-293.

This paper reports on the pilot test and measurement of SSI conducted in five cities in fall 1977.  The author used program records to select a sample of SSI recipients living in four of the five cities.  In order to be certain about the exact nature of the recipiency reporting errors in the survey, the author made a case-by-case comparison of the survey and administrative records for SSI sample members tagged as potential nonreporters.  The comparison defined “true” SSI nonreporters as sample members included on the household roster but not identified in the survey as SSI recipients.  Misclassified cases were defined as sample members not identified as SSI recipients on the questionnaire but reporting income from some other source in the amount of their actual SSI payment.  The recipiency reporting error rate was about 13 percent; the nonreporting rate was less than 4 percent.  The misclassification rate was slightly less than 10 percent.  Therefore, the SSI income amount went completely unreported on the questionnaire in only about a quarter of the apparent nonreporter cases.  About three-quarters of the nonreporter cases had SSI reported on the questionnaire as some other type of income.  For misclassification errors, somewhat more than 80 percent were reported as one of three forms of Social Security.

Vaughan, Denton R. (with K. Goudreau and H. Oberheu).  “An Assessment of the Quality of Survey Reports of Income from the Aid to Families with Dependent Children (AFDC) Program.”  Journal of Business and Economic Statistics, April 1984, pp. 179-186.

This multistate record check study establishes the importance of part-period recipients in the phenomenon of nonreporting of means-tested transfers with substantial turnover.  It also deals with the impact of misclassification on reported aggregates for an important means-tested source.  It shows substantial state-to-state variation in misclassification rates and demonstrates that the survey (either directly or when misclassification was taken into account) captured approximately 90 percent of aggregate AFDC benefits received by the test sample.

Vaughan, Denton R. (with Bruce Klein).  “Validating Recipiency Reporting of AFDC Mothers in a Pilot Household Survey,” in Reports from the Site Research Test, edited by J. Olson.  Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, 1980.

The paper shows the importance of misclassification errors affecting recipiency reports of AFDC in the site research tests of the SIPP development program.  Using matched program records for known AFDC cases, the authors demonstrate that misclassification was more important than nonreporting of recipiency in the overall underestimate of the number of AFDC cases.

Vaughan, Denton R. (with C. Whiteman and C. Lininger).  “The Quality of Income and Program Data in the 1979 ISDP Research Panel: Some Preliminary Findings.”  The Review of Public Data Use, June 1984. 

This paper establishes a pattern of SIPP measurement characteristics in comparison with the March CPS that was evident from the Income Survey Development Program (ISDP) Research Panel, which preceded the SIPP program.  The ISDP Research Panel obtained almost universally greater recipiency estimates, especially for property income, and much lower item nonresponse rates for income recipiency and, to a lesser extent, amounts.  At the source level, ISDP aggregates tended to exceed CPS aggregates, with the exception of wages, salaries, and interest income.

Vaughan, Denton R., Charles A. Lininger, and Robert E. Klein.  “Differentiating Veterans’ Pensions and Compensation in the 1979 ISDP Panel.”  Proceedings of the American Statistical Association, Section on Survey Research Methods.  Alexandria, VA: American Statistical Association, 1983, pp. 191-196.

This study explores possible ways to differentiate veterans’ pensions from compensation.  It relies on attributes of each payment type (for example, disability ratings, death of a spouse while recipient was in the service, or death from a service-connected cause) to determine recipients’ awareness that they are recipients of a pension or compensation payment.  Later items added to the CPS and SIPP help directly identify pension recipients by asking whether the Department of Veterans Affairs required an individual to respond to an income questionnaire.

Waldo, Dan.  Income and Asset Measurement in the Medicare Current Beneficiary Survey.  Centers for Medicare and Medicaid Services, August 25, 2005.

This study uses multiple surveys to examine the measurement of income in the MCBS and finds respondents reluctant to answer questions about specific sources of income.  As a result, the author imputes 30 percent of the total income data by using hot deck imputation.  Response rates were higher for items prominent among lower-income households, such as Social Security, SSI, and pensions, but lower for items such as bonds, dividends, and interest.  Overall, the paper finds that lower-income households report income more accurately when income comes from only a few sources.  In addition, data from tax returns are more reliable for higher-income households but may not be accurate for lower-income households, as the latter may be exempt from filing.  The MCBS staff now uses a hybrid income measure that draws on both survey and tax return data.  The staff assigns each respondent an income value, expressed as a multiple of the poverty threshold, based on the hybrid income measure.
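As an aside, the hot deck procedure this entry refers to can be sketched in a few lines.  The sketch below is a generic illustration in Python, not the MCBS production routine; the imputation cell variables (age group, region) and field names are hypothetical.

```python
import random

def hot_deck_impute(records, cell_keys, income_field="income", seed=0):
    """Fill missing income values by borrowing a reported value
    from a randomly chosen donor in the same imputation cell."""
    rng = random.Random(seed)
    # Group reported (donor) values by cell, e.g. (age_group, region).
    donors = {}
    for r in records:
        if r[income_field] is not None:
            cell = tuple(r[k] for k in cell_keys)
            donors.setdefault(cell, []).append(r[income_field])
    # Replace each missing value with a random donor from its cell.
    for r in records:
        if r[income_field] is None:
            cell = tuple(r[k] for k in cell_keys)
            if donors.get(cell):
                r[income_field] = rng.choice(donors[cell])
    return records
```

In practice, cells are defined from characteristics correlated with income so that the donor's value is plausible for the recipient; a cell with no donors would be collapsed with a neighboring cell, a refinement omitted here.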

Weinberg, Daniel H.  “Income Data Quality Issues in the CPS.”  Monthly Labor Review, June 2006, vol. 129, no. 6, pp. 38-45.

This paper focuses on CPS ASEC questionnaire design, data collection, and data processing and suggests areas for improvement and issues for future research.  The CPS collects or imputes nearly all of the income components recommended by the Canberra Group.  In actual collection, response rates to the CPS are usually about 92 or 93 percent.  Hot deck imputation is used for missing data.  Some have said that the CPS underreports income, especially government transfers, property income, and self-employment income.  Another difficulty with the CPS lies in how to value non-cash income, especially employer-provided benefits such as health care.  The article identifies the valuation of non-cash income as an area in need of more work and suggests collecting more information on other income sources such as fringe benefits and interhousehold transfers, reducing item nonresponse, and developing additional probes for income sources with notable misreporting.

Weinberg, Daniel H., Charles T. Nelson, Marc I. Roemer, and Edward J. Welniak, Jr.  “Fifty Years of U.S. Income Data from the Current Population Survey: Alternatives, Trends, and Quality.”  The American Economic Review, vol. 89, no. 2, Papers and Proceedings of the One Hundred Eleventh Annual Meeting of the American Economic Association, May 1999, pp. 18-22.

The U.S. Census Bureau has been computing income statistics annually since 1947.  Until 1980, the Census Bureau gradually increased the number of income questions in the CPS from 2 to 11.  Then, in 1980, the survey underwent a major overhaul and started to ask respondents about over 50 sources of income.  The Census Bureau reports on 17 definitions of income based on various combinations of money income, benefits, and so forth.  Wages, Social Security, SSI, veterans’ payments, and pensions are all reliably estimated.  Reporting of property income and unemployment compensation has also improved greatly.  However, the remaining income sources, such as military retirement, rents, and royalties, have seen declines in reporting accuracy relative to benchmarks.

Wheaton, Laura L.  “CPS Underreporting Across Time.”  Final Deliverable.  Memorandum to Joan Turek and Reuben Snipper.  The Urban Institute, March 5, 2007.

The work presented here updates and expands upon the research reported in Wheaton and Giannarelli (2000), which found substantial underreporting of transfer program income in the CPS.  Recipients and benefits identified in the CPS and SIPP are compared to targets developed from administrative data for each of several transfer programs.  The extent to which the CPS recipient and benefit data are allocated is also examined.  The analysis of CPS data covers calendar years 1993 through 2004.  The analysis of SIPP data covers calendar years 1997-1998 and 2001-2002.  The memorandum concludes by showing the effects of TRIM3’s correction for underreporting on poverty estimates for 2004.

Wheaton, Laura, and Linda Giannarelli.  “Under-reporting of Means-Tested Income in the March CPS.”  Proceedings of the American Statistical Association, Section on Social Statistics. Alexandria, VA: American Statistical Association, 2000, pp. 236-241.

This paper examines the underreporting of means-tested transfer benefits in the CPS and finds a large decline in the portion of AFDC or TANF benefits captured by the CPS from 1993 to 1998.  Possible reasons for the decline include confusion or stigma.  Food stamp reporting has remained about constant.  The amount of SSI reporting has fluctuated, though for no known reason.  The Census Bureau uses allocations and imputations to attempt to account for underreporting.  Another way to correct for underreporting is through microsimulation, which steps through the CPS one household at a time, performing the same steps that a caseworker would perform in determining program eligibility and benefits for household members.  The simulation captures 90 percent of AFDC/TANF benefit dollars and 94 percent of food stamp and SSI benefit dollars.  Correction for underreporting of AFDC/TANF and SSI has a substantial effect on the estimated number of persons removed from poverty through means-tested cash transfers and an even greater effect on the estimated extent to which these programs reduce the poverty gap.  The reduction in the poverty gap from these programs appears 36 percent higher after correction for underreporting.  The reduction in the poverty gap from food stamp benefits appears 57 percent higher after correction for underreporting.
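The caseworker-style microsimulation this entry describes can be illustrated schematically.  The income limit, maximum benefit, and reduction rate below are invented placeholders, not the actual TRIM3 program rules, and real eligibility logic involves many more tests (assets, family composition, state rules).

```python
def simulate_benefit(household, income_limit, max_benefit, reduction_rate):
    """Apply a stylized caseworker rule to one household:
    eligible if countable income is under the limit; the benefit is
    the maximum grant reduced by a share of countable income."""
    income = household["countable_income"]
    if income >= income_limit:
        return 0.0
    return max(0.0, max_benefit - reduction_rate * income)

def run_microsimulation(households, **rules):
    # Step through the survey file one household at a time, as a
    # caseworker would, and attach a simulated benefit to each record.
    return [dict(h, benefit=simulate_benefit(h, **rules)) for h in households]
```

Summing the simulated benefits over all households is what allows a model of this kind to be compared against administrative totals and used to correct survey underreporting.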

Index

American Community Survey

  • Bishaw and Stern 2006
  • Government Accountability Office 2004
  • Lamison-White 1997
  • Nelson 2006
  • Nelson and Doyle 1999
  • Posey and Welniak 1998
  • Posey, Welniak, and Nelson 2003
  • Turek 2005
  • Turek, Denmead, and James 2004
  • U.S. Census Bureau 2004

American Housing Survey

  • Susin 2003
  • U.S. Department of Housing and Urban Development 1996

Attrition

  • Cohen, Machlin, and Branscome 2000
  • Czajka, Mabli, and Cody 2008
  • Fitzgerald, Gottschalk, and Moffitt 1998a
  • Fitzgerald, Gottschalk, and Moffitt 1998b
  • Kapteyn et al. 2006
  • Kashihara and Ezzati-Rice 2004
  • Lamas, Tin, and Eargle 1994

Benchmark estimates

  • Atrostic and Kalenkoski 2002
  • Coder and Scoon-Rogers 1996
  • Meyer and Sullivan 2003
  • Meyer, Mok, and Sullivan 2007
  • Roemer 2000
  • Roemer 1999
  • U.S. Department of Commerce 1998
  • Vaughan 1993
  • Weinberg et al. 1999
  • Wheaton 2007
  • Wheaton and Giannarelli 2000

Comparisons of income estimates

  • Banthin and Selden 2006
  • Clark et al. 2003
  • Czajka, Mabli, and Cody 2008
  • Denmead and Turek 2005
  • Denmead, Turek and Adler 2003
  • Gouskova and Schoeni 2007
  • Martini 1997
  • Meyer, Mok, and Sullivan 2007
  • Nelson 2006
  • Posey, Welniak, and Nelson 2003
  • Roemer 2002
  • Roemer 2000
  • Susin 2003
  • U.S. Department of Commerce 1998
  • U.S. Department of Housing and Urban Development 1996

Comparisons of survey designs

  • Nelson 2006
  • Turek 2005
  • Turek, Denmead, and James 2004
  • U.S. Census Bureau 2007
  • U.S. Census Bureau 2004

Consumer Expenditure Survey

  • Bavier 2008
  • Ferraro and Paulin 1994
  • Garner and Blanciforti 1994
  • Henry and Day 2005
  • McGrath 2005
  • Meyer and Sullivan 2003
  • Paulin and Ferraro 1994
  • Paulin and Sweet 1996
  • Schwartz and Paulin 2000
  • Turek 2001

Correction for underreporting

  • Beebout 1977
  • Wheaton 2007
  • Wheaton and Giannarelli 2000

Current Population Survey

  • Alternative Measures of Income and Poverty [website]
  • Atrostic and Kalenkoski 2002
  • Banthin and Selden 2006
  • Bavier 2008
  • Bishaw and Stern 2006
  • Bound and Krueger 1991
  • Clark et al. 2003
  • Coder and Scoon-Rogers 1996
  • Czajka, Mabli, and Cody 2008
  • Davern et al. 2005
  • Davern et al. 2004
  • Denmead and Turek 2005
  • Denmead, Turek, and Adler 2003
  • Gouskova and Schoeni 2007
  • Henry and Day 2005
  • Hurd, Juster, and Smith 2003
  • Koenig 2003
  • Lamas, Tin, and Eargle 1994
  • Loomis and Rothgeb 2005
  • Martini 1997
  • Meyer and Sullivan 2003
  • Meyer, Mok, and Sullivan 2007
  • Moon and Juster 1995
  • Moyer 1998
  • Nelson 2006
  • Reichert and Kidelberger 2001
  • Roemer 2002
  • Roemer 2000
  • Roemer 1999
  • Ruser, Pilot, and Nelson 2004
  • Susin 2003
  • Turek 2005
  • Turek 2001
  • Turek, Denmead, and James 2004
  • U.S. Census Bureau 2005
  • U.S. Department of Commerce 1998
  • U.S. Department of Housing and Urban Development 1996
  • Vaughan 1993
  • Weinberg 2006
  • Weinberg et al. 1999
  • Welniak 1986
  • Wheaton 2007
  • Wheaton and Giannarelli 2000

Decennial Census

  • Alternative Measures of Income and Poverty [website]
  • Clark et al. 2003
  • Davern et al. 2004
  • Posey and Welniak 1998
  • Posey, Welniak, and Nelson 2003
  • Turek, Denmead, and James 2004

Event History Calendar

  • Kominski 1990

Health and Retirement Study

  • Heeringa, Hill, and Howell 1995
  • Hurd 1999
  • Hurd, Juster, and Smith 2003
  • Juster and Smith 1997
  • Kapteyn et al. 2006
  • Moon and Juster 1995

Historical data

  • Alternative Measures of Income and Poverty [website]

Imputation

  • Battaglia et al. 2002
  • Czajka, Mabli, and Cody 2008
  • Davern et al. 2004
  • Fisher (no date)
  • Juster and Smith 1997
  • Liu and Sharma 2002
  • Paulin and Ferraro 1994
  • Paulin and Sweet 1996
  • Schenker et al. 2006a
  • Schenker et al. 2006b

Income concepts

  • Bavier 2008
  • Canberra Group, The 2001
  • Hendrick et al. (no date)
  • Henry and Day 2005
  • Meyer and Sullivan 2003
  • Ruser, Pilot, and Nelson 2004
  • Smeeding and Weinberg 2001
  • U.S. Census Bureau 2005

Income Survey Development Program (see also Survey of Income and Program Participation)

  • Vaughan (with Whiteman and Lininger) 1984
  • Vaughan, Lininger, and Klein 1983

Item nonresponse

  • Atrostic and Kalenkoski 2002
  • Bruun and Moore 2005
  • Garner and Blanciforti 1994
  • Heeringa, Hill, and Howell 1995
  • Juster and Smith 1997
  • McGrath 2005
  • Moore and Loomis 2000
  • Nelson 2006
  • Paulin and Sweet 1996
  • Pleis and Dahlhamer 2004
  • Pleis and Dahlhamer 2003
  • Pleis, Dahlhamer, and Meyer 2006
  • Schwartz and Paulin 2000
  • Waldo 2005

Measurement of income

  • Banthin and Selden 2006
  • Coder 1988
  • Davern et al. 2005
  • Doyle (no date)
  • Doyle, Martin, and Moore 2000
  • Heeringa, Hill, and Howell 1995
  • Hess et al. 2000
  • Hurd 1999
  • Hurd, Juster, and Smith 2003
  • Juster and Smith 1997
  • Kominski 1991
  • Lamas, Palumbo, and Eargle (no date)
  • Loomis and Rothgeb 2005
  • Lynn et al. 2006
  • Marquis and Press 1999
  • Mathiowetz, Brown, and Bound 2002
  • Moore and Loomis 2000
  • Nelson and Doyle 1999
  • Patil and Russell 2000
  • Schwartz and Paulin 2000

Measurement error

  • Bound et al. 1994
  • Bound and Krueger 1991
  • Mathiowetz, Brown, and Bound 2002
  • Meyer and Sullivan 2003
  • Moore, Marquis, and Bogen 1996
  • Moore, Stinson, and Welniak 2000
  • Reichert and Kidelberger 2001

Medical Expenditure Panel Survey

  • Banthin and Selden 2006
  • Cohen and Machlin 1998
  • Cohen et al. 2000
  • Kashihara and Ezzati-Rice 2004
  • Turek 2005

Medicare Current Beneficiary Survey

  • Liu and Sharma 2002
  • Turek 2005
  • Waldo 2005

National Health Interview Survey

  • Denmead and Turek 2005
  • Denmead, Turek, and Adler 2003
  • Patil and Russell 2000
  • Pleis and Dahlhamer 2004
  • Pleis and Dahlhamer 2003
  • Pleis, Dahlhamer, and Meyer 2006
  • Schenker et al. 2006a
  • Schenker et al. 2006b

National Immunization Survey

  • Battaglia et al. 2002

Nonresponse bias

  • Cohen and Machlin 1998
  • Czajka, Mabli, and Cody 2008
  • Mack and Petroni 1994
  • Paulin and Ferraro 1994
  • Pleis and Dahlhamer 2004
  • Pleis and Dahlhamer 2003

Panel Study of Income Dynamics

  • Beaulé, Leissou, and Lui 2007
  • Bound et al. 1994
  • Duncan and Hill 1989
  • Fitzgerald, Gottschalk, and Moffitt 1998a
  • Fitzgerald, Gottschalk, and Moffitt 1998b
  • Gouskova and Schoeni 2007
  • Grieger, Danziger, and Schoeni 2007
  • Kim and Stafford 2000
  • McGonagle and Schoeni 2006
  • Meyer and Sullivan 2003
  • Meyer, Mok, and Sullivan 2007
  • Rodgers, Brown, and Duncan 1993

Poverty measurement

  • Bavier 2008
  • Grieger, Danziger, and Schoeni 2007
  • Moyer 1998
  • Nelson 2006
  • Turek 2001
  • Turek, Denmead, and James 2004

Reference period

  • Hurd, Juster, and Smith 2003
  • Posey and Welniak 1998
  • Posey, Welniak, and Nelson 2003
  • Short 1990
  • Susin 2003
  • Turek, Denmead, and James 2004

Reporting accuracy, based on matched administrative records

  • Bates and Pedace 2000
  • Bound et al. 1994
  • Bound and Krueger 1991
  • Duncan and Hill 1989
  • Hendrick et al. (no date)
  • Huynh, Rupp, and Sears 2001
  • Koenig 2003
  • Marquis and Moore (no date)
  • Olson 2001
  • Pedace and Bates 2000
  • Rodgers, Brown, and Duncan 1993
  • Roemer 2002
  • Scoon-Rogers 2005
  • Sears, Rupp, and Koenig 2003
  • Vaughan 1980
  • Vaughan 1978
  • Vaughan (with Goudreau and Oberheu) 1984
  • Vaughan (with Klein) 1980

Survey of Income and Education

  • Beebout 1977

Survey of Income and Program Participation (see also Income Survey Development Program)

  • Bates and Pedace 2000
  • Bavier 2008
  • Bruun and Moore 2005
  • Coder 1988
  • Coder and Scoon-Rogers 1996
  • Czajka, Mabli, and Cody 2008
  • Denmead and Turek 2005
  • Doyle (no date)
  • Doyle, Martin, and Moore 2000
  • Fisher (no date)
  • Hendrick et al. (no date)
  • Huynh, Rupp, and Sears 2001
  • Kalton and Miller 1991
  • Koenig 2003
  • Kominski 1991
  • Kominski 1990
  • Lamas, Palumbo, and Eargle (no date)
  • Lamas, Tin, and Eargle 1994
  • Mack and Petroni 1994
  • Marquis and Moore (no date)
  • Martini 1997
  • Meyer, Mok, and Sullivan 2007
  • Moore, Marquis, and Bogen 1996
  • Nelson and Doyle 1999
  • Olson 2001
  • Pedace and Bates 2000
  • Roemer 2002
  • Roemer 2000
  • Scoon-Rogers 2005
  • Sears, Rupp, and Koenig 2003
  • Turek 2005
  • U.S. Census Bureau 2005
  • U.S. Department of Commerce 1998
  • Vaughan 1993
  • Wheaton 2007
