
TANF "Leavers", Applicants, and Caseload Studies: Summary of Fall 1999 ASPE Welfare Outcomes Grantee Meeting

Contents

  1. Pre-Meeting Session with Grantees Studying Applicant and Diverted TANF Cases
  2. Cross-Project Findings and Common Administrative Measures
  3. Preliminary Findings from Surveys with Former TANF Recipients
  4. Analysis of Response Rates and Nonresponse Bias in Surveys
  5. Strategies for Achieving High Response Rates in Surveys with Former TANF Recipients
  6. Linking Administrative and Survey Data
  7. Overview of Survey Questions and Data Analysis
  8. State Discussion Session:  Next Steps

 


Pre-Meeting Session with
Grantees Studying Applicant and Diverted TANF Cases

A.  Session Overview

The meeting's first two sessions were open to states and county consortia that received Welfare Outcomes Grants to study TANF applicants and individuals who have been diverted from receiving TANF assistance. The first session examined variations in grantees' definitions of "diversion" and the possibilities for developing a common language among grantees for examining applicant and diverted TANF cases. The second session focused on methodological issues faced in previous research with similar populations and possible study design considerations for grantees undertaking similar research.

B. "Overview of 1999 Welfare Outcomes Grants Emphasizing TANF Diversion"
Matt Lyon, ASPE; Don Oellerich, ASPE

States and county consortia awarded ASPE grants to study the status of applicants and potential applicants to the TANF program have selected different population groups to define the possible "diverted" TANF population. (In this summary, states and county consortia awarded ASPE grants will be referred to as "grantees.") The variation among grantees' definitions may be attributed to their specific administrative data limitations, policy needs, and methodological preferences. However, while the diversity in definitions for applicant and diverted TANF populations contributes to the overall richness of the information on these populations, the absence of a common language for describing the groups being studied presents a challenge to cooperative research and a possible obstacle to the comparability of results.

Matt Lyon of ASPE presented a draft conceptual framework for examining the variation in grantee definitions of applicant and diverted TANF cases. This framework summarized the population groups being studied by grantees as those who:

  1. Were formally diverted through lump sum payments, mandatory applicant job searches, and policies encouraging the use of alternative resources
  2. Failed to complete the application
  3. Completed an application and were determined to be eligible, but did not enroll
  4. Were declared ineligible for non-financial reasons
  5. Receive food stamps and/or Medicaid and appear to be eligible for TANF, but do not participate in the TANF program

This conceptual framework was presented as a starting point for further discussion among grantees on developing a common language or vocabulary for describing applicant and diverted TANF cases.

Don Oellerich of ASPE opened the discussion with a review of the term "divert." At first glance, the word "divert" may be interpreted as synonymous with words such as deflect, deter, hinder, impede, and sidetrack. However, these terms describe only the outcome of diversion rather than the process by which individuals may interact with the TANF system, or the specific population being studied. Accordingly, the term "diversion" alone may not adequately define the groups being studied by grantees. For example, of the definitions presented in the conceptual framework, four grantees are using two of the definitions in their studies, four grantees are using three separate definitions, and one grantee is using four definitions. With this in mind, Oellerich argued, it may be more important to discuss diversion in terms of defining and describing subcategories. Grantees were asked to comment on the five subcategories presented in the conceptual framework. Key points in this discussion included:

  • The "failed to complete an application" and "completed, but not enrolled" categories may be very similar and, depending on the administrative intake process, in some cases may not differ at all.
  • Formal diversion may also be viewed as "redirection," and additional subcategories may be necessary to define these efforts (e.g., cases in which applicants received lump sum emergency assistance payments).
  • Some state administrative systems do not permit distinctions between the different applicant categories (e.g., definitions 2-4).
  • Failure to complete an application may be due to formal job search requirements. This fact suggests that elements of the application process may take on the characteristics of "formal diversion."
  • Since many of the individuals being studied have a history of TANF or welfare participation, informal diversion may have occurred at some point in the past. Accordingly, it may be difficult to identify at what point they were diverted from the system.
  • Grantees raised the issue of people versus events:  A person can be diverted in one event, then not complete the application in the next event, creating overlap between categories.
  • The intake process may vary between states. For example, in some states the applicant sees a screener, who may redirect an applicant to TANF assistance, food stamps, Medicaid without TANF, or an emergency services program (e.g., domestic violence services). In other systems, the applicant meets immediately with an intake officer. As a result, individuals who drop out of the application process may prove difficult to classify.
  • All of the subcategories listed assume some interaction with the TANF, food stamps, or Medicaid systems. However, they do not include cases of people who never come into contact with the system because they were diverted for other reasons (e.g., time limits, work requirements).

As the discussion among grantees progressed, discussants noted that many comments on the conceptual framework were related to the process of applying or interacting with the TANF system. Accordingly, it may be possible to build a common language based around the formal TANF program intake process. For example, a state's process may be mapped to identify decision points where individuals may be informally or formally diverted from a TANF program.

Figure 1. Sample Process Map for Identifying Applicant and Diverted TANF Cases for Study.

While state and county application processes will be unique, many of the events in the intake process were identified by all grantees as important components of their individual definitions. As a result, the first step toward defining a common language may be careful documentation of the process by which individuals interact with the TANF program and identification of the major subcategories of this process. In mapping this process, it is critical to identify when (i.e., at what point) an individual was diverted from the TANF program, and why they were diverted. The "why" may be answered by information contained in administrative records and also by surveying the opinions of applicants.

Considering individuals' interaction with the TANF program application process as a starting point for a common language does not address those individuals who never apply for assistance although they may be eligible (see Figure 1). This construct also fails to include "look-alikes," that is, those who receive food stamps and Medicaid and appear to be eligible for TANF but do not apply for, or receive, cash assistance. However, given the difficulties associated with implementing this type of research, using TANF program applications as the starting point may present the most practical approach to collecting valuable data about applicant and diverted populations.

ASPE's Julie Isaacs provided the following thoughts as a summary of this session's discussion:

  • This session has shown the difficulty of finding a common definition of diversion. Because of differences among states (in policies, application procedures, data systems), the states are not studying comparable populations. We should remember, however, that state results do not have to be comparable for the research effort to have been successful.
  • The discussion highlighted the importance of carefully documenting TANF application processes and their relationship to formal and informal diversion programs when presenting research findings.

C. Methodological Issues in Diversion Studies

The second portion of the meeting's session on conducting studies with applicant and diverted TANF cases examined methodological issues and challenges faced by previous researchers when conducting similar research.

  • "Study of Diverted W-2 Participants in Milwaukee"
    Irving Piliavin, University of Wisconsin-Madison

    The University of Wisconsin-Madison has undertaken a study of applicants to Wisconsin's W-2 cash assistance program. Between March and July 1999, an intercept survey was administered to approximately 1,200 program applicants. Interviewer assignments were staggered carefully so that when an applicant was selected, an on-site interviewer was available and the potential respondent did not have to wait for an available interviewer. Researchers dismissed the option of using administrative data records for sample selection, due to potential difficulties in locating and interviewing applicants and the corresponding impact on the study's survey response rates. In comparison, the intercept survey achieved a 99.9% response rate.

    Study participants were provided with an incentive for participating in the intercept survey and the agency provided child care during the one-hour survey interview. In addition to collecting baseline information on applicants' financial status, employment, and program participation, the intercept questionnaire also collected collateral contact information that may be used to locate applicants for a one-year follow-up survey to be administered during spring/summer 2000.

    In his comments about the intercept survey questionnaire, Dr. Piliavin stressed the importance of testing the instrument prior to administration and careful consideration of future data needs (e.g., what should happen when the survey indicates an individual has no clear source of income?).

     

  • "Study of Texas Families in Transition"
    Sandra Simon, Texas Department of Human Services

    The State of Texas, in cooperation with the Public Policy Research Institute (PPRI) at Texas A&M University, conducted a study with individuals who were redirected from TANF through the Texas Works program and stayed off TANF for six consecutive months (December 1997 through May 1998) and individuals who exited TANF for any reason and remained off TANF for six consecutive months. A random sample of individuals who were redirected from TANF was selected using the Texas Department of Human Services administrative files. A 10-minute follow-up telephone survey was conducted with this sample during June/July 1998.

    • Researchers encountered considerable difficulties locating and interviewing survey participants. As a recommendation for future research efforts, DHS mentioned the usefulness of Internet directory assistance searches and "data brokers" for retrieving recent phone and address information.
    • Survey interviews were conducted using computer-assisted telephone interviewing (CATI). It was suggested that telephone survey interviewers must be well trained in the context of the survey in order to execute the survey well. Interviewers should also be given a project-specific protocol for handling requests for help from survey participants.
    • Bilingual survey interviewing was an essential component of this research effort.

    Texas DHS plans to build upon this previous work in its upcoming ASPE-funded study of redirected Texas Works cases, individuals who receive lump sum emergency payments, and applicants who are declared ineligible for financial reasons (e.g., those who have missed an appointment). Specifically, this study will incorporate the following research approaches:

    • The survey sample will be identified using an intercept survey targeting all applicants who visit a TANF program office.
    • The intercept survey instrument will collect collateral contact information which may be used for future survey contacts.
    • Within the first few weeks following the intercept survey, an in-person follow-up interview will collect information on income, employment and expenditures. A second in-person follow-up survey will be administered approximately six months later.

     

  • "Study of Informally Diverted TANF Applicants"
    Anne Moses, SPHERE Institute

    The SPHERE Institute's study in California's San Mateo Consortium (San Mateo, Santa Clara, and Santa Cruz counties) examined TANF applicants who were informally diverted for reasons of ineligibility. Of the cases included in the study:

    • 50% did not finish the application, did not show all verification, or did not provide essential information
    • 30% withdrew their application
    • 10% did not cooperate with job requirements
    • 10% did not complete the application for other reasons

    The status of these informally diverted cases was determined using administrative data from the county's electronic case records. The SPHERE Institute is conducting two follow-up surveys with these individuals, one at six months following diversion and the other at 12 months following diversion. Of the 134 randomly selected cases, only 31 responded to the six-month follow-up survey. The Institute is exploring methods to improve survey response for this cohort in its 12-month follow-up study.

     

  • "Study of Washington's Applicant and Diverted TANF Cases"
    Debra Fogarty, Washington State Department of Social and Health Services

    Washington's Diversion Cash Assistance (DCA) program is a lump-sum grant intended to keep families from applying for TANF or State Family Assistance (SFA). If the participant enrolls in TANF/SFA within 12 months, the grant is then considered a loan, which is repaid through a deduction from the TANF grant. To evaluate DCA, Washington undertook a study that compared three types of applicant and diverted TANF cases using administrative data:

    • DCA-only cases that had not subsequently enrolled in TANF/SFA as of November 1998
    • TANF-only cases that entered TANF/SFA between November 1997 and November 1998
    • DCA-to-TANF cases that received a lump-sum grant and then enrolled in TANF/SFA between November 1997 and November 1998

    In addition to the DCA program study, Washington also began to examine the "look-alike," or naturally diverted, population and other diverted applicants who began the application process but did not receive TANF assistance. Washington has chosen to define these populations as single-parent heads of households who receive food stamps and/or Medicaid, but not TANF (for administrative data only). These populations are being studied using a combination of administrative data and survey data.

    Group discussion about the above presentations included the following questions:

    What is a sufficient response rate for follow-up surveys in these studies?

    Dr. Piliavin from the University of Wisconsin commented that he believes an 80% response rate is sufficient to minimize nonresponse bias, and anything less than 70% is problematic.

    Did Wisconsin achieve a random sample using an intercept approach?  If not, what are the implications?

    Dr. Piliavin responded that he believes Wisconsin achieved a random sample using its intercept sampling approach. According to Dr. Piliavin, the study's comprehensive record-keeping methodology should enable researchers to be certain of which individuals were included in the study.

    For Texas' upcoming study, how are in-person interviews being conducted?

    In each office (two sites), the researchers hope to complete 400 intercept interviews and then verify the collected information against administrative data within 45 days. At this point, field interviewers will contact respondents and invite them to participate in an interview. Survey participants will be treated to a meal in exchange for their cooperation. The in-person interview is not scripted; instead, question groups will be established ahead of time. It is anticipated that 80 in-person interviews (40 in each site) will be completed.

    What study design issues need to be considered when using administrative data in studies with applicants and diverted TANF cases?

    • Whether state and county administrative data systems can provide sufficient information on TANF applicants who do not ultimately receive assistance, and whether this information can be grouped into effective subcategories that describe the application's final disposition
    • How inconsistencies between survey and administrative data may be addressed
    • Whether administrative data on TANF applications and wages can be gathered from other states

 


Cross-Project Findings and Common Administrative Data Measures

A. Session Overview

Eight states and consortia ("grantees") that received FY 1998 ASPE grants to study the outcomes of welfare reform on individuals and families who left the TANF program used linked administrative data sets to track families who left assistance in late 1996 and early 1997. The results of this administrative data analysis have provided preliminary insights into employment, earnings, recidivism, ongoing program participation (e.g., Medicaid and food stamp programs), and racial differences among welfare leavers. Additionally, these administrative data have provided an opportunity to make cross-study comparisons for a subset of outcomes commonly reported by ASPE grantees.

B. "Preliminary Findings from Administrative Data on Welfare Leavers"
Matt Lyon, ASPE

Administrative data from the studies of welfare leavers have been submitted by eight of the grantees. These findings were analyzed by ASPE and presented at the meeting by Matt Lyon. (See report:  "Summary of Research on Welfare Outcomes Funded by ASPE," DHHS/ASPE, October 19, 1999.)

While the studies differed in their approaches to the research, the preliminary findings from these studies were consistent across grantees. Specifically:

  • Employment
    Between 50 and 65% of welfare leavers were working immediately following exit from the TANF program. Individual employment rates decreased slightly during the first year after exit. However, because individuals cycle in and out of employment, the share of leavers who were ever employed during the first 12 months after exit was between 65 and 75%.
  • Earnings
    Grantees used wage information from the Unemployment Insurance system to determine post-TANF wages for leavers. For the first quarter following TANF program participation, mean quarterly earnings of employed TANF leavers ranged from $2,185 to $3,868 (i.e., individuals with no earnings in a particular quarter were not included in the denominator when the average was calculated; a small illustration of this denominator choice follows this list). On average, quarterly earnings continued to rise through the first year following exit from TANF.
  • Recidivism
    Between 5 and 20% of leavers returned to TANF assistance within one quarter of exit. This increased to 13-28% after the second quarter and remained fairly constant for the remainder of the year. Between 25 and 35% of TANF leavers returned to assistance at some point during the first year following exit.
  • Medicaid and Food Stamp Program Participation
    Medicaid program participation among TANF leavers tended to decline during the first year following exit. However, during this time, Medicaid enrollment rose among the children of TANF leavers. During the fourth quarter following TANF exit, 14-40% of leavers were receiving food stamps. It is important to note that the data on Medicaid and food stamp recipiency were relatively inconsistent compared with the employment data described above. Additionally, "churning" among individuals who cycle on and off these programs appears to be significant, as evidenced by the higher shares of leavers who ever received food stamps and/or Medicaid during the year.
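Because the choice of denominator matters for interpreting these earnings figures, the following minimal sketch (in Python, using invented quarterly earnings values rather than any grantee's data) illustrates the difference between mean earnings computed over employed leavers only, as in the findings above, and mean earnings computed across all leavers, a measure discussed later in this summary.

```python
# Hypothetical quarterly earnings for ten TANF leavers in the first quarter
# after exit; zeros represent leavers with no UI-covered earnings.
earnings = [0, 0, 0, 2400, 2800, 3100, 0, 1900, 2600, 3500]

employed = [e for e in earnings if e > 0]

# Mean earnings among employed leavers only (zero earners excluded from the
# denominator, as in the findings reported above).
mean_employed = sum(employed) / len(employed)

# Mean earnings across all leavers (zero earners kept in the denominator, as
# in the proposed common measure discussed later in this summary).
mean_all = sum(earnings) / len(earnings)

print(f"Mean quarterly earnings, employed leavers only: ${mean_employed:,.0f}")
print(f"Mean quarterly earnings, all leavers:           ${mean_all:,.0f}")
```

The gap between the two figures grows with the share of leavers who have no covered earnings, which is why the denominator used should always be reported alongside the estimate.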

C. "Preliminary Analysis of Racial Differences Among TANF Leavers"
Elizabeth Lower-Basch, ASPE

Analysis of historical data and recent data collected in welfare leavers studies suggests that the remaining TANF caseload is increasingly non-White. (See report:  "Preliminary Analysis of Racial Differences in Caseload Trends and Leaver Outcomes," Elizabeth Lower-Basch.) An overview of these findings follows.

1)  Findings from National Data

National data reported to the Administration for Children and Families (ACF) by the states indicate no significant change in caseload trends by race since implementation of PRWORA in 1996. However, considerable variation in caseload trends exists between states. For some states, welfare reform resulted in a significant shift in the racial composition of the AFDC/TANF caseload from White to non-White. This shift in some state caseloads may be attributed to factors including:

  • Geographic concentration of minority cases in central cities with higher unemployment rates than surrounding areas and typically larger caseloads.
  • An increase in child-only cases, which are more likely to be non-White than other TANF cases.

2)  Welfare Leavers Studies

Welfare leavers studies examining the characteristics of individuals who have exited TANF have suggested the following trends:

  • Studies comparing welfare leavers to all TANF cases, or TANF cases that have not closed, have found that African-Americans were less likely to leave assistance than non-Hispanic Whites. The pattern for Hispanics was less consistent. Furthermore, there appears to have been an overall increase of Hispanics as a percentage of the overall TANF caseload (Figure 1).
  • Administrative data findings suggest that African-Americans leaving TANF are more likely to be employed one quarter following exit and have higher median earnings than non-Hispanic Whites.
  • African-American TANF leavers are more likely than non-Hispanic Whites to return to assistance within one year of exit.
  • White TANF recipients are more likely to have sources of support other than their own earnings (e.g., household member earnings, child support payments) that enable them to leave assistance.

In reaction to these findings, grantees added the following comments:

  • A recent study in Massachusetts found that Whites are more likely to leave the TANF program due to marriage than African-Americans.
  • In studying racial differences, Florida categorized its Hispanic community into two groups:  English and non-English speaking. Results of this analysis are forthcoming.

D. "Commonly Reported Administrative Data Measures for Leavers Surveys"
Julie Isaacs, ASPE

ASPE has developed a set of commonly reported outcomes to facilitate cross-study comparisons among grantees tracking TANF leavers through linked administrative data. It has been suggested that, when possible, ASPE grantees include data for these outcomes in their reporting. Current measures include:  employment outcomes (three measures), recidivism, and ongoing program participation (e.g., Medicaid, food stamps).

In response to requests for additional measures that are comparable across states, ASPE's Julie Isaacs presented a list of several additional measures for grantee reactions and discussion. Specifically:

Employment

  • Employment retention
  • Wage progression
  • Mean and median earnings across all leavers

Recidivism

  • Percentage of those re-entering TANF within a given month

Program Participation

  • Medicaid and food stamp participation among continuous leavers

  • Child care participation among all leavers and among employed leavers with children less than 13 years old

Grantees had a generally favorable response to most of these proposed measures, though at least one person objected to "mean and median earnings across all leavers." Grantees also provided the following suggestions for additional outcomes measures:

  • Subgroup analysis for employment measures
  • Barriers to employment
  • Child-only cases
  • Reasons for case closure

For further information, please see "A Proposed Set of Commonly Reported Administrative Data Outcomes for Leavers Studies, revised to include ADDENDUM with Additional Measures"  (HHS/ASPE, Dec. 13, 1999).

 


"Preliminary Findings from Surveys with Former TANF Recipients"

Presented by:
Karen Westra, Arizona Department of Economic Security
Jean Du, Washington State Department of Social and Health Services
Nancy Dunton, Midwest Research Institute

Preliminary results from recently completed follow-up surveys with former TANF recipients were presented by representatives from Arizona, Washington, and Missouri. These grantees requested that their results not be published pending final approval of their reports by state and federal authorities.

 


Analysis of Response Rates and Nonresponse Bias in Surveys

A. Session Overview

This meeting session focused on nonresponse in surveys with welfare leavers and possible strategies for addressing nonresponse bias. Selected grantees also presented an overview of their current research on differences between survey respondents and nonrespondents in welfare leavers surveys, and the characteristics of individuals who responded at different points during survey fielding, using different survey modes.

B. "Issues in Nonresponse Bias"
JoAnn Kuchak, Macro International

Survey nonresponse presents a research problem to the extent that individuals who do not respond to a survey differ from those who do respond with regard to key survey variables. Where this occurs, survey estimates based on the respondents alone will be biased estimates for the overall survey population. In general, assuming proper survey sampling techniques, nonresponse in surveys may be introduced by:

  • Survey refusals
  • Inability to contact respondents due to failure to locate or respondent unavailability
  • Incapacity of the intended respondent to take part in the survey for reasons such as language barrier, illness, or death

Furthermore, bias introduced by nonresponse may be exacerbated if missing responses are not randomly distributed (i.e., survey nonrespondents are systematically different from those who respond). For example, nonresponse may be spread unevenly across the survey sample, with higher levels of nonresponse concentrated among specific subgroups (e.g., former TANF recipients who do not have telephones). While differential nonresponse among population subgroups may be identified, the only effective way to minimize nonresponse bias is to achieve high survey response rates.
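As a worked illustration of this point (the figures below are invented for illustration only), the respondent-only estimate of an outcome such as employment diverges from the true full-sample value whenever respondents and nonrespondents differ:

```python
# Hypothetical figures for illustration: 75% of the sample responds, and the
# employment rate differs between respondents and nonrespondents.
response_rate = 0.75
rate_respondents = 0.68      # employment rate among respondents
rate_nonrespondents = 0.48   # employment rate among nonrespondents (unknown in practice)

# True employment rate for the full sample, averaging over response status.
true_rate = response_rate * rate_respondents + (1 - response_rate) * rate_nonrespondents

# A survey estimate based on respondents alone overstates employment here.
bias = rate_respondents - true_rate

print(f"True rate: {true_rate:.2f}; respondent-only estimate: {rate_respondents:.2f}; "
      f"bias: {bias:+.2f}")
```

With these invented numbers, the respondent-only estimate (0.68) overstates the true rate (0.63) by five percentage points, and the overstatement grows as the nonresponse rate rises.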

C. "Nonresponse Bias in Welfare Leavers Surveys"
Michael Errecart, Macro International

As more follow-up surveys with former welfare recipients are undertaken, the question of what constitutes an acceptable level of nonresponse in these surveys becomes increasingly important. At this point, surveys with welfare leavers may be considered exploratory research. Little is known about how survey respondents may differ from non-respondents. Accordingly, given the fact that the magnitude of nonresponse bias is unknown for these surveys, it is important to set high response rate standards. For other surveys, such as those conducted by the U.S. Department of Education, nonresponse rates of more than 30% have been deemed unacceptable, and rates of no more than 15-20% are preferred.

Four principal strategies may reduce nonresponse bias in surveys.

  1. Minimize Survey Nonresponse
    Survey nonresponse may be minimized by using standard and specialized survey administration techniques such as:  making multiple attempts to complete a survey interview; securing collateral information for contacting survey participants in the future; using multimodal data collection; and providing participation incentives. Additionally, the universe of eligible survey participants should be carefully defined, and ineligible cases should be dropped from response rate calculations (a small worked example of this calculation follows this list).
  2. Avoid "Cherry Picking"
    A survey should be attempted with all sample members, not just those who are easy to reach or locate. Release the sample in waves when the response rate is in question.
  3. Analyze Survey Nonrespondents
    Establish credibility for the research by attempting to explain the characteristics of nonrespondents and how these characteristics may relate to key survey variables.
  4. Use Auxiliary Data
    Use administrative data to augment information collected in surveys. For example, it may be possible to collect and compare income data from both sources.
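As a small worked example of the response rate calculation referenced in the first point (all counts below are hypothetical), ineligible cases are removed from the denominator before the rate is computed:

```python
# Hypothetical disposition counts for a released sample of 1,000 cases.
sample_size = 1000
completed_interviews = 620
ineligible_cases = 80   # e.g., deceased, institutionalized, or otherwise out of scope

# Ineligible cases are dropped from the denominator; the response rate is
# computed over eligible sample members only.
eligible_cases = sample_size - ineligible_cases
response_rate = completed_interviews / eligible_cases

print(f"Response rate among eligible cases: {response_rate:.1%}")   # 67.4%
```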

To some extent, certain statistical techniques may be used to adjust for survey nonresponse, although they can never fully address nonresponse bias (a minimal sketch of one such adjustment follows this list):

  • Imputation
  • Survey data weighting
  • Adjustments for nonresponse by stratum
  • Post-stratification
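As one hedged illustration of the adjustment-by-stratum idea listed above, the sketch below applies a simple weighting-class adjustment: respondents in each stratum (here, a hypothetical split by whether a telephone number appears in the case record) are weighted up by the inverse of that stratum's response rate so that they stand in for the stratum's nonrespondents.

```python
# Hypothetical weighting classes defined from administrative data: cases with
# and without a telephone number on file.
strata = {
    # stratum name: (sampled cases, responding cases)
    "phone_on_file": (600, 480),
    "no_phone_on_file": (400, 200),
}

# Within each stratum, respondents are weighted up by the inverse of the
# stratum's response rate so they stand in for that stratum's nonrespondents.
for name, (sampled, responded) in strata.items():
    stratum_response_rate = responded / sampled
    nonresponse_weight = 1.0 / stratum_response_rate
    print(f"{name}: response rate {stratum_response_rate:.0%}, "
          f"adjusted weight {nonresponse_weight:.2f}")
```

Weights inflated this way increase the variance of survey estimates, which is one reason such adjustments cannot substitute for a high response rate.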

D. "Analysis of Response Rates and Nonresponse Bias in Surveys"
Marilyn Edelhoch and Linda Martin, South Carolina Department of Social Services

South Carolina has completed six quarterly surveys with former welfare recipients, with survey response rates ranging from 75-80%. Approximately 80% of survey interviews were completed over the telephone, with the remainder occurring as field interviews. Using administrative data, South Carolina has analyzed the differences between survey respondents and nonrespondents for all six quarters. (See report:  "Analysis of Response Rates and Nonresponse Bias in Surveys"). In results presented for the fourth quarterly survey, the following statistical differences between respondents and nonrespondents were noted:

  • Nonrespondents were less likely to have had their cases closed for increases in earned income and were more likely to have had their cases closed because they were sanctioned or were determined ineligible for other reasons.
  • A slightly higher percentage of African-Americans responded to the survey than did Whites.

It was also noted that many of the ways survey nonrespondents differed from respondents were also correlated with factors that would have made it difficult for survey interviewers to locate former clients for the survey. This finding was supported by additional analysis which showed that survey participants who cannot be reached by telephone may be different, and possibly more disadvantaged, than those interviewed by telephone. Specifically:

  • A larger proportion of survey participants interviewed in the field left welfare because they were sanctioned than was the case among those contacted by telephone. Additionally, fewer in-person survey participants left assistance due to earned income than those participating by telephone.
  • At the time of the survey interview, survey participants responding in person were less likely to be employed than those individuals participating by telephone.
  • Survey participants responding in person were less likely to have a vehicle to use for transportation than those individuals participating by telephone.

E. "Nonresponse Analysis for Missouri's Leavers Study"
Nancy Dunton, Midwest Research Institute

The State of Missouri, in cooperation with the Midwest Research Institute, has completed a two-year follow-up survey with former TANF recipients. A response rate of 74.5% was achieved using a mixed-mode survey approach with computer-assisted telephone interviewing (CATI) and in-person interviewing using cellular telephone technology. MRI presented a preliminary analysis of the survey's response pattern and the characteristics of survey nonrespondents using administrative data.

Approximately 83% of survey respondents completed a survey interview by telephone when contacted by an interviewer; 11% completed an interview by calling the project's toll-free hotline; and 5% completed interviews in the field via cellular telephone. When broken down by the time it took to find the respondents, the first third was interviewed very quickly. The second third took longer, while the last third was only reached following an additional state administrative data match.

Despite the survey's relatively high response rate, survey nonrespondents were found to differ from survey respondents on several key measures. Generally, survey nonrespondents:

  • Had lower levels of educational attainment
  • Received food stamps, cash assistance, and/or Medicaid for fewer months following exit
  • Had lower mean and median quarterly earnings

F. "Discussion of Issues Related to Survey Response Rates and Nonresponse Bias"
Patricia Ruggles, ASPE

In summary, Patricia Ruggles provided the following comments:

  • While high survey response rates are a critical element in follow-up surveys with welfare populations, the degree to which the survey's sample is representative is equally important. For example, even if the sample looks like the overall survey population in terms of key characteristics, systematic biases unrelated to sample member demographics can still be present.
  • Where possible, survey nonresponse should be minimized by planning ahead and gathering information about how to contact a survey participant; offering incentives for survey participation; and using mixed-mode survey data collection (e.g., telephone and in-person interviews).
  • It is important to randomize nonresponse as much as possible. All sample members should be pursued in a study, not just those who are easy to reach.
  • The differences between survey respondents and nonrespondents should be closely examined. If differences are discovered, statistical adjustments may be pursued.

Meeting participants also added the following observations:

  • While nonrespondents in both the Missouri and South Carolina studies appeared to be of lower socioeconomic status, given the high survey response rates in both studies, the overall impact of this bias will be small.
  • Concerns were expressed over the amount of survey data weighting in welfare leavers studies and the potential impact of this weighting on the variance of survey estimates.
  • As a caution for future research, one participant commented that when a sample is released in waves, it is important to make sure that all sample members have an equal likelihood of being contacted for the survey and that efforts to locate sample members are applied consistently.

 


Strategies for Achieving High Response Rates in Surveys with Former TANF Recipients

A. Session Overview

Achieving high survey response rates in follow-up surveys with former welfare recipients presents many challenges. In this session, grantees and survey administrators presented proven strategies for achieving high response rates in these follow-up surveys.

B. "Achieving A High Response Rate in Wisconsin's 1998 W-2 Leavers Survey"
Jan Van Vleck, Wisconsin Department of Workforce Development

Wisconsin's Department of Workforce Development, in partnership with the University of Wisconsin Survey Center (UWSC), achieved a survey response rate of 75% for a one-year follow-up survey with former W-2 program participants. This survey was administered using computer-assisted telephone interviewing (CATI) and in-person interviews using paper-and-pencil data collection.

The high survey response rate was achieved using a combination of approaches:

  • Mixed-mode survey interviewing
  • A pre-survey announcement letter with a reply form for updating contact information, accompanied by a postage-paid envelope
  • A toll-free hotline that survey participants could contact to complete a survey interview
  • Survey participation incentive payments of $25

The survey project was also augmented by an ongoing respondent tracing effort, which used the following techniques to locate and interview survey participants:

  • Address correction requests on all respondent mailings
  • CD-ROM phone disks ("POWERFINDER")
  • Telephone and Internet directory assistance searches
  • A nationwide credit bureau database check
  • Administrative record searches using automated eligibility and case management systems, the state's new-hire directory, social security data, Unemployment Insurance (UI) wage and benefit records, Wisconsin's Job Service Information System, and Department of Motor Vehicles driver's license information
  • Vital statistics (Social Security Death Index on the Internet and Wisconsin Vital Statistics)
  • Field research using current or former Department quality assurance staff

(See Jan Van Vleck's paper:  "Tracing and Representativeness of Responses")

C. "Lessons Learned in Surveying Current and Former TANF Recipients in Iowa"
Tom Fraker, Mathematica Policy Research

Iowa conducted a follow-up survey with individuals participating in its Family Investment Program (FIP). Contact information for survey participants was between 6 and 57 months old at the start of the survey period. Data collection for this survey encompassed a 13-month fielding period from July 1998 to July 1999 and was accomplished using a mixed-mode survey methodology. A survey response rate of 72% was achieved for this research effort. (See draft paper: "Surveying Current and Former TANF Recipients in Iowa".)

While a range of survey techniques were used during the fielding period, the overall survey approach may be described in terms of two categories:  standard survey techniques and specialized survey techniques. Standard survey techniques alone were applied during the first five months of the survey period and included:

  • Advance letter mailings
  • Telephone and in-person survey contacts
  • Use of two standard national databases to search for addresses and telephone numbers
  • Contacts with out-of-state and incarcerated sample members
  • Efforts to convert interview refusals into interview completions

Application of these standard techniques during the initial survey period resulted in a 48% response rate.

During the survey's final eight months, the standard survey techniques were augmented by a set of specialized survey techniques, including:

  • Enhanced techniques for mail contact using fliers and formal, personalized, hand-written messages
  • Tiered incentive payments which ranged from $0 at the outset of the survey fielding period to $50 at the end, with intermediate offers of $10, $15, and $25
  • Use of cellular telephone technology by field interviewers
  • Updated contact information from Iowa's administrative data systems
  • Specialized database searches using a proprietary national database
  • Second refusal conversion using an increased incentive ($25) and a priority mail letter requesting that a sample member contact the project's toll-free hotline
  • Extension of the survey field period to attain the targeted completion rate

In the ensuing discussion, the audience was particularly critical of the use of differential incentive amounts during the survey fielding period. Additionally, the audience expressed some concerns regarding the survey's relatively extended fielding period.

D. "Tips and Tricks for Achieving High Response Rates in Surveys with Welfare Leavers"
Tammy Ouellette, Macro International

Achieving high survey response rates on follow-up surveys with former welfare recipients is a difficult task. Successful survey implementation requires two principal strategies. (Please see "Tips and Tricks for Achieving High Response Rates in Surveys with Welfare Leavers"):

1. Make it convenient and attractive for individuals to participate in the survey.

Making it easy and convenient for individuals to participate in a survey is critical to survey success. Specific techniques may include:

  • Providing a toll-free project hotline that survey participants can contact to complete a survey interview, schedule an interview, or update contact information.
  • Offering incentives for survey participation
  • Combining survey modes such as telephone, in-person, and mail surveys
  • Offering flexible hours for participants to complete an interview and the ability to schedule an interview appointment
  • Holding periodic survey open houses at central interviewing sites where survey participants may come to complete an in-person interview

2. Take a systematic approach to locating and interviewing survey participants

Locating former welfare recipients for a survey months or even years after they have stopped receiving assistance requires a tracking approach that emphasizes maintaining up-to-date contact information during the interim period between case closure and the follow-up survey, and using multiple methods to locate survey participants whose contact information on file has become outdated. Specific techniques may include:

  • Collecting contact information for future contacts while a case is still open
  • Using tracking mailings to maintain contact with survey participants during the interim period between case closure and the follow-up survey
  • Checking other administrative databases for updated contact information
  • Researching past addresses and telephone numbers in case files
  • Investing in a national household directory on CD-ROM
  • Having the Postal Service and express mail companies assist with respondent tracking and location

 


Linking Administrative and Survey Data

A. Session Overview

Most studies funded by ASPE welfare outcomes grants use a combination of linked administrative and survey data as a tool for analyzing the life situation of former TANF participants. Linking these data sources may present issues related to comparability and validity as well as unique logistical and mechanical concerns.

B. "Issues in Linking Survey and Administrative Data"
Nandita Verma, MDRC
George Falco, New York State Office of Temporary and Disability Assistance
Anne Moses, The SPHERE Institute
Don Oellerich, ASPE

The presenters identified several issues that researchers should examine when linking administrative and survey data sources in welfare outcomes research. (See notes:  "Comparing Measures from Administrative and Survey Data Sources.") Specifically:

1. Comparability and validity across data measures

  • It is essential to define statistics that are directly comparable from administrative and survey data. For example, comparing employment and earnings statistics may present problems due to differences in reporting reference periods.
  • Statistics collected in administrative and survey data should be compared and reconciled. It cannot be assumed that statistics from the two sources are comparable.
  • Researchers should identify where it is appropriate to combine the two sources. Given the differences between sources for these statistics, researchers must consider how the data may be combined to inform research questions.

2. Logistics and mechanics of combining survey and administrative data

  • Linking administrative data across program areas such as food stamps, Medicaid and cash assistance may present difficulties. Merging large administrative data files, and collecting and analyzing state administrative data from unfamiliar files, may prove difficult tasks.

These concerns were highlighted in terms of a series of overlapping measures for which information may be collected in administrative or survey data. For example, in the case of wage reporting in administrative data (such as U.I.) and self-reports in surveys, there may be considerable variation.

However, the unanswered question remains:  Which data source is correct, or more reliable?  Don Oellerich provided a case where survey findings for earnings were higher than administrative data in some sites and lower in other sites.

Furthermore, findings from The SPHERE Institute's study, which compared TANF, food stamp, and Medicaid receipt statistics from administrative and survey data, showed:

  • Survey and administrative data on TANF receipt disagreed for 10% of cases
  • Survey and administrative data on food stamp receipt disagreed for 20% of cases
  • Survey and administrative data on Medicaid receipt disagreed for almost 30% of cases

Given the potential for discrepancies between statistics taken from administrative and survey data sources, and the fact that the ASPE grantees are using both types of data, Julie Isaacs suggested that the most desirable approach to handling these differences is for grantees to present both sets of findings in their final reports. (See also Nandita Verma's report:  "Linking Survey and Administrative Data Issues")
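To make this kind of comparison concrete, the following minimal sketch (using invented case records and identifiers, not any grantee's actual files) links administrative and survey records on a common case identifier and computes the share of linked cases whose reported food stamp receipt disagrees between the two sources:

```python
# Invented administrative records: case id -> food stamp receipt in the
# reference quarter according to the administrative file.
admin_records = {"A1": True, "A2": False, "A3": True, "A4": True, "A5": False}

# Invented survey responses for the same cases and the same reference period.
survey_records = {"A1": True, "A2": True, "A3": False, "A4": True, "A5": False}

# Link the two sources on the case identifier and count disagreements.
linked_ids = admin_records.keys() & survey_records.keys()
disagreements = sum(1 for case_id in linked_ids
                    if admin_records[case_id] != survey_records[case_id])
disagreement_rate = disagreements / len(linked_ids)

print(f"Cases linked: {len(linked_ids)}; "
      f"share disagreeing on food stamp receipt: {disagreement_rate:.0%}")
```

The same structure extends naturally to TANF and Medicaid receipt, or to earnings comparisons, once a common reference period is defined for both sources.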

C. "Using New National Data Bases for Measuring Welfare Outcomes"
Don Oellerich, ASPE
Chris Snow, ASPE

Several new national databases that may serve as a source of information for grantees studying welfare outcomes were presented. Two of these databases, listed below, together make up the expanded Federal Parent Locator Service (FPLS):

  • The Federal Case Registry, which registers all child support cases
  • The National Directory of New Hires, which contains national data from W-4 withholding forms and Unemployment Insurance records. (At this point, it has been difficult for TANF (IV-A) agencies to access the federal database because of potential confidentiality concerns; Child Support (IV-D) agencies have access to data on their own IV-D cases and may release it to IV-A agencies.)

 


Overview of Survey Questions and Data Analysis

A. Session Overview

To facilitate cross-project findings, ASPE's Julie Isaacs has established a framework for identifying similarities and dissimilarities among the survey instruments being used by grantees to measure outcomes among former welfare recipients. In her initial work, Isaacs used this framework to categorize grantee survey questions relating to food stamp recipiency; health insurance coverage; food insecurity; access to health care and health status; and knowledge of Medicaid and food stamp eligibility. (See "Monitoring Outcomes for Former Welfare Recipients:  A Review of 11 Survey Instruments".)

In this session, meeting participants were introduced to the framework and how it might be applied to three additional welfare outcomes:  child well-being, employment, and income. Following this introduction, meeting participants split into smaller topical discussion groups which further examined how grantee survey questions in these three areas may be categorized.

B. "Measuring Child Well-Being"
Marty Zaslow, Child Trends
Kathryn Tout, Child Trends
Surjeet Ahluwalia, Child Trends

In light of rapidly changing public policy, there is a need to look at the content and gaps in the current surveys as the foundation for future research. This is particularly the case when examining how surveys are measuring child well-being. It is important to consider how we are studying the well-being of children in families who have left welfare assistance, and how we are defining "child outcomes". Current studies have been inconsistent in how these outcomes have been defined. Child Trends recommended that child outcomes be defined as direct reflections of developmental status or child well-being.

In general, child outcomes may be placed into three categories:

  • Child health
  • Academic achievement and cognitive development
  • Social, emotional, and behavioral well-being

It is also important to consider intervening mechanisms that affect child outcomes, such as child care, parenting behavior, the home environment, the mother's psychological well-being, and the availability of health insurance. These variables capture children's experiences but are not considered child outcomes per se. Hypotheses about child well-being must consider both positive and negative outcomes.

It is important to benchmark results against findings from national samples; racial and ethnic subgroups in national studies can provide more specific points of comparison. Care must be taken in making any causal attributions. For example, children in families with employed mothers appear to be doing better, but it is not clear whether leaving TANF is the reason or whether some other family characteristic accounts for the difference. (See notes:  "Review of Child Outcomes in Leavers Studies" and "Exact Items from the Surveys Covering Child Outcomes" under the Child Outcomes topic category, and "Childcare Questions from 12 State/Local 'leaver' Surveys" under the Child Care topic category.)

Child Trends provided the following suggestions for future research efforts:

  • It would be beneficial to select children from particular age ranges, such as preschool and/or school age.
  • Most grantee surveys cover children's health and school performance and, to a lesser extent, behavioral and socioemotional outcomes. It would be helpful to place increased emphasis on these latter outcomes in future studies.
  • When designing future research, collaboration by research teams on the selection of items, focal child selection procedures, and timing with respect to date of exit from welfare would provide further comparability among studies.
  • Studies with a longitudinal research component would provide valuable information about how a child's well-being changes over time and in relationship to intervening mechanisms.

In the ensuing discussion, meeting participants brought up the following issues:

  • The fact that we are asking an adult to respond about a child's well-being may result in answers that reflect parental perceptions rather than fact.
  • Under what circumstances is a survey interviewer required to report suspected child abuse?  Given that a "yes" to certain survey questions does not constitute "reasonable suspicion," most telephone interviewers would not be subject to this requirement. However, if a fieldworker sees something during a home visit, they are obligated to report the situation to Child Protective Services.
  • Many grantees requested help from the experts on which child-outcomes questions to use in their surveys so that a succinct measure of child well-being might be constructed. The overall consensus among grantees was that they are not experts in child development and do not know the right questions to choose. They requested Child Trends' assistance in selecting approximately seven minutes' worth of questions that would accurately measure child well-being. Child Trends responded to grantees' requests for additional information by providing recommendations on a short set of questions that measure child well-being. (Please see "Potential Constructs and Items for a Child Well-Being Module in Leavers Studies".)

C. "Examining Employment Outcomes"
Helene Jennings, Macro International

This presentation addressed how to analyze employment outcomes measured in grantee surveys. By examining survey questions regarding current employment; job specifics (for example, seasonal, or part-time); other job issues (for example, barriers such as transportation, illness); previous jobs; other jobs; and unemployment, important similarities among surveys may be identified. (See report:  "Employment Outcomes for Former Welfare Recipients:  A Comparison of 11 Survey Instruments").

Helene Jennings began the session by asking meeting participants to think about some questions pertaining to each state's approach to assessing employment outcomes. To get the group brainstorming, she suggested thinking about what states are looking to capture, how the information will be used, whether they knew of anything that already works well or anything that does not work as well, and any results, anticipated or unanticipated. She suggested that discussion could also focus on administrative record matching, the use of guidelines or benchmarks, and methodological questions.

Helene had asked two states to share their experiences with employment outcomes:

"Employment Outcomes in the Illinois Leavers Study"
Steve Anderson, University of Illinois

Illinois's basic strategy and approach for assessing employment outcomes was to include two types of questions in their employment section:

  1. Questions about the respondent's (and spouse's) employment when they exited from TANF, and a second set of questions about the respondent's (and spouse's) current and most recent job. These questions allow comparisons between parallel questions at two points in time. (A detailed employment history since TANF exit was not included, due to time constraints and the possibility of poor respondent recall.)
  2. Questions about job stability, reasons for and consequences of job changes and unemployment, job satisfaction, and employment barriers. These questions, asked only of the respondent, were intended to track movement on and off jobs more carefully.

Combined, these questions were intended to determine whether employment had been consistent or inconsistent.

Results from previous studies that used only a single time reference show that a higher percentage of respondents worked at some point following exit from TANF, but that fewer respondents worked continuously during the exit period under study. To determine the reasons behind this turnover, Illinois used open-ended questions to ask why respondents left jobs and whether the departure was voluntary or occurred for other reasons. Respondents were also asked about the wage and hour consequences of changing jobs, along with the circumstances surrounding the job change.

The Illinois approach also probed whether job inconsistency is a result of employment barriers, such as child care, transportation, health, or many others. Illinois asked these questions directly instead of relying on indicators. The barrier of domestic abuse was treated as a separate section to determine its effect on employment. Because the telephone relationship is impersonal, these questions were designed not to be overly personal or graphic, focusing instead on relationships that affect work. Furthermore, job satisfaction was examined in order to understand why people leave jobs.

"Employment Outcomes in Massachusetts Leavers Study"
Gloria Nagle, Massachusetts Department of Transitional Assistance

Massachusetts' current research on TANF leavers includes two study cohorts. In the study of cohort 1, the state focused primarily on collecting standard employment statistics (e.g., media-friendly figures, such as the finding that 60% of TANF recipients are working). In the upcoming study of cohort 2, the analysis will be much more complex. Data will be collected on a wide range of employment activities and related concerns, to better understand the work dynamics of welfare leavers. These data, which can help us better interpret the basic employment statistics generally found in administrative datasets, will include information on:

  • How many jobs do individuals currently have?
  • How many hours a week do they usually work?
  • What types of job benefits do they have?
  • How long does it take to get to and from work, including time needed to take children to childcare?
  • Is childcare needed outside of normal business hours?
  • To what extent do problems of domestic violence, mental health and substance abuse affect employment?

Following these presentations, the session turned into an open discussion. Meeting participants raised the following questions and issues:

  • How are individuals living who do not report income or employment?  Can surveys capture illegal activity or odd jobs?  Steve Anderson said he does not ask about illegal activities and feels such activity is not accurately reported. Gloria Nagle said that a good rapport is needed to get access to information about under-the-table work, and this rapport is hard to establish over the phone.
  • Gloria Nagle concurred with a comment made earlier in the day that it would be helpful to have some commonly agreed upon expectations to measure results by, e.g. a common definition of "self-sufficiency."

D. "Calculating Total Income"
Julie Isaacs, ASPE
Howard Rolston, ACF

Administrative data alone can provide only limited information on personal income, because it does not generally include income other than earnings, and even the earnings information is limited to the earnings of the leaver herself. As a result, researchers hope to capture income from other sources in the survey data. In a review of 12 survey instruments, Julie Isaacs found that 9 of the 12 collected a measure of total household income. Of these 9, 5 calculate total household income (monthly) as a sum of various sources of income, 3 ask for an estimate of monthly income, and 1 asks for estimated annual income. All 12 grantee surveys ask about respondent earnings, and 11 ask about other household members' earnings. (See report:  "Calculating Total Income for Former Welfare Recipients:  A Comparison of 11 Survey Instruments".)
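As a minimal illustration of the sum-of-sources approach described above (the income categories and amounts below are hypothetical, not drawn from any particular grantee's instrument), total monthly household income can be built up from the components a survey collects:

```python
# Hypothetical monthly income components reported for one respondent's household.
income_sources = {
    "respondent_earnings": 950.00,
    "other_household_member_earnings": 400.00,
    "child_support": 150.00,
    "tanf_cash_assistance": 0.00,
    "ssi_or_other_benefits": 0.00,
}

# Total monthly household income is the sum of the reported components;
# sources reported as zero simply drop out of the total.
total_monthly_income = sum(income_sources.values())

print(f"Total monthly household income: ${total_monthly_income:,.2f}")   # $1,500.00
```

The usefulness of such a sum depends on how completely the survey enumerates income sources and on how the household (or financial unit) is defined, issues taken up in the discussion below.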

Howard Rolston distributed a table reporting "Combined Income from UI, AFDC, and Food Stamps." The table displayed quarterly income data, based on UI, AFDC and Food Stamp administrative data, for AFDC recipients that left welfare in six cities in the National Evaluation of Welfare to Work Strategies (NEWWS). In all cases, income measured by administrative data dropped sharply as recipients left welfare. Howard expressed his theory that these results are misleading because they do not capture other sources of income, such as earnings of others in the household. This suggestion, that too much information is missing from the administrative data, reinforced the point that it is necessary to design surveys that gather as much additional information as possible.

The session's discussion focused on examining different approaches taken by the grantees in measuring total income. A range of issues on the definition of total income and household composition were discussed. These issues included:

  • How grantees are asking respondents to report household income in surveys. For example, should respondents be requested to report income by source, and if so, what types of income categories should be included in surveys for respondents to select?
  • How should "household" be defined?  Some have used the term "financial unit"; some used "economic unit." It was agreed that it is important to capture information from other adults living in the household who share expenses, and that there must be a way to capture household composition along with data on household income.
  • In addition to being concerned with total income, should grantees also be interested in disposable income?  For example, one way to address this is to ask whether the respondent feels financially "better off" now. The issue of the Earned Income Tax Credit was also discussed.
  • How should the Federal Poverty Level be applied?  Should it be compared to cash income alone, or to cash plus non-cash income?

The question of how seasonality affects the reporting of income was also discussed. A specific example was the observation that a respondent's receipt of an income tax refund might be reported during one round of interviews but not during another. Some attendees reported changes related to seasonality, but not to the extent that might be expected.

As a conclusion to this discussion, session participants agreed that further research is needed on how total income and household composition should be measured in leavers studies. It was suggested that a "best approach" to calculating total income for respondents and their households be established. That is, what sources of income need to be identified in a survey to provide the best measure of income?  Necessarily, this requires careful examination of how a household is defined (e.g., the family unit or all individuals residing in a home).

Following the identification of this best approach, the constraints introduced by a telephone survey (e.g., time, respondent recall) should be examined and a "practical" approach to measuring total income be established. It is important to keep in mind that this practical measure will fall short of the ideal measure previously identified. However, the critical issue here will be what types of income will not be included in the measure as a result of this tradeoff. Key to this discussion will be identifying which types of income information may be reliably collected through administrative data.

 


State Discussion Session:  Next Steps

The Fall 1999 Welfare Outcomes Grantee Meeting concluded with an open forum for discussion on grantees' future technical assistance needs and next steps for their research. Grantees requested further assistance from ASPE with the following projects:

  • Developing a five-to-seven minute module of child well-being survey questions that could be administered using a telephone survey. Child Trends responded to grantees' requests for additional information by providing advice on a short set of questions which measure child well-being. (Please see "Potential Constructs and Items for a Child Well-Being Module in Leavers Studies.")
  • Locating survey questions that could capture data about respondent learning disabilities
  • Providing a standardized format for reporting survey data for a core set of statistics
  • Producing public use data files for survey results
  • Constructing additional survey grids on other measures, such as household composition and childcare
  • Establishing a survey questionnaire clearinghouse that could be used to easily access other grantees' survey instruments (see forthcoming ASPE web page on leavers and diversion studies)
  • Designing summary measures for survey statistics, which would allow some comparability and benchmarking across studies