The effective sample size is the size of a simple random sample of respondents that would yield the same precision as the complex sample design actually used for the survey. Because standard statistical formulas assume simple random sampling, the actual sample size must be replaced with the effective sample size when those formulas are used to estimate the precision of survey estimates.
The effective sample size is computed by dividing the actual sample size by a design effect that reflects the deviations from simple random sampling. Design effects may vary by subgroup (e.g., blacks versus whites) but will generally be fairly consistent across states for each subgroup. This is because in large national surveys, such as the three examined here, a similar sample design, including the number sampled from each PSU, is used in all states. Design effects will also vary by type of question; for example, respondents who live near each other (in the same sampled cluster) are likely to have similar poverty characteristics but are not likely to have similar disability characteristics.
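The computation above can be sketched in a few lines. The sample size, design effect, and proportion below are hypothetical illustrations, not values from the surveys discussed here:

```python
# Illustrative sketch: effective sample size and its use in the standard
# error of an estimated proportion. All numeric inputs are hypothetical.
import math

def effective_sample_size(n, deff):
    """Effective sample size: actual sample size divided by the design effect."""
    return n / deff

def se_proportion(p, n_eff):
    """Standard error of a proportion under the simple-random-sampling
    formula, with the effective sample size substituted for the actual n."""
    return math.sqrt(p * (1 - p) / n_eff)

n, deff, p = 1200, 1.5, 0.15            # hypothetical values
n_eff = effective_sample_size(n, deff)  # 1200 / 1.5 = 800 effective cases
print(round(n_eff, 1))
print(round(se_proportion(p, n_eff), 4))
```

Using the actual sample size of 1,200 in the standard-error formula would overstate the precision of the estimate; the substitution of the effective size (800) inflates the standard error accordingly.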
From Westat's experience with these and similar surveys, we have estimated the state-level design effects shown in Table 3 for each of the four characteristics being estimated. National design effects are higher than these because they also reflect each survey's oversampling of small states to increase the accuracy of state estimates. This assessment examines only state estimates, and is therefore concerned only with the survey design within each state.2
Design effects are a function of the average number of completed interviews for the domain of interest in each cluster. Thus, design effects for subpopulations tend to be smaller than for the entire population, assuming the subpopulations are spread fairly evenly throughout the population. Design effects for children and the elderly may therefore be smaller than those in Table 3. Because blacks and Hispanics are not evenly distributed across the population, however, their per-cluster sample sizes remain relatively large, and their design effects are not likely to be smaller than those in the table. For purposes of this assessment, we have assumed that the design effects in the table apply to all subpopulations.
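The dependence of the design effect on the average cluster take can be illustrated with Kish's common approximation, deff = 1 + (m − 1)ρ, where m is the average number of completed interviews per cluster and ρ is the intraclass correlation. This is a standard textbook approximation, not the exact method used in this assessment, and the values of m and ρ below are hypothetical:

```python
# Sketch of Kish's approximation deff = 1 + (m - 1) * rho. A subgroup
# spread evenly across clusters has a smaller average cluster take m,
# and hence a smaller design effect, than the full population.
def kish_deff(m_bar, rho):
    """Approximate design effect from clustering (Kish's formula)."""
    return 1 + (m_bar - 1) * rho

rho = 0.05                            # hypothetical intraclass correlation
print(round(kish_deff(8.0, rho), 2))  # full population: 8 completes/cluster
print(round(kish_deff(2.0, rho), 2))  # evenly spread subgroup: 2/cluster
```

With these hypothetical inputs the full-population design effect is about 1.35, while the evenly spread subgroup's is only about 1.05; a geographically clustered subgroup such as blacks or Hispanics keeps a large per-cluster take and so retains a design effect close to the full-population value.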
The CPS does not oversample within states, so there is no additional design effect from differential weighting. (The one exception is that on the March supplement Hispanics are oversampled at twice their normal rate; because they represent a small proportion of the total sample, the increase in design effect is not significant.) The same is true of the 1993 and 1996 SIPP panels. However, the 1996 SPD will oversample low-income populations, resulting in an additional design effect for analyses from that survey. Beginning with the 1995 sample, the NHIS is oversampling blacks and Hispanics, so any analyses of the NHIS will also have to incorporate that design effect. Oversampling in these surveys will also result in larger sample sizes for these subpopulations than would otherwise be observed.
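The additional design effect from differential weighting can be sketched with the standard relative-variance formula deff_w = n · Σw² / (Σw)². The weights below are hypothetical, chosen only to show that oversampling one group (which receives a smaller weight) pushes this factor above 1:

```python
# Sketch of the design effect from unequal weighting,
# deff_w = n * sum(w_i^2) / (sum(w_i))^2. Hypothetical weights:
# an oversampled subgroup selected at twice the base rate gets
# half the base weight.
def weighting_deff(weights):
    """Design effect attributable to variable weights (1.0 if equal)."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# 80 cases at base weight 1.0, plus 20 oversampled cases at weight 0.5
weights = [1.0] * 80 + [0.5] * 20
print(round(weighting_deff(weights), 3))
```

When all weights are equal the formula returns exactly 1, consistent with the text's point that surveys with no within-state oversampling incur no additional design effect from weighting.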
The sample sizes for the 1996 CPS and the 1993 SIPP panel vary across states for all of the populations of interest. Table 4 provides the minimum and maximum actual state sample sizes for each survey for each of the populations of interest. The CPS sample sizes are based on respondents to the 1996 March supplement. Sample sizes for the main CPS questionnaire are a little larger, since approximately 10 percent of respondents to the main questionnaire do not participate in the supplement; however, Hispanic respondents to the previous November's CPS are asked the supplement questions in March. Thus, for questions asked on the main questionnaire (which does not include any of the four questions used in this assessment), the CPS sample sizes will be somewhat larger than those used in this assessment. SIPP asks only those under age 70 about work disability, so for this question the minimum and maximum elderly SIPP sample sizes are 4 and 220. The appendices provide state-level detail for sample sizes.