To optimize the quantity and quality of data available to support the analyses for this report, data were obtained in two ways. First, in Alabama and Louisiana, we analyzed individual-level data that Mathematica had already acquired for the Robert Wood Johnson Foundation MaxEnroll project. In both administrative files, the data contain a monthly eligibility code that allows us to identify children who enrolled through ELE versus those who enrolled through standard processes. Using these data, we answered the questions above by, first, identifying the number of new enrollees who obtained coverage under ELE and under standard enrollment pathways in the two states and, second, comparing the available demographic characteristics and durations of coverage of these two groups of new enrollees.
Second, in two other ELE states—Iowa and New Jersey—we carried out a similar analysis by using aggregate data tables requested from each state’s Medicaid and (as necessary) CHIP data administrator.31 We submitted shells for these tables, shown in Appendix B, to states along with information about how to populate them.
On the first data table, we asked the states to provide counts of monthly new enrollments processed by ELE and by traditional methods for children who qualified for Medicaid or CHIP on the basis of income (rather than for another reason, such as disability or foster care status).32 The requested data ranged from the year before the state adopted ELE (as determined by CMS) to the most recent month available. Next, we asked states to provide the monthly enrollment counts disaggregated by several demographic characteristics. For example, we requested data on a child’s age, primary language, citizenship status, household income, and urban/rural status. We also asked states to review past enrollment records for a recent period of prior public coverage in Medicaid or CHIP, to help address whether ELE enrollees are truly new to the system and how recently they had last been covered.
The second data table requested information on continuous coverage, disenrollments, and transfers, stratified by ELE status. For example, for each monthly cohort of new enrollees in the specified program (CHIP or Medicaid), we requested data on the number of beneficiaries who remained enrolled in the program 6, 12, 13, 18, and 24 months after initial enrollment, how many had disenrolled from the program at those time points, and how many had transferred to the other program at those time points. To assess the extent to which ELE enrollees “churn” back to the same program after disenrolling, among those individuals who disenrolled fewer than 13 months after initially enrolling in the program, we requested data on the number who reenrolled (via ELE or traditional routes) within 3, 6, or 12 months.33
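The cohort counts requested in the second table can be illustrated with a small sketch. This is not the states’ actual reporting logic; the record layout, field names, and the simplifying assumption that each enrollee is summarized by a single continuous-spell length are all illustrative.

```python
from collections import defaultdict

# Follow-up points specified in the data request (months after initial enrollment).
FOLLOW_UP_MONTHS = (6, 12, 13, 18, 24)

def retention_counts(enrollees):
    """Illustrative computation of the requested retention counts.

    enrollees: iterable of (cohort_month, months_enrolled) pairs, where
    cohort_month labels the monthly cohort of new enrollees and
    months_enrolled is the length of the continuous spell that began in
    that month (an assumed, simplified record layout).

    Returns {cohort_month: {t: count}}, the number of beneficiaries in
    each cohort still enrolled t months after initial enrollment.
    """
    counts = defaultdict(lambda: {t: 0 for t in FOLLOW_UP_MONTHS})
    for cohort_month, months_enrolled in enrollees:
        for t in FOLLOW_UP_MONTHS:
            # Still enrolled at month t only if the spell lasted beyond t months.
            if months_enrolled > t:
                counts[cohort_month][t] += 1
    return dict(counts)
```

Counts of disenrollments and transfers at each follow-up point, and of reenrollment within 3, 6, or 12 months among those disenrolling before month 13, would be tabulated analogously from the same cohort structure.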
To simplify the data request and ensure that we would be able to provide quality assurance by reviewing the spreadsheets for internal consistency, we chose not to request data on several more complex measures of interest such as the average length of continuous enrollment, average gap in enrollment, or churning rate. In the second year of the evaluation, we plan to analyze these and several other retention outcomes using individual-level administrative data for the six states with ELE programs effective as of December 2010.
We shared the table shells with states in January 2012 and scheduled an orientation call, including both policy staff and the information systems staff directly responsible for compiling the data request, to walk through the table shells and guidebook and to answer any immediate questions. We then periodically reached out to states to assess progress and provide support until the populated tables had been submitted in a usable format, and conducted follow-up calls as needed to ensure that we understood the data that states had reported.
Some data elements in the tables could not be populated by states because of limitations in their data systems, limiting the extent of certain comparisons. Most notably, in New Jersey, the state data system only maintains new enrollment counts for applications processed by the state’s vendor and so does not include counts for children who enrolled at county Medicaid offices (approximately two-thirds of new enrollments).34 This limitation does not affect the counts of new enrollments linked to ELE in the state, which are all processed centrally; however, it does substantially understate the count of “non-ELE” new enrollees, in turn undermining comparisons between the two groups. In addition, Iowa’s separate CHIP program and New Jersey were not able to include information on enrollees’ prior public coverage. Each state was missing some important demographic and family-level characteristics or had a high proportion of missing values; this was particularly true in Alabama and Louisiana, where the data files were created not for this project but for another purpose (as discussed above).
Because ELE is being used to enroll Medicaid- or CHIP-eligible but uninsured children who participate in other public programs or who have been identified using partner agency data, this analysis focuses on assessing ELE’s impact on new enrollment. We also anticipate that new enrollee counts will be more sensitive to ELE than total enrollment counts would be. For Alabama and Louisiana, where we had access to individual-level enrollment data, we defined a new enrollment as a two-month spell of coverage preceded by a gap in public coverage of at least two months. We asked states completing the aggregate enrollment tables to mimic this definition as closely as their data constraints allowed.
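The new-enrollment definition applied to the individual-level files can be sketched as follows. This is an illustrative reading of the rule, assuming coverage is recorded as a sorted list of covered month indices per child; for simplicity, a spell beginning at the start of the observation window is treated as qualifying, whereas the actual analysis would look back into prior records.

```python
def new_enrollment_months(covered_months):
    """Return the start months of spells that count as new enrollments:
    at least two consecutive covered months, preceded by a gap in
    public coverage of at least two months (illustrative sketch).

    covered_months: sorted list of integer month indices in which the
    child had Medicaid or CHIP coverage.
    """
    starts = []
    for i, m in enumerate(covered_months):
        # A gap of >= 2 uncovered months means the previous covered
        # month is at least 3 months earlier (assume the start of the
        # observation window also qualifies, for illustration).
        gap_ok = i == 0 or m - covered_months[i - 1] >= 3
        # The spell must last at least two consecutive months.
        has_second_month = (i + 1 < len(covered_months)
                            and covered_months[i + 1] == m + 1)
        if gap_ok and has_second_month:
            starts.append(m)
    return starts
```

For example, a child covered in months 1–3, 7–8, 10, and 14–15 would contribute new enrollments starting in months 1, 7, and 14: the month-10 spell fails both the gap test and the two-month minimum.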
31 Ideally, we would have conducted the analysis using individual-level data for all ELE states, but the timeline for this study was too short to establish new data use agreements. Thus, for these four states, we obtained only aggregate data, specified in a series of table shells. Two other ELE States, Maryland and Oregon, could not respond to our request: Oregon lacked sufficient staff resources to populate the tables, and Maryland’s data systems lack a marker identifying ELE enrollees, so it could not populate the tables successfully.
32 The tables are structured to collect Medicaid and CHIP data separately, recognizing that the two programs’ data systems and availability of variables might differ in some States.
33 We requested data on churning; however, the follow-up periods are currently too short to provide an accurate picture in most states. Therefore, we have not included this in the first-year evaluation, but will assess differences in churn rates in the second year of the evaluation.
34 New Jersey Department of Human Services, personal communication, April 2, 2012.