
Key Themes: Reflections from the Child Indicators Projects

General Uses of Child Indicator Studies

Mairéad Reidy, Ph.D.

Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256-5174 (phone)
reidy-mairead@chmail.spc.uchicago.edu

This short paper discusses the general usefulness of indicator studies, and is based, in part, on discussions among the fourteen states participating in the ASPE Child Indicators Project.

Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions of these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.

  • Indicator studies can signal, in general terms, that things may not be working as planned, and can guide decisions on the types of in-depth evaluations that might be helpful.
  • Using indicator studies, policymakers and researchers can examine broad trends over time. We track indicators over time for a range of purposes including the following:
    • to describe child, family, and community conditions
    • to inform state and local community planning and policymaking
    • to improve programs for children and families (e.g., increase access, improve quality)
    • to measure progress in improving child outcomes
    • to monitor changes for children in relation to investments and policy choices
  • We can enhance the power of indicators to monitor broad trends and to inform policy when we analyze how sets of indicators vary across socioeconomic and demographic subgroups, counties, regions, and so on, and when we cluster indicators and examine whether sets of related indicators move in the same direction. It is important to remember that a change in one indicator may disguise movement in another area. Without looking at a series of indicators, what appears to be a clear improvement may in fact be deterioration. For example, improvements in kindergarten retention rates (where fewer children are being retained) imply that children are faring better. However, without looking at other indicators (for example, the later success rates of children not retained), it is difficult to interpret whether a change in the retention rate in fact shows that children are doing better. Likewise, to be sure that a decline in substitute care placement rates is a positive outcome for children, the indicator should be accompanied by other indicators, such as declining rates of child abuse and neglect. A minimal sketch of reading related indicators together appears after this list.
  • Although indicator studies cannot establish causal relationships between initiatives and outcomes, they can monitor progress toward outcomes across time. Indicator studies can thus play critical roles in monitoring progress towards goals and in documenting whether changes in outcomes are occurring in desired directions.
  • Indicator studies can warn that things are not working as planned. Such warning can precipitate in-depth evaluations (Prosser and Stagner, 1997).
  • The power of indicators to monitor policy and program outcomes can be enhanced if they are used in conjunction with a logic model. The logic model can guide decisions about what indicators should be measured and in what order the measurement should take place. The logic model enables us to measure short-or intermediate-term outcome indicators with some confidence that observable change on those outcomes will be followed later by changes in longer-term outcomes.
  • Indicators can complement the data collected from impact studies by placing results in the context of broader social and economic trends.
  • The conditions necessary for implementing impact studies do not always exist. Governments are sometimes reluctant or unable to prevent the exposure of families to a promising new initiative, making it impossible to create a control group. Furthermore, when policy changes occur quickly and in multiple programs, as is the case with many statewide early childhood initiatives, it is often extremely difficult to isolate the effects of a single policy or initiative using impact studies (Child Trends 2000). Under these circumstances, indicators may serve as the only source of information on the general direction of change.
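
As a minimal illustration of reading related indicators together (referenced above), the following Python sketch uses invented figures to show how a seemingly favorable trend in one indicator can diverge from a companion indicator; the numbers and the indicator pair are hypothetical.

```python
# Hypothetical illustration: reading a set of related indicators together.
# All figures below are invented.

years = [1996, 1997, 1998, 1999, 2000]
retention_rate = [8.1, 7.4, 6.8, 6.0, 5.5]           # percent retained; lower is better
later_success_rate = [72.0, 70.5, 68.9, 67.0, 65.2]  # percent of promoted children meeting standards; higher is better

def improving(series, lower_is_better):
    """True if the series ends better than it started."""
    change = series[-1] - series[0]
    return change < 0 if lower_is_better else change > 0

retention_ok = improving(retention_rate, lower_is_better=True)
success_ok = improving(later_success_rate, lower_is_better=False)

print("Retention rate improving:", retention_ok)   # True
print("Later success improving:", success_ok)      # False
if retention_ok and not success_ok:
    print("Divergent trends: the drop in retention alone does not show "
          "that children are faring better.")
```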

References:

Child Trends (2000). "Children and Welfare Reform: A Guide to Evaluating the Effects of State Welfare Policies on Children." Child Trends.

The Importance of Cross-Agency and State-Community Collaboration in Child Indicator Development: Reflections from the Child Indicators Project

Mairéad Reidy, Ph.D.

Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256-5174 (phone)
reidy-mairead@chmail.spc.uchicago.edu

This short paper is based on discussions among the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on the importance of cross-agency and state-community collaboration for developing and sustaining indicator work, and on the factors that contribute to successful collaboration.

Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions of these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.

  • Cross-agency collaboration is seen as critical to the development and sustainability of indicators. The Child Indicators Project centered around partnerships among state government agencies with lead responsibility for addressing children's issues and programs, including children's health, education, welfare, and income support programs. These partnerships were of central strategic importance in building widespread support for establishing goals for children and for sharing responsibility for building indicators and tracking progress towards these goals.
  • Cross-agency collaboration is more likely when agencies are working toward child outcomes that many can rally around, when no one agency is solely responsible for moving the indicator, and when agencies understand the interconnectedness of the effects of program expenditures across agencies.
  • School readiness is an example of such a child outcome. The multidimensional nature of school readiness means that no one agency is solely responsible for moving the indicators. Likewise, the interconnectedness of the effects of expenditures across agencies provides an incentive to collaborate. For example, a department of education understands that money invested in health and early childhood care makes its own work easier down the line. Similarly, child welfare agencies need high-quality childcare slots available for at-risk children.
  • Many states also point to the importance of locating a school readiness indicators initiative within a centralized body, such as a governor's children's cabinet, and of complementing such a top-down approach with grassroots approaches that involve the community at all levels.
  • There is widespread agreement across states that it is critical to establish true partnerships among such community stakeholders as residents, parents, teachers, health care providers, and others. These partnerships are critical to the identification of community-relevant indicators, to the interpretation of readiness profiles, and to the effective use of indicators to inform policy changes at the state and local levels.
  • Cross-agency collaboration is also critical to assembling the resources necessary for indicator development: although it is possible to draw on existing staff and resources for indicators developed using administrative data, indicator development involving survey work can be costly, and there is much competition for scarce resources.

Guiding Principles for Selecting Child Indicators: Reflections from the Child Indicators Project

Mairéad Reidy, Ph.D.
Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256-5174 (phone)
reidy-mairead@chmail.spc.uchicago.edu

This short paper is based on discussions among the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on the factors that are important in the selection of indicators at the state and local levels.

Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions of these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.

  • When monitoring goals for children or measuring outcomes from child initiatives, the experience of the participating states in the ASPE Child Indicators Initiative suggests that the choice of indicators is typically driven by the policy priorities and energies of policymakers, the audience for the indicators, the availability of data, and the strongest predictors of desired outcomes.
    The priorities of policymakers can change often. When policymakers need to justify expenditures to the legislature to secure continued funding for an initiative, the priorities of the legislature will take precedence and measures will be selected to cater to those priorities. Advocates, service providers, and researchers may all demand and require different indicators than those useful to legislators. What satisfies one may be confusing to another, and all need different levels of detail and explanation. The choice of indicators will also be determined by available data. Typically, health data aside, early childhood data is scarce. Initiatives will understandably focus at least initially on what is measurable. Choice is further guided by the outcomes of interest and the research on legitimate predictors of outcomes and interim measures of expected change, as articulated in the theory of change.
  • It was widely believed that communities need to own the indicators. The choice of indicators at the community level will also be determined by the needs of the community, where the energies of a community lie, and the availability of trend data.
    Communities need to feel invested in indicator selection and use. For communities to own the indicators, many states believed, community members must participate in the selection process and must not feel that the choice has been foisted on them. As at the state level, the choice of indicators at the community level will be determined by the needs of the community, where its energies lie, and the availability of data. Some states, such as Vermont, that provide available trend data on outcomes to communities have found that these data can be an important catalyst for community engagement and ownership of indicators.
  • It is critically important to select measures that are appropriate for diverse cultural, racial/ethnic, and economic groups and are adaptable to local circumstances.
    Cultural differences may mean that certain indicators, useful in some states or in some communities, may be irrelevant in others. For example, whether a child is read to every day may have less meaning in a state such as Alaska in which some cultural groups rely more on an oral tradition.
  • At both the state and community levels, it is critical to choose measures that have high-quality data that will be available for a period of years.
    There are groups of indicators that fall more easily into this category, those with their origin in ongoing vital statistics, Census data, and administrative data. This is not to preclude indicators built from sample surveys, but we need to acknowledge that samples of sufficient size are needed to produce reliable state and regional data, and that these surveys need to be repeated to build trend data (a brief sketch of how sample size drives reliability appears after this list).
  • At both the state and community levels, measures selected should be clear in interpretation over time, across localities and subgroups.
    Trends in an indicator should ideally represent unambiguously whether conditions are improving. It should be clear that when an indicator moves in a particular direction, it represents an improvement (or deterioration) in well-being overall. School achievement test data in Florida have, at times, excluded certain children, including those with low attendance throughout the year, making it very difficult to use the indicator to say reliably whether schools are getting better or worse over time. Sometimes we can improve clarity by mapping sets of related indicators. A decline in the percentage of children in special education programs can be considered an improving picture for children if we also show that fewer children need services. Some indicators, such as child abuse and neglect rates, foster care placement rates, and juvenile crime rates, are particularly sensitive to variation in practice at the district level. These rates reflect the practices of child protection agencies and the criminal justice system at the local level (who gets reported and enters the system, the rate of referral of juvenile offenders to courts, etc.). When measures are sensitive to variation in practice over time or to variation at the regional or local level, every effort should be made to acknowledge these differences.
  • Measures selected should also be shown to be robust and comparable for the socioeconomic and demographic groups involved in the initiative.
    Many measures have been developed using white, middle-class families, and can fail to pick up important dimensions of the lives of particular cultural or income groups.
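
To illustrate the sample-size point referenced above, here is a brief sketch, with invented figures, of the approximate 95 percent margin of error for an estimated proportion at several sample sizes; it uses the standard normal approximation for a simple random sample, so it understates the uncertainty of more complex survey designs.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Invented example: an indicator estimated at 20 percent.
for n in (100, 400, 1600):
    moe = margin_of_error(0.20, n)
    print(f"sample size {n:4d}: 20% +/- {moe * 100:.1f} percentage points")
```

Quadrupling the sample roughly halves the margin of error, which is why reliable substate estimates are costly.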

Data Development Principles for States and Communities Engaged in Child Indicator Studies: Reflections from the Child Indicators Project

Mairéad Reidy, Ph.D.

Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256-5174 (phone)
reidy-mairead@chmail.spc.uchicago.edu

This short paper is based on discussions among the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on a series of data development principles for states and communities engaged in child indicator studies.

Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions of these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.

  • Focus attention initially on a small number of measures for which you have data.
    Don't let the perfect be the enemy of the good by waiting for the perfect set of indicators. It is important to start somewhere, even if modestly.
  • There is also a need to intentionally maximize existing data sources.
    Maximizing the use of administrative databases is seen by many states as a priority, given the tremendous cost associated with surveying and direct assessments, and the possibility of low response rates for such surveys. Some states have therefore employed what are referred to as "data quality technicians" to assess existing data sources across departments.
  • Be up front about data quality at all times, and acknowledge the limitations of administrative data.
    It is critical to recognize that measures of problems such as child abuse and neglect identified from administrative records do not necessarily give a true measure of the extent of child abuse and neglect in society. These records simply represent our system's response to abuse and neglect, and will exclude any cases that are not brought to the attention of, or found to be substantiated by, the relevant authorities. Such measures can also be sensitive to variation in practice either over time or at the regional or local levels, so that a similar case may be substantiated in one county and not in another, making comparisons over time or across regions problematic. It is important to get behind the indicator and acknowledge who may be omitted from the measure, and that variation in practice may hinder comparison.
  • Build sets of indicators incrementally.
    Although states tend, in the short run, to focus attention on a small number of measures for which they have data, most agree that in order to build an effective series of indicators, it is important to have a broader, longer-term vision, and to build sets of indicators incrementally.
  • If you cannot measure the outcomes of interest immediately, concentrate initially on interim or proxy measures of expected change.
    If you can show that some of the very strong predictors of the outcomes of interest are being put in place, it may be reasonable to predict that in the long run the outcome of interest will improve. For example, improving the school readiness of children may take many years and be very difficult or costly to measure, but if you can show improvement in some of the strong predictors of school readiness, such as increased access to high-quality child care or to primary health care providers, it may be reasonable to predict that the school readiness of children will improve.
  • Recognize the need to develop new data sources.
    The states that have made the most advances in child indicator development recognize that it is often necessary to incorporate multiple data collection strategies and perspectives. For example, the states that have made the most progress in developing school readiness measures recognize that to do so, it is generally necessary, in addition to developing administrative data, to survey children, parents, teachers, school principals, health care providers, or community groups and, although most have no plans to do so, to engage in direct child assessment.
  • States recognize that it is critically important to use measures that are appropriate for diverse cultural and racial/ethnic and economic groups and are adaptable to local circumstances, and most grapple with how to find and test these measures.
    Cultural differences may mean that certain indicators that are useful in some states may be irrelevant in others. For example, whether a child is read to every day may have less meaning in a culture that relies more on an oral tradition.
  • A useful strategy for many states to reduce survey costs has been to piggyback on existing surveys, and to tap into the Internet for data collection.
    Rhode Island, for example, has successfully added school readiness and childcare measures to both its Market Rate survey and its School Accountability for Learning and Teaching (SALT) survey. Vermont has successfully added questions to the Youth Risk Behavior Survey (YRBS) and the Search Institute's Asset Survey. Problems that many states cited with such piggybacking include the lack of control it offers over the timing of measures and the inability to plan for monitoring trends over time. In addition, some states have mentioned the role that the Internet can play in collecting information. Public schools are increasingly connected to the Internet, and those connections may help secure information from children and teachers. Participating states gave serious thought to concerns regarding parental consent to such data gathering and confidentiality. States indicate that they need more help with sampling strategies and with identifying community samples that reflect the diversity of the community.
  • It is typically agreed that surveys should include scales and items from previous surveys and that assessment should be based primarily on instruments, scales, and items from existing procedures with known reliability and validity for the contexts in which they are used.
    Sometimes we lack reliable and valid measures for particular population subgroups. In the past, states have had to forge ahead without these assurances. More recent developments in national surveys focusing on school readiness, with samples including extensive subgroups of low-income and minority children, or, in the case of the Family and Child Experiences Survey (FACES), focusing exclusively on low-income families, are beginning to provide extensive information on the generalizability of measures across subgroups. The final test of questions and items may be whether they fit with a state's early childhood emphasis.

Communications Strategies for Reporting Indicators at the State and Local Levels: Reflections from the State Child Indicators Project

Mairéad Reidy, Ph.D.

Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256-5174 (phone)
reidy-mairead@chmail.spc.uchicago.edu

This short paper is based on discussions among the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on communications strategies for reporting indicators at the state and local levels.

Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions of these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.

  • There is a need for a common language.
    Generally, there is widespread agreement across states that it is important to develop and sustain a common language around indicators to convey common meaning among different people.
  • Indicators should be clear in interpretation.
    The selected measures must make sense to the layperson, so it is critically important to select the most intuitively meaningful measures. Child care turnover rates, or the number of child care settings a child experiences in a given time period, for example, can be measured in a number of different ways. With different data, we can produce both an event turnover rate and a cohort turnover rate. Typically, we report the event rate: the rate of turnover in a given year or time period. The cohort rate, by contrast, identifies the rate of turnover over a number of years among a particular birth cohort or subgroup, and is typically significantly higher than the event rate. The cohort rate is sometimes far more intuitive to interpret, and the communication impact of choosing it over the event rate could be enormous. States like Vermont are moving in this direction. A small sketch contrasting the two rates appears after this list.
  • It is critical to be honest about data quality issues.
    Although it is important to not let the perfect be the enemy of the good, it is essential to be frank about issues of data quality. It is important to publicize the data with all its defects. This can help interpretation and additionally can shine light on those responsible for data collection and lead to improved data.
  • It is important to strategize around as many "publics" as possible.
    Many states believe that it is essential for effective communication to strategize around as many "publics" or audiences as possible. Legislatures, parents, community leaders, and the media often need different kinds of reports and levels of detail and explanation. It is critical to explain data in terms that resonate with the specific audience. For example, provider turnover rates in childcare might be effectively linked to the impact that turnover has on the business community.
  • It is essential to communicate effectively and planfully with the media.
    In communicating with the media, it was generally believed that it is important to put out reports frequently and to set out conclusions in layperson's terms. Rhode Island typically gives advance copies of reports to the media, allowing time for clarification of uncertainties prior to deadlines. Some states suggest that it is important to couple data with human presentations. If reporting, for example, on the results of a survey of kindergarten teachers, it can be helpful to have a panel of teachers share their experiences at the same time.
  • The importance of communities owning data, and obtaining community input on indicator selection via roundtables or other forums was stressed.
    Measures collected across all communities should be augmented with additional measures that are pertinent to local circumstances. Standardization across communities is valuable because it allows important comparisons across communities, but indicators that are sensitive to the unique community-specific characteristics of children and families will be more relevant to those interested in charting change over time within a community. It is furthermore critical that states communicate results to communities before releasing the data publicly.
  • It is important to offer some training to communities on how to interpret data.
    Community-level indicators can be very powerful tools. They can give communities information about the areas in which they have been most successful and the dimensions on which greater efforts must be expended. States reported that both the state and the communities themselves tended toward community- or county-level performance comparisons. States agreed that it is useful for communities to hear that there is always a distribution of performance, and that some communities will always rank above or below any given community. Community-level data can be particularly useful for tracking trends over time. It is thus as important to stress their usefulness in allowing communities to compare themselves with their own past performance as in making multi-community comparisons. It is also critical for communities to take into account their socioeconomic and demographic makeup, and to understand that multivariate tools are often necessary to assess how a community might expect to perform on indicators relative to other communities given that makeup.
  • An important general concern was the issue of indicator improvement over time, and its implications for consistency and for monitoring long-term trends. In particular, states that had developed innovative and improved indicators and had replaced old indicators with these new ones worried about losing meaningful trend analyses. States encouraged each other to continue to collect both the old and new indicators, and to drop the old indicators only when a reasonable time trend in the new ones had been achieved.
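
The following sketch contrasts the event and cohort turnover rates described above using a handful of invented child care spell records; the data and rates are hypothetical, intended only to show why the cohort rate typically runs higher than the event rate.

```python
# Invented child care spell records: for each child, (year, setting_id) pairs.
children = {
    "A": [(1998, 1), (1999, 1), (2000, 2)],
    "B": [(1998, 3), (1999, 4), (2000, 4)],
    "C": [(1998, 5), (1999, 5), (2000, 5)],
    "D": [(1998, 6), (1999, 7), (2000, 8)],
}

def changed_in_year(spells, year):
    """True if the child's setting in `year` differs from the prior year."""
    by_year = dict(spells)
    return year in by_year and year - 1 in by_year and by_year[year] != by_year[year - 1]

def ever_changed(spells):
    """True if the child changed settings at any point in the window."""
    settings = [s for _, s in sorted(spells)]
    return any(a != b for a, b in zip(settings, settings[1:]))

# Event rate: share of children who changed settings in 2000 alone.
event_rate = sum(changed_in_year(s, 2000) for s in children.values()) / len(children)
# Cohort rate: share of the 1998-2000 cohort who ever changed settings.
cohort_rate = sum(ever_changed(s) for s in children.values()) / len(children)

print(f"Event turnover rate (2000): {event_rate:.0%}")         # 50%
print(f"Cohort turnover rate (1998-2000): {cohort_rate:.0%}")  # 75%
```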

Political, Legal, and Technical Issues in Data Linking: Reflections from the Child Indicators Project

Mairéad Reidy, Ph.D.

Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256-5174 (phone)
reidy-mairead@chmail.spc.uchicago.edu

This short paper is based on discussions among the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on the political, legal, ethical, and technical challenges they face in data linking. It is not a comprehensive review of the challenges of data linking but rather focuses on those issues pertinent to participating states and discussed during the Child Indicators Technical Assistance workshops.

Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions of these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.

Purposes of Linked Data

  • The broad goal of linking or integrating administrative records among Child Indicator Initiative participants is to generate new knowledge about the prevalence and patterns of service use of children and their families.

    Sometimes data linking is necessary to satisfy federal reporting requirements, but more typically it is done to answer specific questions about outcomes or service utilization among clients. The Minnesota Department of Human Services, for example, has linked TANF data with Medicaid, housing, employment, and child support records, both for the federal TANF report and for use in a TANF longitudinal study. It has further linked Medicaid records with SSI records to help identify children with disabilities who are not receiving Medicaid services.
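
As a simplified sketch of the Minnesota-style linkage described above, the following Python fragment uses invented identifiers to find children who appear in SSI records but not in Medicaid enrollment; in practice such a link rests on carefully matched administrative files rather than on clean shared IDs.

```python
# Invented client identifiers; real linkage requires careful record matching.
ssi_children = {"1001", "1002", "1003", "1004"}   # children receiving SSI
medicaid_children = {"1002", "1004", "1005"}      # children enrolled in Medicaid

# Children with disabilities (per SSI) not receiving Medicaid services.
unserved = sorted(ssi_children - medicaid_children)
print(unserved)  # ['1001', '1003'] -> candidates for outreach
```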

Data Sharing Across Agencies

There is a general consensus that although great progress has been made on the technological aspects of data linking and establishing a common identifier, we still have a lot of unanswered questions regarding the political and legal challenges around confidentiality.

Political Concerns with Data Sharing

  • There is a need to build bridges between state initiatives, state agencies, and communities to help promote buy-in on the importance of data sharing by all responsible agencies, and to ensure public engagement in the sharing process.
  • A clearer articulation of the benefits of data linking must be used to rally support at the agency level around data linking. When approaching agencies to ask them for data, it is essential to be up front about the benefits of data linking.
  • Even when there is buy-in on the importance of data sharing, agencies that own their data often have not established rules for data sharing. Their decisions to share data can seem arbitrary. Sometimes it is speculated that staff at certain agencies are reluctant to share data for tracking purposes because of the increase in workload such an agreement would necessitate. Rules for data sharing would be welcomed to avoid such arbitrariness and speculation.

Legal Concerns with Data Sharing Across Agencies

States noted the following legal concerns in their data linking work:

  • There is considerable variation in privacy laws across states.
  • When planning new data collection, or when planning to integrate administrative data, states suggest it is important to take the following steps:
    1. Know the requirements of confidentiality before collecting data.
    2. Use lawyers as consultants at the planning level as this can help determine how far study and data collection can go without violating laws.
    3. Examine all aspects of obtaining active or passive consent before settling on one form of obtaining consent.
  • Participants suggested a variety of approaches to sharing data legally; each carries its own concerns.
    1. Informed Consent
      An informed consent process could empower a family to approve the use and sharing of data related to them. Questions were raised about whether families understand the rights they are signing away. In legal circles, there are claims that an individual must understand what he or she is signing away for that act to be binding.
    2. Use of Social Security Numbers or 'Blocking' to link data
      An umbrella tracking system, based on clients' social security numbers, could allow researchers to identify when and in which services individuals enroll. However, the line where confidentiality begins and ends is blurred: some states regard using social security numbers as identifiers as a breach of federal law, and some do not. Also, in large states like California, fraudulent social security numbers are easily bought and sold. Tracking can also be done through "blocking," a process that uses a combination of individual characteristics, such as name and date of birth. This works well in some states. However, in substate jurisdictions with small populations, this kind of information could lead to disclosure of an individual's identity and thus violate confidentiality.
    3. Universal Identification Number
      Agencies could create a universal identifier based on encrypted names. This would allow tracking without breaching confidentiality. However, encryption requires high technological capacity and collaboration among agencies.
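
A minimal sketch of the encrypted-identifier idea follows, assuming the participating agencies agree on a shared secret key and on which fields to normalize (both assumptions for illustration, not a prescribed standard); a keyed hash stands in here for whatever encryption scheme agencies actually adopt.

```python
import hashlib
import hmac

# Assumed shared secret held by participating agencies (illustrative only).
SHARED_SECRET = b"secret-agreed-upon-by-participating-agencies"

def universal_id(name: str, date_of_birth: str) -> str:
    """Derive a stable pseudonymous identifier from normalized fields,
    so records can be matched without exchanging names in the clear."""
    normalized = f"{name.strip().upper()}|{date_of_birth}".encode("utf-8")
    return hmac.new(SHARED_SECRET, normalized, hashlib.sha256).hexdigest()

# The same person yields the same identifier in every agency's system,
# regardless of capitalization or stray whitespace in the source record.
print(universal_id("Jane Doe", "1994-03-15"))
print(universal_id("  jane doe", "1994-03-15") == universal_id("Jane Doe", "1994-03-15"))  # True
```

A keyed hash, unlike a plain hash, cannot be reversed by an outsider guessing names, which is one reason key management across agencies matters.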

Criteria for a Common Identifier

California houses the largest Medicaid database in the country and is in the process of cleaning other data in the system to link with it. Researchers in California have initiated an attempt to form a common identifier for clients in the system, to minimize duplication and to allow tracking across systems once linking occurs. They have identified six criteria that the common identifier must meet:

  1. Universality. The identifier can be assigned to every client with ease.
  2. Durability. Once assigned, the identifier can follow an individual through his or her entire service-use history.
  3. Non-invasiveness. Assigning the identifier should not violate confidentiality.
  4. Flexibility. The identifier must be able to move through and beyond agency boundaries with ease.
  5. Uniqueness. The identifier must contain enough digits or letters to be unique to the individual.
  6. Financial feasibility. The whole process has to take place under tight budgets.

Technical Issues with Data Sharing Across Agencies

Though there are many obstacles to constructing linked data, participants maintained that the solutions lie in creativity; everybody involved can contribute and must remain flexible. Some important considerations in establishing linked databases were highlighted, including the following:

  • It is critical to assign common identifiers at uniform periods in the clients' lifetimes (e.g., birth, immunizations, etc.).
  • It is important to distinguish between household and family data. An individual can live in multiple households and with multiple families. Relationship data, although difficult to maintain, are the most consistent data one can keep as far as a household is concerned: a mother will always be a mother; foster parents remain foster parents until their role ends.
  • It is essential to link data incrementally.
  • When matching, there are two potential forms of error. One is the mismatch rate: the rate at which records are matched, but to the wrong people. The other is the missed-match rate: the rate at which true matches are not made at all. Matching techniques tend to trade one type of error for the other. Each type is inevitable; however, researchers must pay attention to why matching problems are occurring and adjust their matching techniques according to which type of error is most acceptable for their particular study. A minimal sketch of this trade-off follows the list.
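
The trade-off between the two error types can be seen in a small sketch: a strict matching rule and a loose one applied to the same invented records, where the strict rule produces a missed match and the loose rule produces a mismatch. The records and rules are illustrative only.

```python
# Invented records from two agencies: (name, date_of_birth).
agency_a = [("SMITH JOHN", "1995-01-02"), ("DOE JANE", "1994-03-15")]
agency_b = [("SMYTH JOHN", "1995-01-02"),   # same child, surname misspelled
            ("DOE JOAN", "1994-03-15")]     # different child, same birth date

def strict_match(a, b):
    """Exact name and date of birth: few mismatches, more missed matches."""
    return a == b

def loose_match(a, b):
    """Same birth date and surname initial: fewer missed matches,
    more risk of matching the wrong person."""
    return a[1] == b[1] and a[0][0] == b[0][0]

for rule in (strict_match, loose_match):
    pairs = [(a, b) for a in agency_a for b in agency_b if rule(a, b)]
    print(rule.__name__, "->", pairs)
# strict_match finds nothing (a missed match); loose_match recovers the
# SMITH/SMYTH pair but also pairs DOE JANE with DOE JOAN (a mismatch).
```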

Data Warehousing: Lessons Shared About the Linking Process

The Minnesota Department of Human Services, in developing a data warehouse, put forward the following lessons learned over the course of its work:

  • Approach warehouse development incrementally.
  • When feasible, link the data in the source systems prior to extracting data to the warehouse.
  • Source data are most reliable when collecting them is part of the purpose of the source system.
  • Similar data submitted by disparate systems often yield unreliable comparisons.

Use of Census 2000 and the American Community Survey for Indicators at the State and Local Levels

by:
Cynthia M. Tauber, U.S. Census Bureau and University of Baltimore
Mairéad Reidy, Chapin Hall Center for Children

This short paper draws on the presentation made by Cynthia Tauber (U.S. Census Bureau and University of Baltimore) at the Spring 2001 Child Indicators meeting on the use of Census 2000 and the American Community Survey for child indicators at the state and local levels.

Sponsored by the U.S. Department of Health and Human Services (HHS), Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and the David and Lucile Packard Foundation, the Child Indicators project has aimed over the past 3 years to promote state efforts to develop and monitor indicators of health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest.

  • The presentation provided an overview of Census 2000 and the American Community Survey and reviewed how these can be used to build child and family well-being indicators.

Census 2000

  • The main purpose of Census 2000 is to count the population every 10 years, while the American Community Survey provides yearly updated information on the characteristics of the population. Both provide statistics for small geographic areas and small population groups. The questions on the American Community Survey provide indicators that are similar to those of the Census 2000 long form.
  • The Census 2000 short form asks seven questions of every person and housing unit in the U.S., covering age, race, Hispanic origin, gender, household relationship, and housing tenure (owned or rented). Field staff determine the characteristics of vacant housing units.
    • Respondents could select one or more races, a change from 1990. As in past Censuses, there is a separate question on Hispanic origin.
    • Less than 2 percent of the total U.S. population marked two or more races; the percentage is higher among children. There are 126 race and Hispanic origin categories in some Census products. Most products, however, show only the counts for the six single-race groups and "two or more races." See Sharon M. Lee, Using the New Racial Categories in the 2000 Census (http://www.aecf.org/kidscount/categories.htm).
  • Additional questions are asked in the long form of a sample of housing units and people living in group quarters.
    • Population statistics are provided on a range of topics including marital status, place of birth/citizenship, disability, ancestry, migration, language spoken at home and ability to speak English, school enrollment and educational attainment, grandparents as caregivers, place of work and journey to work, occupation, industry and class of worker, work status in the week before the Census or the last year in which the person worked, and income in 1999.
    • A new question asked about grandparents as caregivers for dependent children and for how long they had been responsible for their basic needs.
    • In Census 2000, the disability question specifically asks about vision or hearing impairments as well as conditions that limit learning or remembering.
    • Housing statistics based on the long form include number of rooms and bedrooms, plumbing and kitchen facilities, the age and value of the housing unit, and questions to indicate housing affordability including the cost and type of utilities, mortgage/rent paid, and taxes and insurance.
    • Results are available at many geographic levels, including the block (short-form information only), block group, census tract, county, metropolitan area, state, and nation.
  • A significant change from the 1990 Census is the race question. Various groups are working out options for comparing racial categories from the 1990 and 2000 Censuses.
  • Information about Census 2000 products, documentation, and the product release schedule is available on the Census Bureau's website: http://www.census.gov.

American Community Survey

  • The American Community Survey, once the sample is fully implemented in every county (planned to start in 2003), will provide annual-average estimates of demographic, housing, social, and economic characteristics, updated every year, for the nation, all states, and jurisdictions of 65,000 or more people. Statistics for small areas will be updated as multi-year averages (3-year averages for areas of 20,000 to 64,999 people and 5-year averages for areas of fewer than 20,000 people); a small sketch of such averaging appears after this list. With the annually updated averages, it will be possible to measure changes over time for small areas and population groups.
  • The American Community Survey provides new opportunities for researchers. Because the statistics are updated every year, researchers can measure the level and direction of change and build indicators of program performance. Information about migration patterns will also be available. The survey supports the assessment of needs and resources and informed strategic decisionmaking.
  • The Census Bureau plans to replace the long form with the American Community Survey for the 2010 Census.
  • Congress approves the questions on the decennial Census and the American Community Survey, and it has approved only those questions mandated or required by federal legislation or court cases. This presents considerable challenges to adding new questions to the American Community Survey or the next Census.
  • Be cautious about comparisons of survey and administrative datasets. There are crucial differences in concepts and data collection methods among datasets. As such, estimates of population characteristics from surveys such as the decennial Census and the American Community Survey will differ (see http://www.ubalt.edu/jfi/jfc/publications.htm).
  • Researchers are encouraged to report their needs for tabulations to the Census Bureau to consider for future American Community Survey or Census products.
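
As a small illustration of the multi-year averaging referenced above, the following sketch (with invented yearly estimates) computes the 3-year and 5-year rolling averages that would be published for mid-sized and small areas, respectively; longer windows trade timeliness for stability.

```python
# Invented yearly estimates for a small-area indicator (e.g., a percent).
yearly_estimates = [18.2, 19.1, 17.6, 18.8, 20.3, 19.5]

def multi_year_average(series, window):
    """Rolling mean over `window` years, as in 3- and 5-year averages."""
    return [round(sum(series[i:i + window]) / window, 1)
            for i in range(len(series) - window + 1)]

print("3-year averages:", multi_year_average(yearly_estimates, 3))
print("5-year averages:", multi_year_average(yearly_estimates, 5))
```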