Mairéad Reidy, Ph.D.
Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256 5174 (phone)
This short paper is based on discussions between the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on a series of data development principles for states and communities engaged in child indicator studies.
Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past three years to promote state efforts to develop and monitor indicators of the health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another in areas of common interest. This short paper draws on the discussions at these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.
- Focus attention initially on a small number of measures for which you have data.
Don't let the perfect be the enemy of the good by waiting for the perfect set of indicators. It is important to start somewhere, even if modestly.
- Make deliberate, maximal use of existing data sources.
Maximizing the use of administrative databases is seen by many states as a priority, given the tremendous cost associated with surveying and direct assessments, and the possibility of low response rates for such surveys. Some states have therefore employed what are referred to as "data quality technicians" to assess existing data sources across departments.
- Be upfront about data quality at all times, and acknowledge the limitations of administrative data.
It is critical to recognize that measures of problems such as child abuse and neglect identified from administrative records do not necessarily give a true measure of the extent of child abuse and neglect in society. These records simply represent our system's response to abuse and neglect, and will exclude any cases that are not brought to the attention of, or found to be substantiated by, the relevant authorities. Such measures can also be sensitive to variation in practice over time or at the regional or local level, so that a similar case may be substantiated in one county and not in another, making comparisons either over time or across regions problematic. It is important to get behind the indicator and acknowledge who may be omitted from the measure, and that variation in practice may hinder comparison.
- Build sets of indicators incrementally.
Although states tend, in the short run, to focus attention on a small number of measures for which they have data, most agree that in order to build an effective series of indicators, it is important to have a broader, longer-term vision, and to build sets of indicators incrementally.
- If you cannot measure the outcomes of interest immediately, concentrate initially on interim or proxy measures of expected change.
If you can show that some of the very strong predictors of the outcomes of interest are being put in place, it may be reasonable to predict that in the long run the outcome of interest will improve. For example, improving the school readiness of children may take many years and be very difficult or costly to measure, but if you can show improvement in some of the strong predictors of school readiness, such as increased access to high-quality child care or increased access to primary health care providers, it may be reasonable to predict that the school readiness of children will improve.
- Recognize the need to develop new data sources.
The states that have made the most advances in child indicator development recognize that it is often necessary to incorporate multiple data collection strategies and perspectives. For example, the states that have made the most progress in developing school readiness measures recognize that to do so, it is generally necessary, in addition to developing administrative data, to survey children, parents, teachers, school principals, health care providers, or community groups and, although most have no plans to do so, to engage in direct child assessment.
- States recognize that it is critically important to use measures that are appropriate for diverse cultural and racial/ethnic and economic groups and are adaptable to local circumstances, and most grapple with how to find and test these measures.
Cultural differences may mean that certain indicators that are useful in some states may be irrelevant in others. For example, whether a child is read to every day may have less meaning in a culture that relies more on an oral tradition.
- A useful strategy for many states to reduce survey costs has been to piggyback on existing surveys, and to tap into the Internet for data collection.
Rhode Island, for example, has successfully added school readiness and child care measures to both its Market Rate survey and its School Accountability for Learning and Teaching (SALT) survey. Vermont has successfully added questions to the Youth Risk Behavior Survey (YRBS) and the Search Institute's Asset Survey. Problems many states cite with such piggybacking include the lack of control it offers over the timing of measures and the inability to plan for monitoring trends over time. In addition, some states have mentioned the role that the Internet can play in collecting information: public schools are increasingly connected to the Internet, and those connections may help secure information from children and teachers. Participating states gave serious thought to parental consent and confidentiality concerns surrounding such data gathering. States indicate that they need more help with sampling strategies and with identifying community samples that reflect the diversity of the community.
- It is typically agreed that surveys should include scales and items from previous surveys, and that assessment should be based primarily on instruments, scales, and items from existing procedures with known reliability and validity for the contexts in which they are used.
Sometimes we lack reliable and valid measures for particular population subgroups. In the past, states have had to forge ahead without these assurances. More recent developments in national surveys focusing on school readiness, with samples including extensive subgroups of low-income and minority children, or, in the case of the Family and Child Experiences Survey (FACES), focusing exclusively on low-income families, are beginning to provide extensive information on the generalizability of measures across subgroups. The final test of questions and items may be whether they fit with a state's early childhood emphasis.