Mairéad Reidy, Ph.D.
Senior Research Associate
Chapin Hall Center for Children
University of Chicago
(773) 256 5174 (phone)
This short paper is based on discussions between the fourteen states participating in the ASPE Child Indicators Project. It focuses on state reflections on communications strategies for reporting indicators at the state and local levels.
Sponsored by the U.S. Department of Health and Human Services (HHS) Office of the Assistant Secretary for Planning and Evaluation (ASPE), with additional support from the Administration for Children and Families (ACF) and The David and Lucile Packard Foundation, the Child Indicators project has aimed over the past three years to promote state efforts to develop and monitor indicators of the health and well-being of children during this era of shifting policy. The fourteen participating states are Alaska, California, Delaware, Florida, Georgia, Hawaii, Maine, Maryland, Minnesota, New York, Rhode Island, Utah, Vermont, and West Virginia. Chapin Hall Center for Children provided technical assistance to grantees. Grantees typically exchanged knowledge and expertise through a series of technical assistance workshops coordinated by and held at Chapin Hall Center for Children. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. This short paper draws on the discussions at these meetings as well as individual consultation with states. I am grateful to participants for sharing their insights.
- There is a need for a common language.
There is widespread agreement across states that it is important to develop and sustain a common language around indicators, so that the same terms convey the same meaning to different people.
- Indicators should be clear in interpretation.
The selected measures must make sense to the layperson, so it is critically important to select the most intuitively meaningful measures. Child care turnover, for example (whether measured as provider turnover or as the number of child care settings a child experiences in a given time period), can be calculated in several different ways. With suitable data, we can produce both an event turnover rate and a cohort turnover rate. Typically we report the event rate: the rate of turnover in a given year or time period. The cohort rate, by contrast, measures turnover over a number of years within a particular birth cohort or subgroup, and is typically significantly higher than the event rate. The cohort rate is often far easier for people to interpret intuitively, and the communication impact of choosing it over the event rate could be enormous. States such as Vermont are moving in this direction.
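To make the distinction concrete, the two rates can be sketched with entirely hypothetical numbers. The 20 percent annual rate, the five-year window, and the independence assumption below are illustrative assumptions, not figures from any participating state.

```python
# Hypothetical illustration of event vs. cohort turnover rates.
# All numbers are invented for demonstration only.

# Suppose a community has 100 child care providers in a given year,
# and 20 of them leave that year: the annual EVENT rate is 20%.
providers = 100
departures = 20
event_rate = departures / providers  # 0.20

# A COHORT rate instead follows one birth cohort of children over a
# multi-year period and asks what share experienced at least one
# provider change. If (as a simplifying assumption) each child faces
# an independent 20% chance of a change each year, the five-year
# cohort rate compounds well above the annual event rate.
years = 5
cohort_rate = 1 - (1 - event_rate) ** years  # about 0.67

print(f"Annual event rate:  {event_rate:.0%}")
print(f"5-year cohort rate: {cohort_rate:.0%}")
```

The gap between the two figures (20 percent versus roughly 67 percent in this toy example) is exactly why the choice of measure matters for communication.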
- It is critical to be honest about data quality issues.
Although it is important not to let the perfect be the enemy of the good, it is essential to be frank about issues of data quality. It is important to publicize the data with all of their defects. This can aid interpretation, and it can also shine a light on those responsible for data collection and lead to improved data.
- It is important to strategize around as many "publics" as possible.
Many states believe that effective communication requires strategizing around as many "publics" or audiences as possible. Legislatures, parents, community leaders, and the media often need different kinds of reports and different levels of detail and explanation. It is critical to explain data in terms that resonate with the specific audience. For example, provider turnover rates in child care might be effectively linked to the impact that turnover has on the business community.
- It is essential to communicate with the media effectively and deliberately.
In communicating with the media, it was generally agreed that it is important to issue reports frequently and to state conclusions in layperson's terms. Rhode Island typically gives the media advance copies of reports, allowing time to clarify uncertainties before deadlines. Some states suggest coupling data with human presentations. When reporting on the results of a survey of kindergarten teachers, for example, it can be helpful to have a panel of teachers share their experiences at the same time.
- It is important for communities to own their data, and for states to obtain community input on indicator selection through roundtables or other forums.
Measures collected across all communities should be augmented with additional measures that are pertinent to local circumstances. Standardization across communities is valuable because it allows important comparisons among communities, but indicators that are sensitive to the unique characteristics of a community's children and families will be more relevant to those interested in charting change over time within that community. It is furthermore critical that states communicate results to communities before releasing the data.
- It is important to offer some training to communities on how to interpret data.
Community-level indicators can be very powerful tools. They can tell communities where they have been most successful and where greater effort must be expended. States reported that both the state and the communities themselves tended toward community- or county-level performance comparisons. States agreed that it is useful for communities to hear that there is always a distribution of performance, and that some communities will always rank above or below any given community. Community-level data are particularly useful for tracking trends over time, so it is as important to stress their value in allowing communities to compare themselves with their own past performance as in making multi-community comparisons. It is also critical for communities to take their socioeconomic and demographic make-up into account, and to understand that multivariate tools are often necessary to assess how a community might expect to perform on an indicator relative to other communities, given that make-up.
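The multivariate adjustment described above can be sketched in miniature. The communities, child poverty rates, and indicator values below are invented, a single predictor stands in for a fuller set of socioeconomic characteristics, and a real analysis would use proper statistical software; the point is only the logic of comparing each community to its expected value rather than to a raw average.

```python
# Hypothetical sketch: compare each community's indicator to the value
# "expected" given its socioeconomic make-up, not to the raw average.
# All data are invented for illustration.

communities = {
    # name: (child poverty rate %, indicator value %)
    "A": (10, 12),
    "B": (25, 22),
    "C": (40, 35),
    "D": (15, 20),
}

xs = [v[0] for v in communities.values()]
ys = [v[1] for v in communities.values()]
n = len(xs)

# Ordinary least-squares fit with one predictor (a stand-in for a
# fuller multivariate model).
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

for name, (x, y) in communities.items():
    expected = intercept + slope * x
    print(f"Community {name}: actual {y:.1f}, expected {expected:.1f}, "
          f"difference {y - expected:+.1f}")
```

A community with a high raw rate but a small (or negative) difference from its expected value is performing about as well as its make-up predicts, which is a fairer message to communicate than its rank alone.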
- An important general concern was indicator improvement over time and its implications for consistency and for monitoring long-term trends. In particular, states that had developed innovative, improved indicators and replaced old indicators with them worried about losing meaningful trend analyses. States encouraged one another to continue collecting both the old and the new indicators, and to drop the old ones only once a reasonable time trend in the new ones had been established.