A Summary of the Meeting of
April 28 through 30, 1999
Advancing States' Child Indicator Initiatives is sponsored by the Office of the Assistant Secretary for Planning and Evaluation of the U.S. Department of Health and Human Services. Martha Moorehouse of ASPE is the Project Officer. This summary was prepared by the Chapin Hall Center for Children at the University of Chicago. Harold Richman is the project Principal Investigator and Mairéad Reidy is the Project Director.
Chapin Hall Center for Children
1313 East 60th Street
Chicago, Illinois 60637
773/753-5900
Lunchtime Opening Session
Harold Richman
Harold Richman, Director of Chapin Hall, welcomed the delegates from the 13 original project states and those from a state new to the project, California. He stressed the important role that state input had played in planning the meetings. Richman then introduced Martha Moorehouse, Senior Research Policy Analyst in the Office of the Assistant Secretary for Planning and Evaluation of the Department of Health and Human Services.
Martha Moorehouse
After expressing her pleasure at being at the meetings and noting the team approach that she and her ASPE colleagues take to the project, Martha Moorehouse echoed Richman, stressing that meeting plans were made in response to concerns and interests expressed earlier by the states. Moorehouse noted that the participation of California is supported by the Packard Foundation.
Moorehouse said that some states had asked whether there are good data on conditions in states of which they are not aware. She said that ASPE staff had discussed the availability of data and felt that much of the expertise on where states are and where they can go, and where communities are and where they can go, was in the room. She said that the meeting's greatest goal was to make it possible for the states to build on each other's accomplishments and expertise. Moorehouse then introduced Ann Segal.
Ann Segal
Ann Segal, Deputy Assistant Secretary for Policy Initiatives at HHS, welcomed the delegates and described a new effort, recently announced and sponsored by the Office of the Vice President, called BOOST (Better Opportunities and Outcomes Starting Today). BOOST, with no money attached, is meant to establish a three-way federal-state-local partnership to increase flexibility to improve outcomes for children. Communities are required to:
- Have outcomes for children in mind
- Measure those outcomes over time
- Have a set of plans for things that they want to do
The announcement of BOOST yielded 60 generally broad proposals. Segal noted that this project is both helping remove federal and state barriers and also revealing that some perceived barriers are not actually inhibiting progress. Segal expects that fourteen BOOST communities will be selected. She said that the link to the states lay in the indicators, and she expected that ASPE would contact the states about this initiative. She said that BOOST is a test to see whether communities can be helped to improve their indicators on the ground. Segal then introduced Jody McCoy.
Jody McCoy
Policy Analyst Jody McCoy focused her remarks on the process for applying for ASPE funding for the project's second year. She said that each currently funded project was eligible to apply for further funding next year and that she expected Chapin Hall to host two meetings next year. She then highlighted two key points found in the letter announcing the availability of second-year funds:
- The second-year application should include an overview of states' first-year efforts to meet the goals and objectives presented in the first year's application for funding and should describe the proposed activities for the second year to further these goals. ASPE considers this second-year funding to be in support of current work, and no change of direction or other alteration in the partnership will be required.
- The application should describe how the state expects to continue working on children's well-being indicators after the end of the second year. ASPE is not concerned whether states retain their current team or staffing structure, or whether they seek outside funding, but does wish to see how states have thought about institutionalizing this work within the state government structure.
The application due date was June 15, and awards will be made in late September, one year after the date of the original awards.
Mairéad Reidy
Mairéad Reidy, Chapin Hall Senior Research Associate, sketched the three-track meeting agenda and the structure of the meetings. She explained how selected states will present their experiences in case-study format in sessions and outlined the role of the resource persons. Reidy then invited the resource persons to introduce themselves, followed by representatives of each state.
Track 1, Session 1:
Public Engagement in the Indicator Process, Part 1
The session began with an introduction from Ada Skyles of Chapin Hall, who touched on the following overarching themes of public engagement work:
- Engagement is an active process; one never completes public engagement
- Different strategies are required for accessing different publics
- Engagement work must be anchored to your end goal for indicator use
- The litmus test for public engagement is what will be useful for the audience you are trying to engage
- It is important to be realistic about the process of public engagement
Skyles concluded by introducing her co-facilitator, Janet Bittner from the Carl Vinson Institute of Government at the University of Georgia.
Janet Bittner
Bittner started by discussing the why, who, what, where, and how of a framework of public engagement. As to why engage the public in indicator work, Bittner invoked the creation of sounder public policy and stronger community planning as possible outcomes of indicator work. In terms of who should be engaged, she suggested that states work to conceive of as many "publics" as possible (such as advocates, business people, human service providers, the media, and organized labor). Turning to the "what," Bittner suggested that states think about which specific indicators would create the messages necessary for speaking to different audiences. Finally, under the combined category of "where and how," she urged states to carefully examine both their process of data collection and their process of dissemination.
States supplemented these categories with additional questions including:
- At what point in the process should data collection commence?
- How do we convince stakeholders to become involved?
- How do we ensure access to data for the largest number of users?
Session resource person Ralph Hamilton, of Palm Beach County, Florida, added that it is important to start with the customer in mind, and to realize that public engagement involves moving power from one place to another. Finally, he noted that tools used for engagement have to serve a defined strategy.
Utah
This discussion among the states was followed by two state presentations. The first was made by Rita Penza of Utah, who was filling in for an absent colleague from Utah Children, the Kids Count organization in Utah. Penza presented a public engagement program called the Advocacy Academy. The program, created to encourage wider and more active use of the Kids Count data in Utah, offers child advocacy education to community-level leaders. Over a three-day period, participants are trained in legislative communication and processes, grassroots techniques, media relations, and data usage. Primary users of the program have been teachers and youth development professionals. Each participant agrees to give two community presentations upon completion of the program.
West Virginia
This presentation was followed by a presentation from Steve Heasley of West Virginia. Mr. Heasley spoke on lessons learned in West Virginia through their Family Resource Networks. These Networks are composed of groups of family members, community members, and service providers who meet to promote positive outcomes for children, youth, and families. Heasley began by citing the following three main points:
- It is worth taking the time to build the infrastructure for community input in your state.
- This building process requires an investment of money and technical assistance
- The Family Resource Networks are guided by the notion that the real experts are the users of services
The Family Resource Networks have accomplished a great deal for West Virginia. Among the benefits, Heasley cited direct input from service consumers and mobilization of communities around issues that are important to them.
The session concluded with a discussion of themes from the afternoon, led by Bittner. The group concluded that:
- Public engagement should stimulate action.
- A common indicator language must be developed.
- This common indicator language must have the ability to break down turf barriers.
The discussion was to be continued in Track 1, Session 1, part 2.
Track 1, Session 1:
Public Engagement in the Indicator Process, Part 2
The session began with a review of the key issues raised in Part 1 of the session. Ada Skyles of Chapin Hall and Janet Bittner of the Carl Vinson Institute at the University of Georgia summarized the remaining public engagement questions into four areas:
- Policy input. How to engage communities in using indicators to impact policy? How can stakeholders be convinced to be involved?
- Engagement strategies. How do you know when you are engaging stakeholders? Who are the various publics? At what point should you try to engage? What are the challenges?
- Keeping the user in focus. How do you produce information that speaks to people? How do you collect and disseminate data to the largest number of users?
- Consumer engagement. What are the strategies for consumers?
Rhode Island
This categorization was followed by a presentation by Ann Marie Harrington of Rhode Island Kids Count, who spoke on Rhode Island's success in engaging a variety of publics in their indicators work. First, Harrington noted the successful partnerships that Kids Count had developed with state agencies. Specifically, she highlighted the creation of a data liaison, a single representative designated by each agency for the purpose of providing Kids Count with updated information. Second, she added that their success had been furthered by a positive relationship with the media, which allowed Kids Count to establish itself as a reliable source for stories. In summary, Harrington observed that their success with engagement was due to thinking about the process of engagement from the outset of the project, maintaining flexibility about routes to engagement, and embracing the errors they had made over the course of their existence.
Maryland
Karen Finn and Roann Tsakalas of Maryland followed Harrington. They spoke about their experience with engaging the public in the indicator development process, in particular the mechanisms used for public engagement in the creation of Maryland's eight results for children, which were developed by the Governor's Task Force. A series of community-level roundtables was held throughout Maryland to gather input on the eight proposed "results" for children. To accomplish this, Maryland relied on an internal structure of existing Local Management Boards composed of key community figures. The speakers stressed the distinction between asking community members to respond to a pre-developed set of results and having them develop the indicator list themselves; Maryland noted that the former choice was an asset to their process. Finn and Tsakalas concluded by noting that the roundtables served a dual purpose for future indicator work. First, the process helped educate people on the potential for indicator use as a measurement of community well-being. Second, the process helped state-level administrators understand what data were available, feasible, and sustainable at the community level.
Discussion
This presentation was followed by a concluding discussion of the implications of data dissemination for public engagement. Questions raised by the states included whether or not the state was in a good position to interpret county-level data, and how to get past people's anger upon releasing the data. During the discussion, resource person Ralph Hamilton offered an example of how to share the interpretation of the data with the affected communities. He said that a school-reform initiative with which he was connected produced a report anchored by data. They then convened a series of community meetings at which "individual communities reacted to the report." Hamilton's group drew on what was said at those meetings to produce a second, qualitative version of the report. He said:
The benefit was twofold. It validated the various communities' ownership of that knowledge, because it was their stories being written up. Secondly, it created a second press event for us when we released the second report which reinforced the first. It was a way to think about the indicators being manifested in numbers and in stories and having different lives as they took these different forms.
Skyles added that Hamilton's point stressed the importance of remembering that the numbers represent people and the interpretation belongs to the people whom it describes. "Telling the story behind the numbers allows the power of the numbers to come forth," she said.
Another key issue raised was the possibility that the data can stigmatize a community. In response to some of these concerns, Georgia has developed a handbook on how to use data appropriately. The group devoted some time to discussing the issue of stigmatization, racial and otherwise, resulting from indicator reporting. Strategies suggested for minimizing stigmatization include:
- Developing positive relationships with communities before releasing the data
- Considering carefully who delivers your message
- Using percent improvement as opposed to ranking to illustrate changes among communities (see the sketch after this list)
- Focusing on data that dispel current stereotypes
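A minimal sketch of the arithmetic behind the percent-improvement strategy follows; the counties and rates are hypothetical, not data presented at the meeting.

```python
# Hypothetical county rates (e.g., a per-1,000 child indicator) at two
# points in time; all numbers are illustrative.
baseline = {"County A": 12.4, "County B": 8.1, "County C": 15.0}  # 1997
current = {"County A": 10.9, "County B": 7.9, "County C": 12.3}   # 1999

for county in baseline:
    pct_improvement = (baseline[county] - current[county]) / baseline[county] * 100
    print(f"{county}: {pct_improvement:.1f}% improvement")
# Each community is measured against its own baseline, so all three can
# show progress; a ranking would still label County C "worst" by level
# even though its 18% improvement is the largest of the three.
```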
Summaries
Skyles asked Hamilton and Bittner to summarize the session. Hamilton said that it was critical to think in multiple dimensions and to develop a broad-based communications strategy. He said that the particular uses of the indicators need to be tied to a specific strategy, a clear articulation of why these indicators are being employed. Decisions about which audience to approach, at which level to approach an audience, and in what way to approach them need to be made in light of that clear strategic approach. Otherwise, the indicators may not have the maximum impact.
The session concluded with Bittner noting the importance of engaging all of the stakeholders in the complex world of many audiences. Citing Con Hogan, she said, "As complex as this is, if we don't keep the messages simple we won't be able to communicate effectively what we need to communicate . . . . As sophisticated as we need to be in thinking about this, how can we be as sophisticated about keeping it understandable?"
Track 1, Session 2:
Indicator Use in the Creation of Overall Goals for Children
The session began with an introduction by Fred Wulczyn of Chapin Hall, who outlined the session's three objectives:
- How to encourage buy-in on the part of policy makers
- How to build consensus around statewide goals
- How to create feedback processes to stimulate action
Wulczyn asked states to express their concerns or issues in these areas, and several states added that one should think and talk about engaging the entire range of policy people from middle-level bureaucrats to high-level elected officials, and also about how to create buy-in that can withstand changes in administration.
Ralph Hamilton
Wulczyn's opening remarks were followed by a presentation by resource person Ralph Hamilton, who spoke about the process of public engagement and the Palm Beach Children's Services Council. The Council is an independent taxing authority that levies funds within the county and distributes them as needed. The goal of the Children's Services Council is to create system reform that moves toward support of preventative programming. He began by noting three ways in which his experience was distinct:
- The Children's Services Council work was county-based and came with a different perspective
- This project was more interested in specific outcomes, so chose to focus less on specific measurements
- The Children's Services Council did not choose to do broad-based public engagement
Hamilton then shared his strategy. First, he looked at the power alignment in the county. Second, the task force focused their engagement efforts on leadership and agency directors. Third, they put together a very simple and clear framework of outcomes, based on developmental life stages, that was easily accessible to those they were trying to win over.
Hamilton concluded his presentation with the following lessons:
- Ideas matter
- Focus on where you want to be
- Delay "how" conversations until the general climate has changed
- Identify key leaders and sell ideas in different ways to different leaders with strategic message definition
- Map your successes
- Don't try to take on the whole system; match your strategy to your capacity
Delaware
Hamilton's presentation was followed by a presentation by Maria Aristigueta, Assistant Professor at the University of Delaware. Aristigueta spoke on her work around engaging state agencies in the use of indicators for results-based accountability through a strategic planning process. She presented her framework for indicator use and spent a good deal of time discussing the difficult process of attempting to find an agency willing to participate. Aristigueta's concern was that she had tried to engage both strong and weak agencies in the process, and neither seemed interested. After subsequent discussion, the group agreed that it would be better to begin by piloting her work with a strong agency before working with a weaker one. The group agreed that characteristics of a strong agency include:
- An agency that works well or is working consistently with the federal government
- An agency that uses Kids Count data well and is accustomed to indicator use
Maine
Following this discussion, Michael Lahti from the University of Southern Maine spoke about Maine's work in creating an indicator framework with their children's cabinet. Maine's primary challenge is now to move toward increased agency cooperation around the new indicator framework. It was noted that it is often useful to avoid organizing indicators around functions (i.e., categorizing indicators by health, education, etc.), as doing so can perpetuate the sense of separate agencies taking separate responsibilities for children's well-being. New York described its success in the area of agency cooperation around indicators, attributing that success, in part, to the fact that their governor championed their effort.
Vermont
Finally, Con Hogan, the Secretary of Human Services in Vermont, shared the story of indicator work in Vermont, noting some important ingredients to achieving buy-in from policy makers and agency leaders. First, he noted that it was important to use indicators to communicate successes to these leaders, giving agencies full credit for improvements. Secondly, Hogan suggested providing state legislators with district-level data, making them aware of the state of children in their district. He noted that distributing the data in this pro-active fashion changes the dynamic so that agencies hold the legislators accountable, rather than vice versa.
In conclusion, several contributors mentioned the importance of securing buy-in from middle-level managers in agencies since they are most likely to institutionalize the use of indicators.
Track 1, Session 4:
Technology and Indicators
Introduction
The session was opened by Fred Wulczyn of Chapin Hall, who called it a "session on disseminating information via the web."
"This is the end stage of public engagement," he declared, meaning that dissemination followed the earlier process of helping the public appreciate and accept indicators and was aimed at making information available and democratizing access so that indicators are a viable tool for decision making at all levels. He called the web an "enormously powerful" tool for this dissemination.
Two states, Minnesota and Vermont, presented. Please note that much of this session was devoted to demonstrations of the Minnesota and Vermont web sites, something that cannot be effectively rendered in a written summary. Readers may wish to contact the speakers directly with questions.
Minnesota
Jim Ramstrom spoke for Minnesota. Minnesota's program, Datanet, is a series of twenty different databases, including children's indicators. Datanet's goal is to create large databases that are flexible and dynamic. Minnesota places on the web a children's services report card using relatively low-cost hardware, software packages, and technical support in the form of student programmers.
Minnesota asks visitors to register when they come in, so that the state can keep track of which different organizations are using the system. Users of the site can choose indicator data by county and select a variety of comparisons. They can also aggregate by county. The maps and charts in the Minnesota presentation are dynamic and change with the population the user selects. Ramstrom sketched some of the uses for the information and indicated Minnesota's intention to use the 2000 Census data to support analysis at the block level. To make the data more accessible to the public, Minnesota uses maps to help users identify the Census tract of interest. This mapping package, like a number of packages Minnesota uses, is free.
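As a rough illustration of the kind of county-level selection and aggregation Ramstrom demonstrated, the sketch below pools a numerator/denominator indicator over user-chosen counties. The records, field names, and figures are hypothetical; they are not Datanet's actual schema or data.

```python
# Hypothetical county-level indicator records; not Datanet's actual schema.
records = [
    {"county": "Hennepin", "numerator": 31200, "denominator": 262000},
    {"county": "Ramsey", "numerator": 18900, "denominator": 131000},
    {"county": "St. Louis", "numerator": 6800, "denominator": 44000},
]

def aggregate_rate(selected):
    """Pool numerators and denominators across the user's chosen counties."""
    num = sum(r["numerator"] for r in records if r["county"] in selected)
    den = sum(r["denominator"] for r in records if r["county"] in selected)
    return 100 * num / den

# A user comparing one county with a two-county aggregate:
print(f"Hennepin alone: {aggregate_rate({'Hennepin'}):.1f}%")
print(f"Hennepin + Ramsey: {aggregate_rate({'Hennepin', 'Ramsey'}):.1f}%")
```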
Minnesota estimates that about 70 percent of users of the site are outside government. Schools are major users.
Vermont
John Ferrara of Vermont began his presentation by providing the URL for the Vermont Department of Education: www.state.vt.us/educ.
Vermont's web-based school report was launched in February 1996 with district-, school-, and state-level data. A policy research center, the Center for Rural Studies at the University of Vermont, provided technical expertise to Vermont's education department. The current version of the school report, developed by the state with the support of the Center, includes state data, school data, definitions of the indicators, and a variety of simple links to other Vermont data, including community profiles and tax information.
Vermont aims to provide data as close to the school level as possible. This can be hard because town- and school-level data don't always match. Murphey, also of Vermont, added that it is important to consider where change takes place so that the unit of analysis is the unit where change takes place. Ferrara added that, by statute, the school is now the "unit of accountability" in Vermont. Wulczyn commented on the importance of looking at change at the right time in the process.
School reports contain three or four years of data, if those data are available. The site also offers some context for the data elements to help users interpret the data and also additional community information, such as Census data.
Ferrara demonstrated the Vermont website and entertained questions from the audience on the sources of the data, including the contents of the standardized tests administered to students, and other topics. He also acknowledged Vermont's debt to Maryland's database development work and to the way in which information is organized on Maryland's site, found at www.mdk12.org/index.html. He named some of the software packages Vermont currently uses.
Murphey commented on some of the challenges of organizing and maintaining an accessible, understandable, user-friendly site. He reminded states to avoid letting technology get ahead of a public's capacity to ask meaningful questions of the data.
Track 2, Session 1:
Welfare Reform
Opening Remarks
The first Track 2 session, Welfare Reform, began mid-afternoon on Wednesday. The moderator was Mairéad Reidy. Reidy welcomed participants and took a few moments to outline the content of the Indicator Development Track. She said that, in discussions over the previous few months, the states had consistently raised three interrelated areas that are central to efforts to develop goals for children and define a core group of indicators that can be used regularly to track children's health and well-being during periods of dramatic policy change. These were:
- How to fill in the gaps in the construction and development of indicators to monitor what is happening to children as a result of welfare reform, particularly in such critical areas as child health and well-being. Children are affected by the social, physical, and economic environments in which they live, and welfare reform may potentially affect those environments. To gain a broad sense of the trends in child health and well-being as welfare reform plays out, we need to develop a set of indicators that capture these critical aspects of children's lives.
- How to build indicators of child care in response to the widespread agreement that accessible, affordable, quality child care is essential in moving parents from welfare to work, and that quality child care and early childhood education can foster children's intellectual, social and emotional development.
- How to develop indicators of school readiness and capture those early childhood health status and well-being (and quality child care) issues that are integrally linked to school readiness. A second priority was to think about adolescent health and indicators of positive youth development.
Reidy then raised some themes that cut across the sessions:
- How do we go about defining a core group of welfare reform, child care, school readiness, and health and well-being indicators that can be used to regularly track children? Are there models we should be looking to?
- How can indicator selection be informed by the research literature? How should we use the research on welfare reform that is emerging now to suggest the next generation of indicators to track?
- How do we take into account that some groups of children are differentially affected by policies? What do we have to do to ensure that we have tracked and monitored the effects on different subgroups of children?
Introductions
Reidy then introduced the three speakers for the session: David Murphey, Senior Policy Analyst in the Vermont Agency of Human Services, Sherry Campanelli, Associate Director of the Rhode Island Department of Human Services, and Christine Johnson, Policy Analyst at Florida State University.
Reidy noted that a few years ago, Vermont was one of 13 states to receive a planning grant to augment their welfare reform evaluation studies with measures of child outcomes. With funding from HHS and several foundations, and technical assistance from Child Trends Inc., this group met and worked to develop a framework to monitor the effects of welfare reform on children. In addition to identifying the targets of welfare reform, including income, employment and family formation changes, the framework captures aspects of children's lives likely to be affected by reform such as child care, the home environment, and parental psychological well-being. Child outcomes, including education, health and safety, and social and emotional adjustment form an integral part of the framework. David Murphey would speak to how Vermont has used this framework to inform the selection of indicators of welfare reform.
The Midwest Welfare Peer Assistance Group (WELPAN) has developed a second set of indicators that are being used by many states to monitor the effects of welfare reform. Sherry Campanelli would discuss Rhode Island's use of this framework.
Florida is an example of a state that in the selection of indicators has pulled from a variety of frameworks. Chris Johnson would discuss Florida's development of indicators, with a particular look at Florida's indicators that focus on children in general, poor children, and those going on and off welfare.
Reidy then introduced the resource person, Professor Larry Aber of the National Center for Children in Poverty at Columbia University. Also attending was Professor Tom Corbett of the University of Wisconsin School of Social Work and director of WELPAN. State delegates then introduced themselves and identified some of the indicators of child well-being under welfare reform in which they had particular interest.
Vermont
Murphey of Vermont noted that the state has a federal waiver that allows it great flexibility in implementing its welfare restructuring approach. This approach has always been child friendly and has resulted in a very strong interest in child outcomes. Welfare reform evaluation in Vermont is experimental in design, with a control and two treatment groups. (The control group receives support under the regulations as they existed before welfare reform. A second group receives increased supports and the third group receives those same increased supports, but is subject to time limits). Working with MDRC, Vermont developed an evaluation plan that included many child outcomes. Participating in the child outcomes planning project, and mapping the child outcomes framework to their plan, also provided good input and validation for their existing plan. Vermont employs a core set of domains of child outcomes that look at the direct and indirect, and positive and negative, impacts of welfare reform. Preliminary evaluation results (conducted by MDRC) show only small differences in selected outcomes between the control and experimental groups.
A parallel activity in Vermont has been the development of a set of population-based indicators as part of their community profiles work. The child outcomes framework again influenced the selection of indicators. This indicator framework is not tied to any one policy, but Vermont hopes that, over the long run, the indicators will tell something about any major policy change including welfare reform. Murphey highlighted the importance of the differences between data on the population at large and those on the participants in programs.
Discussion: Program-Level Evaluation versus Population-Based Indicators
Larry Aber pointed out that because Vermont has a welfare waiver, it has been able to employ a random-assignment welfare reform experiment. This option is often unavailable to states, and states need to understand how to use indicator frameworks to monitor the effects of reform. Murphey said that well-chosen population-based data could work as a substitute for program data in some instances. Aber again highlighted those aspects of the child outcomes framework that are related to child well-being. He noted that there may be positive and negative, and direct and indirect, effects, and that it was helpful to lay these out and think of effects going in different directions.
Rhode Island
Campanelli of Rhode Island began by noting that Rhode Island's welfare reform legislation predated the federal welfare reform legislation by 20 days and was implemented in May 1997. Unlike Vermont, Rhode Island did not have a federal waiver, but similar to Vermont, it provides many supports for families as they transition to work. Rhode Island is committed to making a long-term investment in families and children. Calling the system a work-support system, Campanelli sketched some of the specific benefits of the Rhode Island package, such as free child care, free health care, and generous income disregards, and cited the state's determination to see that children do not become worse off as a result of welfare reform.
Campanelli noted that, in the long run, the state would have the results of a longitudinal evaluation of welfare reform, but in the meantime, it used its integrated administrative data to build a series of indicators on program use (child care, food stamps, case management use, Medicaid, etc.). To comply with the reporting requirement of the reform legislation, and using state administrative data, Rhode Island produces a report on a broad range of case characteristics, income, and demographic items, modeled in part on the WELPAN framework. They tailored the WELPAN indicators to match Rhode Island's policy priorities. Campanelli distinguished two types of indicators: first, measures of behavior and of conditions or statuses that can be tracked over time across subgroups of people or geographic units; and second, process indicators, or measures of how well programs are functioning. She gave a number of examples of indicators being tracked, including the average cash assistance per case, which has been declining over time and which would receive significant attention in the subsequent discussion. She pointed to the value of developing administrative data for this purpose, allowing large numbers of cases to be tracked over time at relatively low cost. She also pointed to the need to flesh out children's issues in the next report.
She identified what she referred to as Rhode Island Road Rules for developing and reporting indicators:
- Beg, borrow, or steal data and ideas
- Start with the data you have
- Tailor your indicators to available data and to reflect your policy objectives
- Keep the communication simple, use graphics
- Highlight that which says something: focus on those indicators that relate to the treatment you are trying to effect
- Put your best stuff first and explain that your objective is to measure progress
- Don't rest on your laurels; outline the changes you anticipated and those you did not
Discussion: Interpretation of Indicators
Professor Aber asked the representatives from Rhode Island how confident they were of the meanings of the trends they found. He proposed that in order to interpret whether the decline in the average cash spent by the state per case over time was a positive or a negative outcome for families, it is important to also examine whether there have been any changes in average family income over time. The identified trend in state expenditure can mean two very different things for families (they are better off or worse off financially), and the additional information on total income can allow us to interpret the meaning of the indicator. He acknowledged the need to keep things simple, but pointed to the importance of bringing clusters of indicators together in order to check on the meaning of indicators. Unemployment insurance data could be used to partly explain the meaning of the observed decline in the average amount of cash assistance per case. This discussion was taken up more fully after the next presentation.
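Aber's point can be made concrete with a small sketch; the figures below are hypothetical and serve only to show how a second indicator changes the reading of the first.

```python
# Hypothetical trend data: the same falling cash-per-case series paired
# with two different income series leads to opposite interpretations.
years = [1997, 1998, 1999]
avg_cash_per_case = [420, 380, 340]     # monthly cash assistance ($)
family_income_a = [9800, 11900, 13600]  # annual income, scenario A ($)
family_income_b = [9800, 9300, 8900]    # annual income, scenario B ($)

for y, cash, inc_a, inc_b in zip(years, avg_cash_per_case,
                                 family_income_a, family_income_b):
    print(f"{y}: cash ${cash}/mo | income A ${inc_a}/yr | income B ${inc_b}/yr")
# Scenario A: earnings are replacing assistance, so families are better off.
# Scenario B: assistance is falling along with income, so families are worse
# off. The cash indicator alone cannot distinguish the two stories.
```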
Florida
Christine Johnson of Florida noted that her state's approach to welfare reform supports families making the transition from welfare to work. However, unlike Vermont and Rhode Island, Florida does not provide a "soft landing." Florida mixes incentives with disincentives, such as more stringent time limits and sanctions, intended not only to promote the work-first message but also to strongly discourage welfare dependency.
The focus of Florida's indicator development is the synthesis of information from research, administrative data, and other sources. The long-range goal is to shape a research agenda that will fill important data gaps and better inform policy makers about welfare reform's impacts on children.
Florida established a statewide task force to figure out how to generate information for indicators and how to promote their use. The task force was made up of a broad range of people, and in their selection of indicators, they drew on a number of child well-being frameworks including those developed by Child Trends, Florida Kids Count, Florida Benchmarks, and the state's performance-based budgeting practices. Florida would next like to gather information from the people who are using Florida's indicators (including child advocates and citizens interested in the policy process) in order to understand what kinds of information and indicators policy makers are looking for and need. They will use this feedback to help refine the indicators.
Florida's indicators look at three particular groups of the child population: children in general, poor children, and children affected by welfare reform. Johnson noted that children move among categories and that it is important to compare the statuses of children across categories. Johnson noted that they divide indicators into two groups: outcomes and contributing factors. Access to child care or medical insurance, for example, is not considered an outcome. Child neglect, by contrast, is clearly an outcome that can result from a lack of available child care. Policymakers need to get a handle on such contributing factors to know what they need to change in order to get outcomes moving in the right direction.
She discussed a number of special resources available to Florida for building indicators, including the Urban Institute's National Survey of America's Families, and some special studies on the effects of welfare reform on access to health insurance and mental health status. She noted that, to date, Florida's primary research focus has been on adults, not children, although the state is fortunate to be involved in a number of national research projects related to children. An initial inventory of national, state and local projects related to children and welfare reform has been developed so that synthesis of findings can begin in the second year of the project.
Johnson noted the challenges of finding meaningful data in some cases. She pointed to the problems of interpreting administrative data on service utilization when trying to understand the effects of welfare reform. How do we interpret increased use of asthma-related services, for example? Is the rate of asthma on the rise, or are children with asthma presenting for services more often?
Discussion
Frameworks
Calling the states brave, Aber said that they do not seem to need to invent new frameworks for themselves, but they do feel the need to tailor existing frameworks (such as the Child Trends and WELPAN frameworks) for a better fit. States recognized that the Child Trends and WELPAN frameworks have been developed through multistate partnerships. Aber also observed a lot of interest in how to choose a framework and thought it important that the frameworks be checked against the research base, but not determined by a research base that was itself evolving. He noted that the Child Trends framework was prepared "in dialogue" with research.
Martha Moorehouse noted that the frameworks are working across two lines of thinking. First, there is a series of core hypotheses about what is happening to families and children under welfare reform. Second, the frameworks reflect core sets of thinking about what the research has told us over time about the effects of poverty and employment, etc. Purposeful borrowing across frameworks would support the states' ability to tell stories about what is happening in the current service reform environment.
It was noted that some of the questions about choosing the right framework are signs of states building consensus over issues. Tom Corbett explained that WELPAN is a framework developed by eight Midwestern states seeking to get beyond caseload findings and understand the interrelated issues that underlie what is happening to families under welfare reform. WELPAN encompasses both the hard- and soft-landing approaches to welfare reform of its participating states, and yet these states came to agreement on a broad sense of outcomes. He noted that the language of WELPAN borrows heavily from the child outcomes process.
It was further noted that states view the frameworks as public education tools that help policy makers in state government legislative and executive branches, non-profits, and citizens' groups get somewhat on the same page.
Causality
Discussion then turned to such issues as what the presenting states can now say about the aggregate effects of welfare reform on the health and well-being of children. It was widely agreed that indicators cannot be used to establish causality. Changes in indicators may be the result of many factors, and it is usually very difficult to attribute them to specific (welfare) policy changes. Indicators do, however, provide valuable feedback that supports midcourse corrections.
This discussion touched on the relative merits of the use of population-based and program-derived data in developing indicators. The value of population-based indicators is in bringing attention to broad changes in well-being. These changes may be indirectly a result of a program or policy change. Program-level indicators can be compared with population data, such as is available in Kids Count, to address how the program population differs from the population at large.
Martha Moorehouse noted that the session's discussion was in tune with discussions at the federal level about causality. She noted that federal policy makers would not want to use indicators to get at the causal effects of welfare reform, and are careful in any work about indicators to make this clear. They do find them highly informative however. She stressed the importance of sets of indicators to track different kinds of information and to support comparisons. Referring back to the earlier discussion, she suggested that mapping trends in the rate of sanctioning and the earnings of families over time alongside expenditures on cases over time would allow us to interpret the decline in the average expenditure per case over time. She saw multiple ways to think about sets of indicators, and noted the importance of analyzing whether sets of indicators are tracking in the same direction, and of analyzing these sets across population subgroups. Participants agreed that it was essential to analyze multiple indicators simultaneously.
Moorehouse advised the states to be careful in how they describe change or its absence. "Poor kids are not doing well," she said. "If welfare reform is not improving the trajectories of poor kids, then kids are in trouble."
Outcome versus Process Indicators
In the absence of outcome indicators, participants suggested that it would be good to focus on measuring the key determinants of outcomes. Knowing that children have access to health care or high-quality child care may allow us to be confident about health and well-being outcomes. This plan was considered good in the short run. Over the long run, however, it was felt that we should plan to measure child and family well-being outcomes as well.
Selection, Development and Communication of Key Indicators
Aber suggested that in selecting indicators states take two things as given: first, that it will take five to ten years to develop adequate indicator systems; second, that early steps might be guided by making indicators as useful as they can be now without constraining the longer-term vision.
Discussion also focused on the value of descriptive information. It was argued that descriptive information can inform the development of indicators. Rhode Island pointed to their movement from a description of child care slots to the development of an indicator reflecting need versus supply of child care slots.
It was generally agreed that child well-being cannot be measured without new data collection. There is a need for both administrative data and population-based survey data. It is not possible to study the impacts of reform on children exclusively with administrative data as policies are diverting families from services. Discussion focused on how to develop and use surveys. Strategic implementation of surveys was considered to be essential. Suggestions included piloting in high impact communities and demonstrating the usefulness of the protocol before implementing statewide. Many states pointed to the need to produce indicators at the county or community level, and it was suggested that states select a number of representative communities to work in initially. Working collaboratively with universities and graduate students to mine existing administrative data was recommended.
The issue of indicator interpretation was an important one for states. The data are complex and open to judgment in their interpretation. It might be important to establish a task force to collectively review the indicators and reach consensus on what they are telling us. This group could be composed of the advocacy community, state policymakers, researchers, and others. It was suggested that if no one is responsible for interpretation, then no one owns the indicators. It is essential to expose the rival stories or interpretations of the indicators.
Aber concluded that issues not raised in the discussion included how to choose indicators pertinent to children at different stages of their lives.
Track 2, Session 2:
Child Care
Mairéad Reidy opened the session by noting the widespread agreement among the policy and practice communities that quality early childhood experiences are important for supporting children's social and emotional development, and that preventing families from requiring public assistance, moving families from public assistance to work, and sustaining that employment, requires that affordable child care be available. States had expressed a strong interest in developing appropriate indicators of child care availability, affordability, and quality. Serving as a resource person and facilitator for this session was economist Ann Dryden Witte, a faculty member at Florida International University with long experience in research on a number of aspects of child care availability, utilization, and outcomes.
Reidy sketched a number of topics she expected the session to address. These were:
- How do we develop indicators that capture those aspects of child care quality, affordability, and accessibility that help women maintain employment or that prevent them from using child care? She noted issues of stability, cost, location, group size, physical safety, and provider-parent relationships as examples.
- In developing child care indicators, how do we factor in culture and beliefs about child rearing? Is the meaning of an outcome the same across all groups? She noted, for example, that Hispanic families are less likely to use formal center-based child care, and some research shows that such care may not be in accordance with their socialization goals.
- What are the data issues involved in the development of child care indicators? We need to consider the limitations of both administrative and survey data. When and how can mothers and providers be used as sources of information? Which questions can providers and mothers reasonably address, and which need on-site data collection?
- What population should we look at? How much should the focus go beyond low-income families?
- Should we be concerned about developing outcome indicators on child development, and how do we view such child outcomes in cultural context?
Utah
Reidy then introduced the first of the two speakers: Rita Penza, Utah Child Indicators Project Coordinator. Penza noted that Utah was at an early stage in its indicator development. To support the goal of having all children live in a nurturing and economically secure environment, the child care objective in Utah is that all families have access to affordable, quality child care. To monitor progress toward this objective, Utah needed to develop indicators of child care accessibility, affordability, and quality.
Ms. Penza focused on Utah's work to measure child care availability. The indicator now being used is the number of licensed or certified child care slots relative to the number of children for whom both parents, or the only parent, work full or part time. She noted that the strength of this indicator lay in its ability to compare the number of slots with the demand for child care. She acknowledged that while this indicator measured the number of child care slots available, it did not provide information on the number of children filling those slots on a given day; it is possible that multiple children use a given slot at different times in a day. Utah does not measure other factors, such as affordability, acceptability, accessibility, and quality.
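A minimal sketch of the availability indicator as Penza described it follows; the counts are hypothetical.

```python
# Utah's availability indicator, as described: licensed or certified slots
# relative to children whose only parent, or both parents, work.
# The counts below are hypothetical.
licensed_slots = 48000
children_with_working_parents = 112000

slots_per_child = licensed_slots / children_with_working_parents
print(f"Slots per child potentially needing care: {slots_per_child:.2f}")
# Caveat from the session: one slot may serve several children at different
# times of day, so this ratio measures capacity, not children served.
```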
Ms. Penza then spoke of aspects of child care quality that Utah is attempting to measure using its database on licensed child care facilities. This licensing inspection database contains information on the type of facility (center-based care, licensed family child care, residential certified care, etc.) and the number of children served in the facility. In addition, although limited, the database contains information on important aspects of quality of care, such as whether staff are trained in CPR, the types of complaints filed against the facility along with the proposed correction plans, whether the center passed the local health inspection, and whether it has a fire escape. She noted that these data are further limited by the fact that they contain information only on children already in child care.
Utah's objective is to identify indicators that need to be tracked on an ongoing basis. A plan currently being discussed is to identify all of the indicators that can be produced using available (administrative) data. Child care providers, researchers, and parents would be asked to rate these indicators, and Utah would track those with the highest ratings.
Discussion
Witte noted that all licensed providers have on-site inspections and that inspection data can be helpful in getting at selected quality issues. She urged states to examine these data. She noted different levels of quality measures and urged participants to distinguish between process and outcome indicators. She noted that the licensing inspection data contained some relevant information that could be used to measure process quality. She suggested that valuable data might be gathered from regulatory groups that track the ongoing monitoring of data quality, although participants noted that such monitoring happens irregularly. This led to a number of observations about the forces that provide an incentive or a disincentive for providers to obtain licenses in different states, and hence about the ability of these data to represent all child care facilities. It also inspired discussion of how federal tax law definitions of an employee and an independent contractor have restricted states' abilities to gather tracking information on the use of subsidized child care by low-income families.
The session participants also discussed survey data quality and the importance of high response rates. Witte observed that one of the difficulties with surveys is the industry-wide trend of declining response rates. When response rates are low, we need to question the validity of the data. She recommended that states considering the use of surveys be sure to hire a good survey firm.
Rhode Island
Catherine Walsh, Rhode Island's Kids Count Director, discussed Rhode Island's development of measures of child care. The Rhode Island presentation was structured to address two main points: how Rhode Island's indicators were developed in stages, growing from descriptive information to indicators, and what remains to be done.
Interest in developing child care indicators arose in Rhode Island, in part, from research indicating that across the state only one in six children was in high-quality child care. This finding, while focusing attention on all children, also highlighted the need to pay attention to low-income children, whose access to high-quality care may be limited. The fact that two-thirds of women with children under age 6 were in the workforce, and that welfare reform was about to move many low-income women into the workforce, pointed to a need to understand child care access, affordability, and quality for all children. With welfare reform, Rhode Island began to work on adjusting child care subsidies so that they would be available to low-income working families, but there was some concern as to whether slots would be available for families transitioning to work. Investment in child care in Rhode Island is significant. The state has doubled its investment in child care, including federal and state dollars, and subsidies will be available at 250 percent of the federal poverty level (FPL) by January 2000.
Walsh went on to outline the evolution of a number of indicators. First, she showed the licensed and certified child care capacity for children under 6 in 1998 and compared the number of children in need of regulated care with the number of regulated child care slots. The second indicator she described was the percentage of eligible three- and four-year-olds enrolled in Head Start. Walsh compared Rhode Island as a whole with a series of core cities.
She noted that in the absence of perfect data it was often necessary to use the best available estimates. The denominator, the number of three- and four-year-olds eligible for Head Start, came from AFDC enrollment data. It is not possible to get poverty data at the city and town level in Rhode Island; the closest approximation of the child poverty level was provided by the number of children enrolled in AFDC, a program available to families at 100 percent of the FPL. She acknowledged that this had limitations, as some families who were eligible for cash assistance at the time did not take it up.
She further noted the need to make numerous assumptions when developing indicators. In working out the demand for center-based care, they factored in the results of a national survey which found that 47 percent of working parents choose center-based care. They further factored into the denominator the facts that only approximately 50 percent of AFDC families would end up in the workforce and that, among AFDC families, only approximately 75 percent would choose center-based care. They believed, therefore, that their final product was a good, if conservative, indicator of need relative to available slots.
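The arithmetic behind this estimate might look like the sketch below, using the percentages cited in the presentation; the child counts are hypothetical, and the exact way Rhode Island combined these factors may differ.

```python
# Hypothetical reconstruction of the need estimate for center-based care,
# using the percentages cited above; the child counts are invented.
working_parent_children = 20000  # children of working parents, non-AFDC
afdc_children = 6000             # children enrolled in AFDC

CENTER_SHARE = 0.47        # national survey: 47% of working parents use centers
AFDC_TO_WORK = 0.50        # ~50% of AFDC families expected to enter work
AFDC_CENTER_SHARE = 0.75   # ~75% of those families expected to choose centers

estimated_need = (working_parent_children * CENTER_SHARE
                  + afdc_children * AFDC_TO_WORK * AFDC_CENTER_SHARE)
print(f"Estimated need for center-based slots: {estimated_need:,.0f}")
# Comparing this figure with the count of regulated slots yields the
# conservative need-versus-supply indicator described above.
```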
Sometimes data collection is very time consuming. To get an estimate of the percentage of children served by the Head Start program at the community level, they called each Head Start program to gather information. Having data at the community or city/town level has been essential to highlight those communities that are underserved by Head Start and to demonstrate the undersupply of child care in rural communities. Rhode Island typically highlights what is happening in high-poverty communities (more than 15 percent of children living in poverty) relative to all communities.
For each indicator that Rhode Island Kids Count develops, they present a clear definition of the indicator, outline what it means and why they chose it, and explain why they believe it will be helpful to policymakers. Walsh then set out a framework of indicators to pay special attention to capacity, affordability, and quality issues in child care:
Capacity
- Number of children who need care
- Number of slots available (across different ages)
Affordability
- Number of low-income families needing child care (relative to the number of subsidies being used)
- Reimbursement as a percentage of market rate (an important benchmark)
- Availability of funds for new slots and improvements
Quality
- What percentage of centers are high, medium, or low quality?
- What is the staff turnover rate? (A recent NAEYC study showed that high-quality centers have a 13 percent turnover rate, while low-quality centers had a turnover rate of 50 percent; see the sketch after this list.)
- What is the staff's level of training and education?
- Are the practices developmentally appropriate?
- Is the staff-child ratio developmentally appropriate?
- What is the level of parent involvement?
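A minimal sketch of the turnover indicator from the quality list, judged against the NAEYC benchmarks cited above; the center's staffing figures are hypothetical.

```python
# Hypothetical staffing figures for one center over a year.
staff_at_start = 12
departures_during_year = 3

turnover_rate = departures_during_year / staff_at_start * 100
print(f"Annual staff turnover: {turnover_rate:.0f}%")
# Against the NAEYC benchmarks cited above (roughly 13% in high-quality
# centers, 50% in low-quality centers), 25% falls between the two,
# closer to the high-quality end.
```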
Discussion
Discussion turned to the importance of measuring quality and the need to find measures that are useful, usable, trackable, and not overly costly. The importance of developing indicators of quality across children of different ages and races/ethnicities was raised. A number of participants noted the importance of child turnover in addition to staff turnover. The issue of whether or not accreditation is a quality measure was discussed, and it was noted that different accreditation bodies have different standards, making it difficult to use accreditation effectively.
The discussion then moved to accessibility, with a large, rural state noting its own accessibility challenges. Witte referred the participants to the use of GIS in determining accessibility and mentioned that family child care seems to work better than child care centers in rural areas. Attention also focused on the relative merits of family and center child care, in particular what works best in terms of accessibility, especially in a rural state, and how to bring services to families who are not in center-based child care. In order to strengthen the quality of family child care, Rhode Island noted that it is exploring linking a family and their home-based child care provider with a child care center. Witte said that she sees three markets in child care: twenty-four-hour child care found near twenty-four-hour facilities such as airports and medical centers, centers along commuter routes, and family child care.
When attention focused on unmet need, Witte referred participants to a paper entitled "Estimating the Unmet Need for Child Care: A Practical Approach," which builds on the literature on estimating unmet need for medical services. She said that much of the focus around unmet need is on infant care but that we also need to pay attention to toddler care.
Discussion then moved to the selection of indicators, taking many issues into account. Aber and Witte, along with state delegates, agreed that a single indicator is not sufficient and that indicators are needed that capture different user populations (such as infants, toddlers, and before-and-after-school program participants) and different facets of the system (such as reimbursement rates, centers for disabled populations, and care for child protective services cases).
Ann Witte then summarized the session. She called both state presentations excellent and noted some additional things for states to consider, such as population density, special needs populations, and markets. The session concluded with discussion on how this child care indicators group might be linked with a child care research partnership.
Track 2, Session 3:
School Readiness and Health and Well-Being
The purpose of this session was to discuss development of school readiness indicators. Representatives from four states (Delaware, Utah, Vermont, and Alaska) presented case studies of their indicator development experiences and invited input from meeting participants.
Introduction
Harold Richman opened the session by sketching its organization and introducing resource person Martha Zaslow, Assistant Director of Research and Senior Research Associate at Child Trends.
Martha Zaslow
Zaslow posed several issues about school readiness and school-readiness indicators, drawing on a symposium she had recently attended.
- Conceptualization. How do you conceptualize school readiness? Is school readiness a discrete set of testable skills, is it community sensitive, or does it reflect a process?
- Emphasis. How do you want to focus your emphasis? School readiness measures may capture preliteracy skills or may concentrate on social and emotional skills.
- Timing. When do you measure school readiness: at the beginning of the school year, to capture the range of skills in the classroom, or at the end of the school year, after the class has had common experiences? Timing will depend on what we want to measure.
- Information. Who do we inform with school-readiness measures? Are they informative for parents, at the classroom level, school level, or at the community level? She noted that Utah's indicators are informative at multiple levels.
- Harm. There is a potential to create situations in which schools teach to a test, children are screened out of some educational options, or communities or children are labeled or stigmatized.
Delaware
Nancy Wilson, Interagency Policy Coordinator, presented for Delaware. Delaware's school readiness assessment process:
- Is curriculum-embedded
- Follows children's progress
- Is used to inform parents, teachers, and schools
Delaware's welfare reform program provides full funding for child care subsidies. Pre-K programming is funded at a level sufficient to serve 100 percent of the poverty population.
The educational climate in Delaware includes
- Standards-based assessment for education reform purposes that may track individual children's progress over time and may be linked to salaries
- School choice and charter schools
- A governor's office and legislature that are very interested in early childhood brain development research
Wilson noted that all of these changes are coming along at a time when some in the policy community would like to move more slowly.
Delaware is working with Sharon Lynn Kagan of Yale. They are also working with the University of Delaware on an interagency evaluation of early childhood programs. They are talking about measuring the status of children in five domains (physical, social-emotional, language and communication, cognition and general knowledge, and approaches to learning) and have a variety of ongoing programs to help develop measures, including focus groups, checklists, and other data collection activities.
Utah
Rita Penza, Utah's Child Indicators Project Coordinator, described the kindergarten readiness instrument produced by the state office of education. The instrument was developed in response to a legislative call for a pre- and post-kindergarten reading skills assessment. The mandatory pre-K assessment is used to provide information to schools, looking in particular at literacy and numeracy domains: visual and phonetic awareness, comprehension, literacy background, numeracy, and social adaptation. It is conducted individually by the student's regular teacher, and parents are at times present. It is offered in both English and Spanish. (Districts may perform other assessments in addition to, but not instead of, this one.)
The pre-K instruments are not made publicly available to prevent the possibility of child care providers teaching to the test.
Questions and Concerns
A number of individuals asked questions of Penza and voiced concerns. Some questions focused on other indicators related to school performance, such as health measures. Some attending the session worried that school-readiness inquiries might stigmatize children, families, or communities, and questioned whether the grade-like categories used in the assessments are indeed too much like grades. Discussion also turned to feedback to the community. Questioners asked if these assessments could be used to help direct community-level investment. One participant suggested that the results be shared with preschools and with parents so that they could teach the skills required on the test. Zaslow noted that McMaster University in Canada has an instrument that is both cognitive and social-emotional in its scope.
Vermont
David Murphey, Senior Policy Analyst in Vermont's Agency of Human Services, sketched his state's approach to school-readiness measures. Vermont began with a survey of kindergarten teachers, who were asked to rate their classes in aggregate in a handful of domains. He noted two problems with the system (some teachers disliked using it, and there was no inter-rater reliability) but said that it yielded some indicator-like information. It was last used during the 1997-98 school year.
In developing its school-readiness indicators, Vermont had three goals. It wanted an assessment that:
- Is grounded in the research literature
- Includes the idea of schools being ready for children and not just children's readiness for schools
- Includes children's health status
The new Vermont instrument, now in draft, has three parts: an individual assessment by the kindergarten teacher filled out two months into the school year, measures of schools' readiness for children, and a health care provider checklist.
Murphey noted the importance of grounding the assessment culturally and politically in his state. Vermont will be working with kindergarten teachers, health care providers, and others to get advice on the concept of school readiness and also to get buy-in. This will be part of a number of efforts to make sure that the assessment hangs together empirically.
He noted that the reasons for developing these indicators are unclear:
- Are the indicators for assessing student performances?
- Do they assess schools or preschool programs?
- Or are these community well-being indicators?
He also noted that there is disagreement over whether it is better to detect variation and target resources or to suppress variation to avoid screening out children. Murphey also offered what he called a radical proposition: that all kids are ready to learn all the time and that it is up to the school to figure out how to teach them.
Questions and Discussion about Vermont
Larry Aber of the National Center for Children in Poverty at Columbia University offered a number of observations, saying that
- It will take a number of years to arrive at measures. Utah's measures, for example, might not be as much ready-for-school measures as they might be preliteracy measures.
- The school-readiness indicators need to be mapped to their communities, as is envisioned in Vermont.
Murphey added that Vermont was pretty interested in mapping readiness indicators to communities, noting that their teacher assessments of classrooms rated about 25 percent of kindergartners unready to learn. About the same percentage of Vermont second graders are also rated as unready and a similar percentage of Vermonters drop out of high school.
The idea of validating school-readiness data by linkage with other data was raised.
Martha Moorehouse. Martha Moorehouse, Senior Research Policy Analyst in the Office of the Assistant Secretary for Planning and Evaluation at HHS, noted that a question raised repeatedly is to what end we are measuring school readiness. She said that, in the national arena, indicators are valuable even when they range far from education issues, embracing such issues as health and income level. Moorehouse cited Sam Meisels's work, which points to a need to study how a child learns over time, something that cannot be accomplished by a single test. She said that Head Start is mandated to follow children's progress and suggested that where Head Start has to go, the rest of the early childhood field follows.
Another participant suggested that a health care questionnaire for providers might ask
- Has the child had a well-child visit?
- Was a problem located?
- Was there follow-up?
Ann Segal. Ann Segal, Deputy Assistant Secretary for Policy Initiatives at HHS, made a number of points, among them that the power gained by bringing other service systems to bear for the benefit of children can be lost when school readiness is assessed only in terms of literacy skills. Making school readiness a test of one area of development moves away from comprehensive solutions.
Alaska
Alaska's speaker discussed that state's kindergarten developmental profile and the cultural competence issues associated with school readiness. The kindergarten profile, which came about with other school-accountability requirements, was part of legislation that reapportioned school funding, moving dollars from rural to urban school districts. The evaluations will rank schools at four levels: distinguished, successful, deficient, and in crisis. Three key issues for Alaskans regarding these evaluations are
- Cultural norms versus school expectations. An example of a cultural norm with implications for school performance is children's tiredness stemming from the late hours they keep during the summer in northern Alaska in anticipation of the long hours of winter darkness.
- Civil rights related to access to English. Standard English is not the vernacular of some villages, and districts object to being judged on children's command of a dialect they speak or hear less frequently.
- How the data will be used. Concerns about how the state will employ these data have been raised. For example, some in Alaska are concerned that the data will be demoralizing to districts.
- Abuse and neglect reporting issues. Teachers have asked whether, if they report a child as tired and hungry on the assessment but do not report the child to child protective services as neglected, they will be liable for discipline themselves.
Alaska authorities are addressing these issues by moving slowly and seeking ways to be flexible. An example of the state's flexibility is that it expects to allow the communities to decide how they will use the data. But Alaska is committed to its standards. State Board of Education members have said that cultural diversity is not an excuse for not meeting standards and that saying that rural children cannot meet the same standards as urban children is racist.
Questions and Discussion
Larry Aber. Aber noted the need for social science research to be more culturally fair and proposed three practical, problem-solving approaches to foster cultural sensitivity. These three approaches were:
- Sitting down and talking together to determine appropriate items
- Pooling culturally common and culturally specific measures
- Building in items that assume multi-cultural competence
Ann Segal. Ann Segal called this an "evolutionary process." She referred to the first national education goal, which became a Health and Human Services goal: early childhood programs and kids being born healthy and staying healthy.
Martha Zaslow
Zaslow summarized the session. She remarked that she heard the group backing away from or becoming cautious about measuring school readiness and working toward a set of predictors of school readiness. A second theme she heard was the desire for measures of investment in children's early years by communities.
She heard people's desire for
- A broad measure, not a narrow measure, that includes health and social-emotional components
- School readiness measures that are contextualized
- School readiness measures that are not tools to hurt or label children
- Measures that capture schools' readiness for children
- Information to be widely distributed and to go back to the early childhood community
She heard worries about
- Informant bias
- Identity of the audience
- Whether process measures can be boiled down to school readiness
- Whether it would be useful to engage in cross community discussion
- And whether states want school readiness to be a birth-to-three issue
Track 2, Session 3, Part 2 and Track 3, Session 4:
Youth Development and Survey Use
Introduction
Moderator Allen Harden of Chapin Hall opened the session by sketching the value of surveys in studying particular populations, both at a point in time and over time. He also said that surveys are relatively inexpensive, but tempered his remarks on cost by noting that good surveys require high response rates and that seeking such rates can elevate costs. He cited remarks on response rates that Ann Witte made in the earlier child care session. Harden then introduced the resource person, sociology Professor William C. McCready, who directs the survey laboratory at Northern Illinois University.
Surveys
William McCready
McCready began by taking up the issues raised by Harden: response rates and costs. He pointed out that, across the industry, response rates are declining, and suggested a number of reasons for this decline and ways in which the industry is trying to counter it. On the issue of costs, McCready noted that fielding a survey is expensive, but piggybacking questions on an existing survey can be quite economical. He described the way that county health departments in Illinois are piggybacking questions on a CDC survey for which his center is the survey contractor.
McCready sketched a number of recent innovations in the survey business. These included cognitive mapping work that tries to write questions that will be better understood by respondents. He also mentioned methods of allowing respondents greater flexibility in determining when they respond by allowing them to schedule appointments. He also noted the difficulty posed by overlapping area codes to the process of drawing telephone samples.
McCready then introduced the three speakers: Charlene Gaspar, who spoke on Hawaii's annual health survey; Jennifer Jewiss, who spoke on Vermont's use of the Youth Risk-Behavior Survey and the Search Institute's Asset Survey; and Deborah Benson, who spoke on the complexities of data gathering in New York state.
Charlene Gaspar
The Hawaii health survey collects a range of data by telephone from a random household sample under the sponsorship of the Office of Health Status and Monitoring. Other agencies save the costs of fielding their own surveys by piggybacking questions on this survey, addressing such topics as insurance, food security, and child care. Staff members from these piggybacking agencies participate in interviewer training related to the questions they generate. The survey oversamples some areas to get community-level data. Once processed and weighted, these data are made available to community health officials.
McCready. Building on Gaspar's remarks, McCready noted that one issue for participants to consider about such surveys is that the quality of the household information known by the survey respondent varies. Households whose respondents do not maintain a lot of information on the health issues examined by the survey may be penalized relative to households that do maintain documentation. McCready suggested that the importance of documentation might be incorporated into a prevention initiative.
Jennifer Jewiss
Jennifer Jewiss of Vermont began by saying that Vermont faces substantial challenges, particularly in the areas of teen suicide and alcohol-related driving fatalities. To address teen risk, Vermont uses the Youth Risk-Behavior Survey and an asset survey devised by the Search Institute. The risk-behavior survey features a set of ninety core questions on such topics as violence, injury, sexual behavior, and truancy. Adjusting these surveys to Vermont's needs, such as adding optional questions to capture attitudes thought to predate behavioral change, and analyzing the resulting data are accomplished through a collaboration among Vermont agencies.
The forty assets on which the Search Institute survey focuses include twenty deemed external to young people and twenty thought to be internal, though all relate in some way to environmental factors. Jewiss noted that the Search Institute's work is based on sizeable data collections and research. She also noted some problems with that work, such as that it over-represents white rural and suburban youth and that it is cross-sectional rather than longitudinal. Despite such limitations, Vermont finds the asset survey very useful in helping the state report positive findings, and it can help engage families and organize communities around issues. Reporting both risks and positive indicators is key to Vermont's goal of reporting both good and bad news.
As is true in Hawaii, Vermont looks for ways to turn the data back to communities. Jewiss noted a number of questions with which Vermont currently wrestles:
- Where do we go from here?
- How often should these surveys be fielded?
- How can they document the stories coming out of communities?
- How can they document the roles of community assets in the lives of individual youth in Vermont?
Deborah Benson
Through a series of interagency planning meetings focused on what state agencies need in terms of youth data, New York has developed an augmented Youth Risk-Behavior Survey to explore both risks and assets. Next year, that survey will include additional items on attitudes and perceptions regarding alcohol.
New York is also working toward coordinating and consolidating data collections from students onto a single survey, to be fielded on a single survey day, in order to reduce the burden on schools. Some New York counties also use the Search Institute asset survey, and the state is considering fielding it statewide, but the cost of this data collection would be high. Benson said that New York is looking for a quick-and-dirty approach to meet both state and community data needs.
New York is also working with counties to help them incorporate nonprofit organizations and others in planning youth development activities.
Youth Development
Martha Zaslow
After a break, Martha Zaslow, Assistant Director of Research and Senior Research Associate at Child Trends, spoke about youth development. Zaslow began by noting some of the issues that she thought very important in considering the use of surveys. These included:
- The importance of focusing on indicators of positive youth development as well as risks, something Zaslow believes is critically important to communities.
- Piggybacking on existing surveys in order to conserve resources and enhance the context of the data.
- The need to develop and implement surveys carefully, taking into account response issues, interviewer training, and pretesting.
She then began to discuss resources available to states for benchmarking. Zaslow recommended that participants obtain a copy of a national trends report on positive youth development produced by ASPE and pointed out some positive youth development indicators available from national data collections that states might use for comparison.
Discussion
Issues raised during discussion included both survey and youth development questions.
Surveys
States asked how to achieve comparability between national findings and state and local conditions, about uses of data at the local level, and about collecting data from out-of-school youth. McCready made a number of observations about data and their uses, including:
- The suggestion that focus groups can provide valuable information.
- School district administrators will find ways to manipulate data collections to make themselves look good.
- It is difficult to know how data are going to be used. For example, the biggest users of school district data on CD-ROM in Illinois are Realtors.
- Giving the data to all sides in a policy debate allows the community the chance to defend itself against people trying to help it.
- It is easier to use a list sample than to draw a sample.
- Allowing respondents to schedule an interview has been successful for NIU.
- Theatre majors make good interviewers.
Youth Development
Participants also discussed an indicator analogous to school readiness that might be applied to adolescents and what that indicator might include. Discussion touched on readiness for work and assessments of labor skills categories. Utah volunteered that it was working on measures of post-high school success but was running into tracking problems.
Some participants spoke about measuring whether schools were ready for students.
Track 3, Session 1:
Data Issues, Data Sharing, Interagency
Agreements, and Confidentiality
Moderating the session and serving as resource persons were Robert Goerge and Bong Joo Lee of Chapin Hall. Dr. Goerge opened the session by describing it as taking a broad approach to a number of important data issues: obstacles to constructing a data warehouse and possible approaches to overcoming them, data sharing within and between agencies, masking and encryption, linking with survey data, and other topics. Dr. Goerge then asked meeting participants to identify issues about which they would be interested in hearing. Informed consent, budgeting issues, confidentiality, and general administrative challenges were among the identified topics about which discussion would be welcome.
Dr. Goerge then introduced Yvonne Katz, Special Projects Director at the West Virginia Prevention Resource Center.
West Virginia
Ms. Katz began by describing her own qualifications and providing background information about West Virginia. She then sketched the way that West Virginia brought together a number of stakeholders (many associated with state agencies, but also community and political stakeholders) to develop ways to share and use data. Included in her presentation were examples of how West Virginia addressed, through partnership, issues of cooperation, community involvement and ownership, the problem of community reactions to the unflattering perspectives that some data can provide, confidentiality, and active versus passive consent.
Discussion following Ms. Katz's presentation touched on a range of issues and resulted in a number of suggestions that are detailed below.
Suggested Approaches to Confidentiality and Concerns Regarding Those Approaches
Informed Consent
An informed consent process could empower a family to approve the use and sharing of data related to them. A question was raised on whether families would understand the rights they were signing away. In legal circles, there are claims that an individual must understand what he or she is signing away for that act to be binding.
Social Security Numbers
An umbrella tracking system, based on clients' Social Security numbers, could allow researchers to identify when and in which services individuals enroll. However, the line where confidentiality begins and ends is blurred. Some states regard using Social Security numbers as identifiers as a breach of federal law, and some do not. Also, in large states like California, fraudulent Social Security numbers are easily bought and sold.
Tracking can be done through "blocking," a process based on individual characteristics and data such as name and date of birth. This works well in some states. However, in substate jurisdictions with small populations, this kind of information could lead to disclosure of an individual's identity and thus violate confidentiality.
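As an illustration of blocking, the sketch below (with hypothetical records and a deliberately crude key of first letter of surname plus date of birth) groups records so that only records sharing a key are compared in detail, sharply reducing the number of comparisons:

    from collections import defaultdict

    # Hypothetical records; a real system would hold many more fields.
    records = [
        {"id": 1, "last": "Smith", "first": "Ann",  "dob": "1992-03-14"},
        {"id": 2, "last": "Smyth", "first": "Anne", "dob": "1992-03-14"},
        {"id": 3, "last": "Jones", "first": "Bob",  "dob": "1991-07-02"},
    ]

    # Group records by a rough blocking key.
    blocks = defaultdict(list)
    for r in records:
        blocks[(r["last"][0], r["dob"])].append(r)

    # Candidate pairs come only from within a block.
    for key, group in blocks.items():
        for i in range(len(group)):
            for j in range(i + 1, len(group)):
                print("compare records", group[i]["id"], "and", group[j]["id"], "in block", key)

Note that records 1 and 2 (likely the same person, with a misspelled surname) fall into the same block and are compared, which is the point of choosing a key coarser than the full name.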
Universal Identification Number
Agencies could create a universal identifier based on encrypted names. This would allow tracking without breaching confidentiality. However, encryption requires high technological capacity and collaboration among agencies.
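One common way to build such an identifier, offered here as an illustrative sketch rather than any state's actual system, is a keyed hash of name and date of birth. Agencies sharing the secret key derive the same identifier for the same person without ever exchanging names:

    import hmac, hashlib

    # Hypothetical shared key, distributed among agencies out of band.
    SECRET_KEY = b"shared-agency-key"

    def pseudo_id(name: str, dob: str) -> str:
        """Derive a pseudonymous identifier from normalized name and date of birth."""
        cleaned = f"{name.strip().upper()}|{dob}".encode("utf-8")
        return hmac.new(SECRET_KEY, cleaned, hashlib.sha256).hexdigest()[:16]

    print(pseudo_id("Ann Smith", "1992-03-14"))

The keyed hash cannot be reversed to recover the name without the key, which is what allows tracking without breaching confidentiality; the trade-off, as noted above, is the technological capacity and interagency coordination needed to manage the key and normalize the inputs consistently.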
The general consensus among participants was that the capacity and willingness of agencies to share data need to be developed and improved before substantial progress in database construction can be made. Some participants speculated that staff at certain agencies were reluctant to share data for tracking purposes because of the increase in workload such an agreement would necessitate. (In order to have policy implications at various levels, data must be linked across agencies at the federal, state, and local levels.) Also, agencies are often the owners of their own data and arbitrarily designate with whom they will share data and for what purpose.
Many participants acknowledged and highlighted the importance of community-level data, noting that the administrative data most researchers are after are centralized at the state level; many contended that local-level data support partnerships among agencies, policymakers, and university researchers better than any other kind.
Collecting data at the community level gives them context and dimensionality, which allows for the development of useful indicators, but does not always support comparisons with other data based on large areas and does not always capture community circumstances. For example, data on a rural area may fail to highlight the cultural differences among members of that community, and such differences in culture may be particularly relevant even if racial diversity, a more commonly examined indicator of context, is lacking. Therefore, before analysis, the origin of data and the community from which they are derived should always be considered.
Planning for data collection was discussed at length. The following list exhibits the array of suggestions made by researchers from all areas of the country:
- Know the requirements of confidentiality before collecting data
- Using lawyers as consultants at the planning stage can help determine how far a study and its data collection can go without violating laws
- Examine all aspects of obtaining active or passive consent before settling on one form of obtaining consent
- When approaching agencies to ask them for data, be up front, make them believe in the cause behind your research
- Outcomes have often been used as tools to rally a community around a specific issue
- Use all the resources you can, even external resources for the purposes of data collection
Though there are many obstacles to constructing a data warehouse capable of tracking individuals across agencies, participants maintained that the solutions lie in creativity. Everybody involved can contribute and must remain flexible: statisticians must recognize the validity of qualitative analysis, and researchers must acknowledge the capacity of residents to inform and act.
Track 3, Session 2:
Data Management and Linkage Across Programs and Agencies
The moderators and resource persons for this session were Robert Goerge and Bong Joo Lee of Chapin Hall.
The common goal of developing indicators for tracking individuals' service history is approached with various strategies. Reasons for the diverse approaches include, but are not limited to, a state's population, its geography, its population of illegal immigrants, and the manpower available to pull a wide array of agencies together under the same umbrella. In this section, strategies already in practice and the states employing them are outlined; problems with creating a database are highlighted in the following section.
Vermont
David Murphey and John Ferrara spoke for Vermont. Vermont is working to pull a number of agencies together under a central data system that they call a "data hotel." The hotel's purpose is to track service use at the individual level. Currently, new services are entering the system in a staggered process; the delay in including new programs is due to a lack of manpower. At the point of entry, each client is assigned a seven-digit number, which serves as a common identifier. The number is tied to specific client characteristics, such as name or race. Clients are given the option to approve or decline the use of this number for tracking purposes. This number follows clients as they use agencies and services and minimizes unnecessary repetition in the collection of information when clients enter a new service.
Some complications have arisen in constructing a database of children using services. For instance, Vermont does not use a child's name to identify service use, but Vermont schools use names as sole identifiers. These different approaches have caused frustration for Vermont's database contractor and for stakeholders. As a minor remedy, though, all state departments are converting their tracking strategies to the common identifier number mentioned above. Uniform measures to track children should provide researchers with the necessary tools to understand service use in a way that could affect policy. However, because the databases house data only on children, the information is lost at the time of a student's school completion. Stakeholders are in the process of developing a plan that would allow examination of service use beyond school time.
California
Dr. Geraldine Oliva spoke for California. She began by noting that California houses the largest Medicaid database in the country and is in the process of cleaning other data in the system to link them with it. Researchers have initiated an attempt to form a common identifier of clients in the system to minimize duplication and to allow tracking across systems once linking occurs. Researchers have identified six criteria that the common identifier has to meet:
- Universality. The identifier could be assigned to everyone with ease.
- Durability. Once assigned, the identifier would be able to follow an individual for his or her entire service-use history.
- Non-invasiveness. Assigning the identifier should not violate confidentiality.
- Flexibility. The identifier has to be able to move through and beyond agency boundaries with ease.
- Uniqueness. There must be enough digits/letters within the identifier that it is unique to the individual.
- Financial feasibility. The whole process has to take place under tight budgets.
The context in which California officials established the identifier was unique: some counties were using fingerprints and other invasive techniques to ensure that no clients received services fraudulently. To guard against fraudulent clients while remaining non-invasive, the researchers established a virtual identifier. Based on core data elements (such as client name and Social Security number), a number is assigned at random to stand as a common patient code. This code is recognized by all services under the database umbrella and allows for cross-agency tracking. The system can also work "backwards"; that is, it can use probabilistic linking to find a client's lost or forgotten code number. This strategy also minimizes the duplication that commonly occurs in larger states. Duplication can occur when people have the same date of birth, last name, gender, and class. Duplication can also result from Social Security number fraud.
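A toy version of the "backwards" lookup might score agreement on several fields and return the stored code of the best-scoring record if it clears a threshold. The weights, fields, and records below are hypothetical, standing in for the probabilistic linking described:

    # Hypothetical field weights reflecting how discriminating each field is.
    FIELD_WEIGHTS = {"last": 4, "first": 2, "dob": 5, "gender": 1}

    roster = [
        {"code": "A1B2C3", "last": "SMITH", "first": "ANN", "dob": "1992-03-14", "gender": "F"},
        {"code": "D4E5F6", "last": "JONES", "first": "BOB", "dob": "1991-07-02", "gender": "M"},
    ]

    def find_code(query, threshold=8):
        """Return the stored code of the best-matching record, or None."""
        best, best_score = None, -1
        for rec in roster:
            score = sum(w for f, w in FIELD_WEIGHTS.items() if rec[f] == query.get(f))
            if score > best_score:
                best, best_score = rec, score
        return best["code"] if best_score >= threshold else None

    # A client who forgot her code, recorded with a variant first name.
    print(find_code({"last": "SMITH", "first": "ANNE", "dob": "1992-03-14", "gender": "F"}))

Here the match succeeds despite the first-name discrepancy because the other fields carry enough weight, which is the essential behavior of probabilistic linking.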
Georgia
Lyn Myers spoke for Georgia. Georgia maintains a database that links department of labor information with TANF data. The common identifiers used are Social Security number and name. Using Social Security numbers and names limits the analyses Georgia can perform because of confidentiality concerns: Georgia cannot contact the employers of TANF leavers, or TANF leavers themselves, to determine the quality of individuals' work or their reasons for leaving work. Georgia cannot check data quality and must make assumptions about whether data are correct based on experience; for example, implausibly high income levels are taken as evidence of incorrect data. To maximize data usage, Georgia uses Social Security number matches even when names do not match.
Discussion
In discussion that followed, participants noted that every form of creating common identifiers has the potential to breach confidentiality. Some other strategies to identify individuals were discussed, including blocking. In general, participants were wary of using Social Security numbers and names as the only identifying elements because of substantial inaccuracy. A number of participants also referred to a new identifying strategy that examines the number of people in two or more systems using only date-of-birth information. This approach is event-centered. It does not allow for tracking, but does allow for confidential analyses of datasets and comparison of numbers across systems.
Further discussion focused on the accuracy of data matching. When matching, there are two potential forms of error. One is the false-match rate: the rate at which matches are made, but to the wrong people. The other is the missed-match rate: the rate at which true matches are never made. Reducing one type of error tends to increase the other, and some error of each type is inevitable; researchers must therefore pay attention to why matching problems are occurring and adjust their matching techniques according to which type of error is more acceptable for their particular study.
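Given a hand-verified set of true matches, both error rates are straightforward to compute. The sketch below uses hypothetical pairs of record IDs from two systems:

    # Hypothetical gold standard of verified true pairs and algorithm output.
    true_pairs = {(1, 101), (2, 102), (3, 103)}
    found_pairs = {(1, 101), (2, 109), (3, 103)}   # (2, 109) is a wrong match

    false_matches = found_pairs - true_pairs       # matched, but to the wrong person
    missed_matches = true_pairs - found_pairs      # true matches never made

    print(f"false-match rate:  {len(false_matches) / len(found_pairs):.2f}")
    print(f"missed-match rate: {len(missed_matches) / len(true_pairs):.2f}")

Tightening the matching threshold would shrink the false-match set at the cost of growing the missed-match set, which is the trade-off discussed above.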
In summary, some important considerations to establishing databases were highlighted:
- States must assign common identifiers at uniform points in clients' lifetimes (e.g., birth, immunizations, etc.).
- It is important to distinguish between household and family data. An individual can live in multiple households and with multiple families.
- Relationship data, although tough to maintain, are the most consistent data one can keep as far as a household is concerned: a mother will always be a mother; foster parents are foster parents until their role is terminated.
- Case definition is not uniform across agencies.
Track 3, Session 3:
Data Issues, Data Analysis
Chapin Hall's Fred Wulczyn was both moderator and resource person for this session. He focused on five questions central to all types of data analysis:
- How many?
- Who are they?
- How long are they in 'X'?
- What did they get while there?
- What was the outcome?
For the purposes of this session, the child welfare/foster care population was used as the example. At the outset of analysis, researchers must develop a solid "theory of change." In this case, data were gathered to inform a theory of foster care population increase.
How Many?
Wulczyn said that researchers sometimes exhibit a tendency to try to explain trends rather than examine them, which muddies the picture of what is really happening. For instance, meeting participants identified many reasons for the foster care population increase in New York; most of the suggested reasons were entry-focused (AIDS, the crack epidemic, etc.). However, the basic reason for the population increase was that admissions outnumbered discharges. The question then becomes why admissions were increasing faster than discharges. To move beyond the raw numbers and get a better picture of the foster care situation, data analysis of this sort requires a population model that articulates the relationship among population size, admission and discharge rates, and length of stay.
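In symbols (notation chosen here for illustration, not drawn from the session), the model can be stated simply, where P_t is the point-in-time population, A_t admissions, and D_t discharges in year t:

    P_{t+1} = P_t + A_t - D_t

The population grows whenever A_t exceeds D_t. As a further aid (this is Little's law, offered here as a supplement rather than something cited in the session), in a steady state the point-in-time population approximately equals annual admissions times the average length of stay in years: P \approx A \cdot L.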
Who Are They?
The distinction between positive and negative outcomes (who stays in the program versus who is discharged) must be recognized in order to comprehend the causes of change. Policy levers within programs are different for different people. For instance, infants admitted to foster care may be discharged as five-year-olds, while a child admitted as an eight-year-old may be discharged as a sixteen-year-old. The age differences may be a factor in length of stay. Researchers must examine who is staying and who is leaving in the context of individual characteristics and system dynamics.
How Long Do They Stay?
Examining how long children stay in foster care in relation to who they are and to system dynamics requires multivariate analyses. Drastic changes in event structure have long-lasting effects, or effects that are realized long after they occur. For instance, in the case of foster care, drastic population shifts, known as booms or lows, continue to affect population size years after they occur. If a great number of children are admitted one year, many of those same children will be discharged years later, decreasing population size. Also, a boom in admissions without a coinciding boom in discharges, even when average length of stay remains stable, results in a rise in population for a long period of time. In light of this, Fred suggested that a substantial backlog of historical data is necessary to examine the real reasons behind population change. A drastic change could also be the result of an unclear definition of service eligibility or discharge eligibility. Just as children have different experiences, so do the services they use; therefore, definitional refinement should be context-specific.
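A minimal simulation (hypothetical numbers, with every child staying exactly three years) makes the lag concrete: a single boom year keeps the population elevated for the full length of stay before it falls back:

    YEARS = 10
    STAY = 3                      # fixed length of stay, in years
    admissions = [100] * YEARS    # baseline admissions each year
    admissions[2] = 200           # a one-year admission boom in year 2

    cohorts = []                  # admission cohorts still in care: (entry_year, size)
    for year in range(YEARS):
        cohorts.append((year, admissions[year]))
        # discharge every cohort that has completed its stay
        cohorts = [(entry, n) for entry, n in cohorts if year - entry < STAY]
        population = sum(n for _, n in cohorts)
        print(f"year {year}: admitted {admissions[year]:>3}, in care {population}")

Running this shows the in-care population jumping from 300 to 400 in the boom year and staying at 400 for three years, even though admissions returned to normal immediately, before dropping back once the boom cohort is discharged.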
This sort of analysis reveals the importance of examining the event in question (population fluctuation), as well as what makes children vulnerable. In order to induce change, researchers should pay attention to the mechanics of services as well as the children they serve.
What They Get
Of course, programs, even in the same sector, offer varying services. This is why researchers need to develop multi-variate indicators. With multi-variate indicators, researchers can better identify causes of event change and thus avoid making ill-informed decisions regarding adjustment.
Outcomes
The process of data analysis is aimed at improving outcomes for clients involved in the program being studied. Wulczyn pointed out that, in order to achieve the most positive outcomes, policy decisions based on data analysis need to be made with the most dominant factors in mind. For instance, in the case of foster care in New York, the government assumed that a rise in population was due to a rise in the number of children eligible for services and gave additional beds to a foster care provider. However, in order to maximize funding, foster care providers often adjust their definition of service eligibility to fill as many beds as they have; in this case, the decision to increase the number of beds at a facility affected children adversely: admissions rose because of bed numbers, not need. In light of this example, Fred suggested that policy decisions be made only after a sufficient amount of data has been collected and outcomes considered.
Discussion
To validate the use of an indicator, Fred suggested approaching it from three angles: entry cohort, exit cohort, and point-in-time cohort. Ideally, these three measures will rise and fall in concert; if they do not, the service warrants further examination. (Entry cohorts need no unraveling, whereas exit and point-in-time references do.) Further, in order for indicators to be accepted by the general public as valid elements of policy change, researchers must show results in their given area.
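As an illustration of the three angles, the sketch below computes each view from hypothetical spell records (entry year and exit year, with None marking children still in care):

    # Hypothetical foster care spells: (entry_year, exit_year or None).
    spells = [(1994, 1997), (1995, None), (1995, 1996), (1996, None), (1996, 1999)]

    def entry_cohort(year):
        """All spells that began in the given year."""
        return [s for s in spells if s[0] == year]

    def exit_cohort(year):
        """All spells that ended in the given year."""
        return [s for s in spells if s[1] == year]

    def point_in_time(year):
        """All spells open during the given year."""
        return [s for s in spells if s[0] <= year and (s[1] is None or s[1] > year)]

    print("entered in 1995:", len(entry_cohort(1995)))
    print("exited in 1996: ", len(exit_cohort(1996)))
    print("in care in 1996:", len(point_in_time(1996)))

Tracking all three counts over time, rather than any one alone, is what lets an analyst see whether an indicator is moving because of entries, exits, or accumulated length of stay.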
Briefly, participants discussed channels of communication that can facilitate policy change. Using the media to communicate findings was suggested, as was presenting findings directly to programs to help demonstrate to them where and how to improve services. Because the issues and methodology in projects are often complex, it is important to take time to explain the process of study thoroughly to stakeholders; if providers do not understand findings or have questions about the methodology, they may be less inclined to accept their shortcomings, and ultimately clients will suffer. In short, the transfer of valid and well-explained information shifts power to the client.
In closing, Wulczyn noted that data collection and analysis strategies should be uniform regardless of the event being examined. Although he used the foster care population as the central example of his presentation, the concepts can be applied to a number of different topics. Wulczyn told participants that looking at events in the most basic terms often provides the richest and most meaningful indicator formation. In general, he suggested collecting different kinds of data and linking them to specific facets of the event being studied.