A Summary of the Meeting of May 30-June 1, 2001

This technical assistance workshop was the fourth in a series of technical assistance workshops hosted by the Chapin Hall Center for Children for participating states in the Child Indicators Initiative. The Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, sponsors the Advancing States Child Indicator Initiatives Project. Martha Moorehouse is the Project Officer. The Chapin Hall Center for Children at the University of Chicago provides technical assistance to the Indicator Project states and prepared this summary. Harold Richman of Chapin Hall is the Principal Investigator and Mairéad Reidy is the Project Director.

Chapin Hall Center for Children
1313 East 60th Street
Chicago, Illinois 60637

A Chapin Hall Working Paper, CS-76


Overview: The technical assistance workshop, held at Chapin Hall from May 30-June 1, 2001, was the fourth in a series of technical assistance workshops hosted by the Chapin Hall Center for Children for participating states in the Child Indicators Initiative. The workshops encouraged peer leadership and collaboration among states, and provided states with an opportunity to work with and learn from one another on areas of common interest. While the grant period had ended, the aim of the meeting was to help sustain the work of states by providing an opportunity to discuss successes and new challenges and also to have a chance to reflect on the future use of indicators. As in previous meetings, Chapin Hall discussed technical assistance needs extensively with participating states and worked collaboratively with them to develop the agenda for this meeting. State participants reported on progress, shared information on successes and brainstormed around emerging and ongoing challenges. Key experts in the field of child indicators were invited to share their expertise and brainstorm with participants as they discussed challenges and successes. The May 2001 workshops provided participants with practical guidance in areas including: (1) The factors and contexts that influence how indicators get used in states; and (2) The role of indicators as a reference tool in policy planning, development and evaluation. The meeting included the following sessions and the minutes of these sessions follow:

The factors and contexts that influence how indicators get used in states - with a reflection on how to survive changes in leadership.

The session coordinator was Harold Richman, Chapin Hall, and the guest speaker was Cornelius D. Hogan, Senior Consultant, Annie E. Casey Foundation, Baltimore.

Growing an outcomes based culture within communities.

The session coordinator was Ada Skyles, Chapin Hall, and the presenter was David Murphey, Senior Policy Analyst, Vermont Agency of Human Services.

The role of indicators as a reference tool

This session looked at indicators' utility in policy planning, development, and evaluation, emphasizing how indicators have played into past legislation or executive change and the role they might play in future change. The session coordinator was Fred Wulczyn of Chapin Hall and the guest speakers were Christine Ferguson, Director of Rhode Island Department of Human Services, and James Dimas, Senior Associate, Casey Strategic Consulting Group, Annie E. Casey Foundation, Baltimore. Elizabeth Burke Bryant of Rhode Island KidsCount also made a formal presentation.

Use of the Census 2000 for indicators at the state and local level.

The session coordinator was Allen Harden, Chapin Hall, and the guest speaker was Cynthia Taeuber, Program Policy Advisor, University of Baltimore and the Census Bureau.

Legal and ethical issues in data linking.

The session coordinator and presenter was Robert Goerge, Chapin Hall.

Generating new knowledge from linked administrative data.

The session coordinator and presenter was Bong Joo Lee, Chapin Hall.

How to use the Web to collect and distribute indicators.

The session coordinator was Fred Wulczyn, Chapin Hall, and the guest speaker was Dean Duncan, Clinical Assistant Professor, University of North Carolina at Chapel Hill.

Update on school readiness indicators and the use of indicators in early childhood initiatives.

The session coordinator was Mairéad Reidy, Chapin Hall. The guest speaker was John Love, Senior Fellow, Mathematica Policy Research. Cathie Walsh and Elizabeth Burke Bryant, Rhode Island KidsCount, made formal presentations.

Use of indicators to track welfare reform.

The session coordinator was Mairéad Reidy, Chapin Hall. The guest speaker was Larry Aber, Director, School of Public Health, National Center for Children in Poverty. A formal presentation was made by Martha Moorehouse, Director, Division of Children and Youth Policy, Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services.

How to train community partners on how to use data, and how to identify and deal with pitfalls.

The session coordinator was Bong Joo Lee, Chapin Hall. David Murphey, Senior Policy Analyst, Vermont Agency of Human Services, made a formal presentation.

International indicators update.

Session coordinators and presenters were Robert Goerge and Mairéad Reidy, Chapin Hall, and Larry Aber, School of Public Health, National Center for Children in Poverty.


Welcome, Opening Remarks, and Introduction

Harold Richman and Mairead Reidy of Chapin Hall welcomed the participants and introduced Martha Moorehouse of the U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation.

Martha Moorehouse

Moorehouse began by explaining why it was important to meet, even though the grant period had ended. She said in part:

We thought a lot about whether it made sense to do a meeting after the project was over. We went into the project believing this was about working with states where they were and helping them move along, individually and together, in work with child indicators. Fundamentally, this project is part of government working both at the national level and at the state level in sustaining work on indicators. We decided that if this is about sustaining work on indicators, then we would like to see you after the grants, with the idea that this was part of your sustained work.

Moorehouse said that through the input of the states, the resource people attending this meeting, and the staff at Chapin Hall, the meeting was developed as an effort not just to show off what had been accomplished, but to move the work forward. She and the meeting planners had assessed the remaining issues and work that was still needed, and identified the assistance and discussion that could be helpful. Moorehouse noted that Ann Segal, formerly of ASPE and now of the Packard Foundation, had been a big part of the effort and had joined the meeting. She also said that ASPE will continue, through its own and interagency activities, to make available a variety of federal statistics on child well-being.

Moorehouse said that the Indicators project came out of work that ASPE launched as welfare reform was occurring and was part of a larger policy interest in examining effects of welfare reform. The effort began by working with states that were conducting specific welfare evaluations that included outcomes for children. Some of those early studies have not been published. That work reinforced ASPE's feeling that the opportunities to conduct randomized experiments to illuminate policy issues involving children are few. This project was aimed at working with states to help them develop tools to monitor child well-being over time, and to do this work in partnership with state government and with a strong focus on building state capacity.

Mairéad Reidy

Reidy said the aim of the meeting was to help sustain the work by providing an opportunity to discuss successes and new challenges and also to have a chance to reflect on the future use of indicators. She thanked those who had helped shape the agenda and welcomed invited guests.

The Factors and Contexts that Influence How Indicators Get Used in States, With a Reflection on How to Survive Changes in Leadership

The first speaker was Cornelius D. Hogan, Senior Consultant with the Annie E. Casey Foundation of Baltimore and former Secretary of the Vermont Agency of Human Services. He was introduced by Harold Richman of Chapin Hall.

Hogan was followed by Gwendoline Angalet of Delaware, Christine Johnson of Florida, and Michael Lahti of Maine. Most sessions from these meetings are summarized. In this case and a few others, we offer a transcript of remarks by the speakers.

Con Hogan

When we think of indicators and outcomes, we have to think about neighborhoods. We have to remember we are working on the behalf of neighborhoods and communities. That is what this work is all about. It is at the neighborhood level, at the community level that indicators have a context. And indicators have to have a context. An indicator without a context is like a great meal without ambiance. It is like a politician without an audience or a sailboat without wind. It is the ambiance, the audience, the wind that brings them to life. Context brings an indicator to life.

Indicators are not worth much unless they are put in a strong context. Another word for that context is a common purpose. Without a strong articulated common purpose, indicators don't have resonance. In fact, it is very hard to talk about indicators without talking about the common purpose. One way to guarantee a powerful context is to formulate indicators that describe cross-programmatic, cross-organizational, cross-sector outcomes. Similarly, contexts that are cross-programmatic, cross-organizational, cross-sector will bring indicators to life in a powerful way.

When we discuss indicators it is important that we not become involved in the discussion of specific programs and indicators for the success of that program. The idea of measuring accountability for a specific program mires us in a discussion of the details of those specific programs--often this is a negative discussion of specific programs--and it doesn't allow us to discuss indicators at a higher level. It also prevents us from placing those indicators in context.

To have a broad impact, the outcomes must resonate. There are a number of ways to obtain outcome resonance:

  1. Use short, simple, declarative sentences--a noun, a verb, and an object. Language is essential for developing a context for indicators. For example, the following indicator is short and sweet: All children are ready for school.
  2. Use language that engages people at an emotional level. Language that brings forth emotions is language that connects with people. Example: All babies are born healthy. This is an outcome that tugs at the heart. Everybody can relate to that. Indicators are more powerful when they are stated in the context of a strong, declarative, emotional, from-the-heart statement.
  3. Use developmental language, where possible. Indicators become more powerful when the language you use to describe them connects with the stories of all our lives. They compel attention, touch people in the heart, and bring more technical indicators to life. If your language relates to the life cycle of human beings--from birth to death--you give your indicators and outcomes a rich context.
  4. Don't threaten other models or other people's work. All professions and disciplines have important contributions to make to this work. Don't write or speak in a way that threatens other people, disciplines, or other specialties. There has to be room in your language for everybody. It is important that the language you use doesn't exclude.
  5. Don't slip into jargon or the specific language of a discipline. Don't clutter up your language with obscure language and jargon. I know this is part of our make up, but this kind of language is foreign to the rest of the world. No language about Medicaid, Title XX, PRWORA (Personal Responsibility and Work Opportunity Reconciliation Act).
  6. Use words that any of us can understand. This is essential if communities are to become key players in this work.

Here are some examples of indicators and outcomes written with the above suggestions in mind:

  • Pregnant women and newborns thrive
  • Infants and children thrive
  • Children are ready for school
  • Children succeed in school
  • Children live in caring and supportive families
  • Youth choose healthy behaviors
  • Youth become successful adults
  • People live in safe and supporting communities
  • Elders and people with disabilities are resources in their communities and live with dignity and independence in settings they prefer

These outcomes are stated in positive terms. If the outcomes are positive statements, they motivate. Whenever you can restate a negative outcome in positive terms, you have unlocked another level of motivation. Here are some examples of powerful outcome statements that have made a difference:

  • Win the war
  • All children have a human relationship they can depend on
  • All babies are born healthy
  • No taxation without representation
  • Better lives through living and learning
  • Covering kids
  • Youth choose healthy behaviors
  • Put a man on the moon
  • All children can
  • Caring communities
  • End welfare as we know it

Win the war. One of the most powerful outcome statements this country has ever been touched by was "win the war." I am old enough to remember my mom collecting tinfoil during the war, collecting her nylons. Everybody knew they had something to contribute. They weren't told what to do.

Here is a case study from the Trondheim region of Norway, which has about 120,000 people, about half the population of Vermont. Their bureaucracies are fascinating. They are much more vertical than the ones here; there are more steps from top to bottom. These bureaucracies are also much more specialized. They take what we would treat as a single child welfare function and break it up into three bureaucracies. By the time you get to the top of their bureaucracies you really can't see what the other guys are doing.

What is the common purpose they could organize themselves around, no matter what bureaucracy they were part of, no matter what organization they belong to? This is the language they came up with: "All children have a human relationship they can depend on." Think about the power of this indicator. The stronger, more powerful, more understandable the common purpose is, the more the context brings indicators to life.

Powerful outcomes and their indicators:

  • Are clear and declarative statements of fundamental well-being.
  • Are bigger than any program or organization.
  • Connect to us emotionally. Ideally they provide a developmental view of our lives.
  • Are stated in positive terms where possible.
  • Are measurable, by and large.
  • Are presented over time.
  • Extend beyond political cycles.
  • Are presented nationally, statewide, and locally.
  • Are interactive.
  • Are accumulative over time.

Preserve the culture. This is a raging issue for native Hawaiians and what role they are going to play in the future. Preserve the culture. That language meant something to them. They understand it. They can measure it. It is a very powerful idea.

As we listen to the reactions in the different states we ought to have a red flag that goes up every time we inadvertently bring up an indicator that relates to a specific program. If you really want to keep this powerful, make sure you talk about cross-sector indicators. Indicators that touch more than one program, more than one organizational unit, more than one personality, more than one leader, have much more resonance.

Another thing I've seen states do that you should watch out for is put together indicators for only two or three years. You are laying a trap for yourself if you put together indicators for only two or three years. You really have to collect indicator data for a decade or so. A few years of indicators can get connected to a single administration. You really should be looking at this stuff over the course of a decade to really cross sectors and work out all of the political things. Otherwise, it can really grab you by the neck.

If indicators are used as part of a positive challenge, they communicate volumes. Example: What can you do in your organization to reduce teen pregnancy rates? Being able to ask a question and get answers is part of what makes the indicator process an actual process. You have to be able to convert indicators into a personal and interpersonal challenge. When I was Secretary of Vermont's Agency of Human Services I worked with 12 different departments. I always asked the commissioner of each department the same evaluation question every year: What did you do, or can you do, to improve the well-being of the people we serve as described by our agreed upon outcomes? You would be amazed, when you personalize it that way, how the indicators take on importance and become key to the work we all do.

When you have your indicator data, map it out. Part of the indicator challenge is not just talking about indicators, but mapping your data out, putting it into graphs, or some other format, that can easily be used by the press to get word of your program out. It is essential that you find a format that makes it easy for the press to "steal" and place on the news.

Don't focus on just one indicator. There are indicator herds. In conceptualizing indicators, there are always leading indicators which the other indicators follow. If you can find one or two that are going in the right direction, you can be confident that four or five others are going in the right direction too. It is all connected. For example, examining lowering infant mortality leads to considering lowering smoking rates for pregnant women. That in turn leads to thinking about raising education levels, because women who stay in school are less likely to become pregnant as teens, which leads to lower teen pregnancy rates. Lowering teen pregnancy rates leads us to lower child abuse rates. And so it goes.

Everything is connected. Anywhere leads to everywhere. The more you think about your indicators in a connected way, the more you get an integrated view of your work. And the more integrated your view is, the more you can see how it all fits in context, the more your work will resonate with others, and more satisfying and effective your work becomes.

Another way to work with your indicators is to connect changes in indicators with costs. If you do this, you may be able to open up a whole new realm of discussions with a range of people you never thought possible before. You may also be able to bring new arguments to your political process.

If you are able to compute the correlation between indicators and cost you will be able to build relationships with businesses and businesspeople. You will, in a sense, be meeting them on their turf because this is how they think: in profits and loss, benefits and costs. Businesspeople may not know the technical side of teen pregnancies, for example, but they do know that if you have fewer teen pregnancies it costs the state less. Having these numbers available can be very powerful politically. Once again, map these numbers, put them into a graph that states your message simply and strongly. If you describe the movement of your indicators in this way, you have provided a context that will speak to more people and will inspire a sense of pride in your accomplishments.

To survive leadership changes you have to get to the point where your indicators and outcomes are embedded in the culture. One way to do this is to connect the concepts and language of your initiatives to those of other sectors and initiatives. You can, for example, connect with businesspeople in your community by using a modified balance sheet to show the equivalence of positive equity and positive well-being for children, families, and communities. One way to look at indicators is as something akin to a business balance sheet. Our equivalent to the items on a balance sheet is the well-being of our people. The classical balance sheet has been around for a couple thousand years. It can easily be adapted for your purposes. How do the ideas of a balance sheet map across to key indicators? If you have improved indicators, that's a positive cash flow. Improved indicators are short-term assets. Long-term assets include outcomes and indicator structures.

Businesspeople understand balance sheets. They especially appreciate methods of measuring intangible assets. For a business, intangible assets include good will, the capacity and experience of the workforce, customer value, and leadership. Our intangible assets include common purpose, political credibility, community engagement, community assets, and leadership. If you put your indicators and outcomes into this context you will see businesspeople light up. They will trust you and believe you know what you are doing. If you can connect your indicators language to other sectors, you know you have got something that is universal.

I'll end with this fundamental thought about outcomes. The more complicated the organization, the more dangerous the situation, the longer the time line, the larger the area, the more people involved, the more complex the information, the more intense the politics, the more compelling the common purpose must be.

Gwendoline B. Angalet, Acting Director of Child Mental Health Services, Department of Services for Children, Youth, and Their Families, State of Delaware

What I'd like to talk to you about for the next few minutes is the practical side of transition and the practice of trying to survive changes in leadership within a state as it relates to social indicators of children's health and well being. In 1998, Delaware published its first Families Count/Kids Count fact books through collaboration with the Kids Count Project administered by the University of Delaware. We have a wonderful relationship with our Kids Count project in Delaware. I think it will be the entity that helps us maintain the momentum as we go forward promoting the use of indicators in Delaware. They have been our strong partner in this effort over the past couple of years. The indicators we use are broad-based indicators that cut across organizational lines, sector lines, geographic lines, that try to paint a complete picture of what is happening with children and families in the context of their communities. What is important to remember here is that we started in 1998.

With the support of the federal grant from U.S. DHHS on using social indicators to promote children's health and well-being, we broadened the engagement with communities. To this end, we developed a video. We have just completed the video and are now distributing it across the state for community groups to use. We have a guide that goes along with the video about ways you can get involved with your community to advocate on behalf of children--one child at a time, or in a group setting. We also have an evaluation form for those people who want to give us feedback about how the video is being received and some action steps they've taken as a result. We'll be able to compile this information and use it to inform our process as we go forward.

These products were done during the administration of our former governor, Governor Thomas Carper. He was a strong believer in what gets measured, gets done. That tone emanated from the top and through our agencies in state government. It resonated with our partner, the University of Delaware, and through our community. He created the Family Services Cabinet Council, which was an important vehicle for bringing together, twice a month, the state agencies concerned with children and family issues.

The Governor personally chaired the council. We had an active agenda and made much progress on children's issues during those 8 years.

What's Happening Now in Transition from Governor Carper to Governor Minner?

One of the things I am finding out--and I've been through four transitions (I've been in state government a long time)--is that this transition is very different from any that we have gone through. You don't think when you transition from one governor to another from the same party that you are going to go through a complete turnover in the leadership of organizations. We really haven't in the past. But we did this time. That effect is being felt as we try to maintain momentum around those things we thought were working in the prior administration.

Transitions take a long time. They don't happen in just the first 100 days. Your secretaries get appointed, then they appoint division directors or agency heads, who appoint others, and you have a series of changes, a trickle-down effect. So it takes a long time to put it all in place. The new team's learning curve takes a while to climb. It's funny, I understood this before, but I understand it now in a very practical way, as we try to maintain continuity in our work on the indicators.

The new administration is really trying to figure out very basic things like making sure all our computers work, making sure that day-to-day operations are going on. At the same time they are trying to appoint their agency heads. In this case, our Governor is very organized. She had a list of everything she wanted to get done. She hit the ground running.

Her list of things to get done was very much related to what her campaign platform had been. She didn't hear a lot about social indicators of children's health and well-being on the campaign trail. So that was not at the top of her list of items she wanted to take action on. Then, the 2000 fact book for Families Count/Kids Count came out in January. We invited the Governor to come for the initial press conference.

Some of the indicators in the book were not going in the right direction. Her new team didn't feel comfortable with her coming because they thought she might be put on the spot and they weren't prepared to deal quickly with a response.

We thought we'd done our homework. We met with key advisors before the new team took office. We had the support of new cabinet members. We thought we were doing all the right things. But we came to realize we weren't. We thought what the governor could do is say she was going to continue the work of the cabinet council and give them the charge to work on the indicators that were not going in the right direction and come up with some comprehensive strategies that could address those. However, we pushed a little too hard and as a result, were unable to make much progress.

By the time Kids Count was having its first annual conference in March--and we were actively participating in that--the Governor felt a little more comfortable. She had a little more time to look at the information. Her immediate staff felt more comfortable and that was important. So, she was able to come. She talked about supporting, in concept, the indicators and how she wanted to continue to work around collaboration. She announced that she would appoint an interagency group for bringing the different sectors together to do this.

What Did We Learn From This?

One of the things that really hit home with me was the idea that, if I had it to do over again, I would have produced the video within the first few months of our grant project and gotten it out there. And we would have put the community engagement piece into high gear quickly, really quickly. Because if the governor had heard, while she was campaigning last fall, that citizens were concerned about this indicator or that indicator, it would have resonated with her and her team. I think the other thing that we would have done is spend more time with the governor's team--even during those campaign days--and built those relationships. We would have had an easier time with the top level, so that when we invited the governor to the press conference in January, they would have said, "Oh, it's okay. We know this is what is happening here and we can work through it."

I think the other thing we would have done--and we are trying to do this now--is talk to the legislators about this book. I think when we start to break the data down and make it more personal for individual legislators, personalize it for their individual senatorial or representative districts, then it is going to be more meaningful for them. And they're not going to think about this as a big book with lots of data. They're going to start to think about this in more real terms, in terms of the impact this is going to have on their constituents. So the degree to which we can personalize this information for the legislators, I think that's going to help us reach out.

Another thing we learned from this is that, at the 35,000-foot level, everyone says the indicators are good. But when it comes time to translate the indicators into very practical usage, whether it's advocacy in the community or whether it's policy making and allocation of resources at a state agency, that's a whole different ballgame. I think we have to work to help people make those translations. For example, consider children's readiness for school, and all the indicators that fit into it. We have to help people break this down into a usable factoid, that they can make sense of and really act on.

So where do we go from here? University of Delaware is the entity that is helping us take the important next steps for maintaining the use of the Kids Count, Families Count indicators and the use of the video. We're going to promote the video more. We're going to use these as tools to strengthen our relationships. And one of the things I am very much assured by is that the money that's in the state budget to support Kids Count, which includes production of the Families Count indicators, is still there. Nobody has redlined it. That is a very good start for us because we are really crunching for resources in our state.

The last thing we are going to have to do is continue to have our partners rally around a common purpose. My new boss is a planner and a marketer. What she came up with--which is really helping us--is "Think of the child first." That's become our motto. But I think it is the kind of motto that resonates with a lot of people in terms of looking for that central purpose, that common ground. So, as we go forward, we are going to think of the child first and hopefully that will give us that rallying flag for us to keep things moving forward to survive this change of leadership.

Comments on Factors and Contexts That Influence how Indicators Get Used -- With Reflection on Surviving Changes in Leadership

Christine Johnson, Florida Education Policy Studies, Learning Systems Institute, Florida State University

Political context: Changes in power and leadership affect commitment to indicators projects, which by their nature, need to be long-term:

Changes in administration and political party control affect public policy agendas and priorities. In general, newly elected officials want to "make their own mark." This is particularly true when there is a change in political parties. Projects started under a prior administration may not be continued at all or may change dramatically in their form or emphasis. Also, differences emerge in the perception of government's role not only in solving problems, but also in how these problems are measured. In Florida, these issues were manifested as follows:

  1. A shift from Democratic to Republican control in both the Governor's Office and the state legislature since 1994.
  2. With that change, more emphasis is being placed on limiting the size and role of state government and devolving responsibility and control to the local level. Also, education, workforce development and growth management are higher policy/funding priorities at this point; human/social services have relatively less emphasis than in the past.
  3. The effects of these changes are: (a) less interest in funding or sustaining indicator initiatives (at least two major state indicator initiatives have died since 1998); (b) more interest in tracking indicators in targeted policy areas, such as education, where the state has a key role in holding local school districts accountable; (c) more interest in "getting things done" through a corporate management style (less emphasis on public engagement and participation, which by its nature is more time-consuming and process-oriented). This translates into collecting information that will be of immediate use to policymakers, as opposed to keeping the public engaged and informed.

Term limits: In Florida, over half of state House and Senate members were new this year because of the beginning of term limits. As a result, there is less potential for continuing prior commitments and initiatives. New legislators don't necessarily understand the history of why indicator or other initiatives were developed - or buy into the idea. Also, they bring different ideas about how to set policy and run government.

Organizational Structure and Change

  • The State of Florida does not have a Children's Cabinet or similar structure to organize or support a state-initiated community indicators project focusing on children. Functions (e.g., education, child welfare, health, child support) are "siloed" in different agencies, which, for the most part, stay within their own boundaries as far as indicators and performance measures are concerned. Also, the state's concern is primarily with the use of measures for accountability--a more intermediate level of measurement--rather than with broad measures of well-being. Some exceptions are measures related to educational achievement and maternal/child health, because public schools and public health, by their missions, focus on the entire population of children. However, the state's focus on accountability has, to a large extent, put indicators and measurement in general to punitive use. It immediately conjures up the question "Who do we blame?" rather than "What will it take at the state and local level to make trends move in the right direction?"
  • Major organizational changes (e.g., in workforce development) also can affect indicator initiatives--in positive or negative ways. For example, Florida has or is currently undergoing major changes in the organizational structure for education, workforce development, school readiness, and child/family services. These changes usually mean changes in what gets measured and how--which can interfere with comparisons across time. However, these changes also present opportunities for improving measures, integrating information systems and making measures more meaningful.

Demographic context: Demographics may play a part in the type of indicator systems developed and how public buy-in is achieved. Florida has an ethnically diverse population and a large percentage of seniors (age 65 and older) relative to the rest of the country. Most indicator systems developed at the state and local level do not focus on specific subpopulations (e.g., children only, African Americans only). They include indicators relevant to these subgroups - but embedded in a broader range of indicators geared toward the general population. Within this framework, subpopulations are addressed through specific indicators (e.g., low birthweight) or breakdowns by race, age, region or other factor.

Local context: Large urban areas have the capacity to start and continue indicator projects on their own resources--without depending on state support. This has certainly been true in Jacksonville, Palm Beach, Orlando, Fort Lauderdale and other metropolitan areas of Florida. In the long term, these projects may be more stable because of their independence and closeness to the issues of greatest importance at the local level. On the other hand, rural communities, with fewer resources, tend to be left out of the picture.

Leadership for indicator projects, at least in Florida, appears to be more stable at the local than the state level - although the same political uncertainties apply. The Jacksonville Community Council, Inc. indicators project, for example, has experienced substantial fluctuations in annual funding depending on support coming from the mayor's office or the chairman of the local chamber of commerce. What has helped to sustain this 15-year initiative through these funding ups and downs is (1) the participation and buy-in at the community level which was strong from the start, (2) the use of indicators to do studies of local issues and problems, (3) the public's trust in the quality of data and (4) the public's continuing expectation of data availability.

Public and institutional reaction: Reaction seems to be generally positive to indicator systems that seek to "generally inform"--without consequences. Problems usually arise when these systems actually get used, for example, to cut funding for programs or to grade schools "A" to "F" on student achievement. When decisions "hit close to home," reactions are likely to be most intense. For example, based on anecdotal evidence, there seems to be a backlash among parents of high-achieving students in public schools in Florida, who are concerned about how their children will be affected by what they perceive to be "high stakes" testing. Likewise, schools, agencies and other institutions can get defensive about what indicators say--particularly if the public or the media use the data to criticize them. Developing buy-in and creating a climate of cooperation can mitigate these effects, but both are difficult to achieve, particularly when public opinion is divided or state-local relationships are mistrustful.

Measurement context: The range and availability of data are continuing to increase at the state and local level--a real plus for indicator projects. Data collection itself is improving, as is the technology for analyzing and distributing information to policymakers and other consumers. However, we still haven't mastered the art of interpreting this information meaningfully in all of its complexity. For example, people still tend to focus on individual measures without understanding problems on a broader, more complex level - a difficult task given the quantity of information out there and the interconnections of programs and outcomes. How do we keep it simple enough to understand, yet avoid misleading policymakers and the public into simplistic solutions?

Policymakers typically want to know more than broad outcomes. They want diagnostic information so they know "what to fix." The challenge is communicating this information - including cause-and-effect relationships - in a clear, concise, timely and useful way. Indicator systems are useful signaling and analysis tools, but at least for policymakers, there is demand for broader synthesis of information from a variety of sources - indicators, research studies, surveys, etc. - to guide decision-making on a host of issues within a relatively short time frame, such as a legislative session. Policymakers are certainly not the only major users of indicator data - but they do have an important influence on state and local capacity to solve problems and improve the well-being of the population.

Michel Lahti, Manager of Evaluation Services, Institute for Public Sector Innovation, Edmund S. Muskie School of Public Service, University of Southern Maine

Our work in Maine has resulted in a project called Maine Marks. Our first publication came out in February. I want to give you a little context and focus on what we have incorporated into the process. We have not experienced a change in governorship during the period of this process, and we still have three more years with the current governor, who is a big supporter of this. I want to talk about how we've tried to create a structure for this to continue, regardless of what happens.

This has to be bigger than any one person or program. I often think what we are doing is telling a story. I keep thinking about people sitting around a fire at the gates of their community.

You walk up to the first ones and they tell you what is happening in the community. The stories we want to hear become important, not necessarily who the king is.

The first publication of the Maine Marks project came out in February 2001. It took us about two years to develop. This grant was a huge influence on that, as was Kids Count--they are a very significant partner.

What happened in Maine is that a governor's Children's Cabinet was established five years ago, not in statute but through a Danforth Foundation grant and the vision of the current governor to bring these agencies together. The group established a set of 12 outcome statements for kids, families, and communities in Maine.

We were given the outcome statements right from the beginning. Our tack was to look at indicators that could relate to those outcome statements. We ended up with 80 indicators, which is a big number. We had a lot of conversations about that number. I think back to a meeting here where David Murphey talked about not being too afraid of big numbers, because then a lot of people can find a place to sit.

I think we have a lot of indicators that are not well defined. But people felt strongly when we were creating the indicators that all of the indicators were important, so we decided to get them down on paper from the beginning.

As for structure and surviving leadership changes, we were given these outcomes from the beginning. The outcome statements themselves were not well defined. One of the other things we wanted, and got, was for at least half of the indicators to be promotional and/or strength-based. That also ended up requiring a lot of work. We ended up using a lot of material from the Search Institute and other places. We felt strongly that we wanted a comprehensive picture. Again, we wanted to look at all that was happening with kids and families.

Right now, specific to the work of the project, we are trying to look at data collection for next year, for further definitions or indicators. And we'd love to get county-level data, because right now we just have state-level data for a number of indicators, especially newer ones.

In terms of surviving leadership changes, there are some things we have been intentional about: the connection to the Children's Cabinet is one of them, as is being broad in scope in terms of the number of indicators we are looking at. Again, there is something in there for everybody.

The Children's Cabinet is now established in statute. Within the Children's Cabinet is a council that has representatives from the legislature, the judicial branch, and other key stakeholders and policymakers within the state government. So these indicators have become their indicators. Hopefully that will be helpful over time.

We did look at outcome statements that were categorical in nature. We have a few indicators that, when you look at them, are programmatic. I don't know how that happened, but it did.

I think another thing we tried to do--and maintained well--is partnerships. Things couldn't have happened without Kids Count, and partnerships with the Maine Development Foundation were critical, because they had already established indicator work that people used. So our book was formatted to look exactly like their book. That was very intentional. We worked with the Maine School of Science and Mathematics. They helped us at the very beginning to look at these and to develop the web pages, and the connection to the university has been helpful as well.

A big task which has already been mentioned, and which we hope to learn more about, is driving this work down and across systems.

Becky Boober is here; she is amazing for region 3. Within their Children's Cabinet, they have already started to connect Maine Marks to things like training and data usage. They'll use this as content for training and in grantmaking, so when people come to them for dollars, they are asked to connect their thinking to these indicators and to program development. In our evaluation work, we are trying to tie things back to this.

The other thing that snuck in and out of this process--and I was surprised it didn't happen more often--was that there were instances when we had to think about how this connected to performance-based budgeting. Two performance-based government initiatives had already been established in statute. So those questions got asked, and we put together little charts that showed how things lined up. And then it kind of went away, which was a good thing for me. But it will be back, and I think you should be prepared for that, and for strategic plans. In the future it is important that we remember that this is an iterative process; when we talk about these as finished products, that is certainly not true.

I remember hearing a psychologist or psychiatrist on the radio talking about autism, about how that problem used to be understood and how it is understood today. Certainly our language about children is going to change. The other thing is that it always bugs me when I flip on CNN and see the tickertape flipping across the bottom of the screen, and how we think of economic indicators in this country. Yet we don't have current data on teen suicides in Maine. The most recent data we have is from 1997, because we can't get schools to deal with the whole inundation they face of filling out forms and getting kids to fill out surveys. Right now, it is such a struggle to get people to pay attention to social indicators as being as important as, if not more important than, economic indicators.

I was thinking, in closing, that it is very helpful to think of this in metaphors. For me, this information and these systems are like a painter's palette. Leaders mix paints to create their picture. But that palette and that paint are there forever. We want to make sure that the paint is of the highest quality. Look at that as a metaphor for our work.

The system has to be bigger than whoever that artist is.

Growing an Outcomes-Based Culture With Communities

This session was coordinated by Ada Skyles of Chapin Hall. The presenters were David Murphey, Senior Policy Analyst of the Vermont Agency of Human Services; Cherie Hammond, Coordinator of the Success by Six Council in Lamoille County, Vermont; Scott Johnson, Coordinator of People in Partnership, also in Lamoille County; and Larry Pasti, Community Program Specialist in the New York State Office of Children and Family Services.

The session structure: Murphey raised and commented on key points which were then addressed by the other panelists. He began by saying that creating a climate in which communities can take advantage of indicators requires activity both from the top-down and from the bottom-up. With the publication of Vermont's first Kids Count book, Vermont communities asked for community-level data, feeling that it would be more useful than the county-level data presented in the volume. The state wanted to respond to this because it sees communities as the locus of change.

Murphey presented the following key points he and the panel hoped to cover [See also full paper]:

Growing an Outcomes-Based Culture with Communities

  1. Get local, broad-based buy-in on the outcomes and indicators (conceptual level) (with flexibility).
  2. Encourage outcomes-based collaborations ("set the table"); avoid "turf" issues.
  3. "Hold up the mirror" of community indicators.
  4. Promote a rational local review of the indicators, leading to prioritization (requires a comfort-level with data).
  5. Foster strategies to measure program outcomes as well as community outcomes (e.g., logic models and associated evaluation).
  6. Identify "turn the curve" strategies with specific who/what/by when action-steps.
  7. Consider negotiating for greater funding flexibility in exchange for improved outcomes.
  8. Engage the local media around the outcomes and indicators.
  9. Keep "holding up the mirror." No "high stakes," but gentle reflection.
  10. Stay in this for the long haul.

Murphey said that "getting local, broad-based buy-in" means that people need to understand that outcomes are about having a common purpose, that they are bigger than any single agency. And they need to understand what indicators are--specific, measurable ways of understanding the progress that is being made toward those outcomes. Having that conceptual framework in place is critical, but within that framework there is flexibility. Having that framework in place is more important than the adoption of a particular indicator. For Vermont's part, it has adopted a list of nine specific outcomes. But it encourages communities to go beyond that list to develop outcomes and create and monitor indicators of particular use locally. In Vermont, this has happened. Some communities have taken on all nine outcomes, some have developed additional outcomes, and some have chosen a smaller number of outcomes on which to focus.

Communities are experts, Murphey pointed out, on many topics--an expertise that the state cannot duplicate. His second overhead pointed out some areas in which communities possess expertise:

What Are Communities Experts On?

  • Their assets
  • Their needs
  • The "story behind" the data
  • Their priorities (value-driven as well as data-driven)
  • Their "character" (e.g., aspects of class, race/ethnicity, history, religion, etc.)
  • Other local conditions or circumstances that affect:
    • Cohesion/collaboration
    • Access to and utilization of services and supports
    • Risk and protective factors
    • "Practice variation" issues
    • Community readiness to move ahead

Murphey asked the panel to comment on getting local, broad-based buy-in. Johnson stressed the importance of creating an army of advocates engaged in a variety of tasks, all working to forward the common purpose. Hammond said that within the community of those working on early childhood concerns in Vermont, there has been statewide advocacy on a broad-based agenda of early childhood issues. Hammond also told how she and her organization had worked with legislative candidates before the election to raise their concerns regarding early childhood issues, and continued that relationship later, as the state legislature took up issues relevant to young children.

Pasti sketched an integrated county planning demonstration project begun two years ago in selected New York counties. The project had two objectives. The first was to help counties be more comprehensive in their planning by bringing together two separate planning processes (one from local human services and the other from the county youth bureau). The second was to change how counties plan--to move away from planning based on deficits and services. Among the goals of this effort were to enhance local control of the process, tap into grassroots resources, broaden the focus of planning, work the concepts of the human development continuum--such as health and wellness--into the process, and focus on outcomes.

As part of this effort, the state required counties to come up with a vision of what they were to accomplish with youth, but did not mandate use of the New York State Touchstones model, even though that model had been agreed on by the state's agency commissioners. Touchstones was merely offered to the counties as a model. Most of the counties in the process did choose to adopt an outcomes framework for organizing planning, and those that did not adopt outcomes directly modified them for their own local conditions. This is an example of how New York is trying to use county government to promote the adoption of an outcomes approach and to get broad stakeholder involvement.

Murphey said that the comments of the panel helped underscore the importance of the second point from the first overhead: to encourage outcomes-based collaborations and, as a corollary, avoid turf issues. Vermont has 12 regional partnerships. Johnson is the coordinator of one of those partnerships. Each of those partnerships is the keeper of the flame for outcomes within its region, and each partnership, in turn, works with other partnerships on many issues and foci. Working at the state level, it is important to encourage those kinds of collaborations and to help the situation remain flexible in order to avoid turf problems.

Johnson named some of the partnerships with which he is involved in his region and sketched some of the ways in which they interact. Pasti commented on collaborations at the county level. He finds value in the flexibility that New York has allowed counties by not mandating participation. Hammond cited the involvement of parents in their collaborations as a strength of their organization and pointed out some of the ways they encourage parents to participate--including paying stipends to parents and providing dinner and childcare at evening meetings. Murphey echoed the high value placed on citizen engagement in Vermont.

Murphey said that indicators provide a mirror in which to examine society. The way Vermont holds up a mirror to a community is through the use of community profiles. Now in their sixth year, the community profiles have taken hold and are enjoying increasing use. Interest in them is enhanced by the state's efforts to frame the book within the context of each partnership area.

Hammond said that the Success by Six project must submit an annual plan for meeting the state's outcomes in order to be funded. This process supports and helps shape the work of the organizations in the partnership in a variety of ways.

Murphey said that it is not enough to publish the data. It is also critical that communities rationally review and engage with the data, and it is up to the state to promote this engagement, review, and subsequent planning. A first step is helping communities develop a comfort level with data.

Murphey uses this slide in working with community groups:

Why Use Data?

We already know what the problems are!

  1. To confirm/revise existing judgments
  2. To add credibility to your efforts
  3. To help prioritize efforts
  4. To provide a baseline

A member of the audience said that she was struggling with the problem that the language in which the findings are expressed interferes with the data's relevance. Hammond responded that some audiences are going to zero in on particular aspects of the indicator data while others will take a broader view. She illustrated her point by describing comparisons that can be made between rates of low-birthweight births and smoking by pregnant women, broken down by economic group. A comparison of this type can help legislators shape programs to reach pregnant smokers.

Using the slide below as an illustration, Murphey discussed the importance of using multiple sources of information.

Do We Have Multiple Sources of Information?

  • For an outcome: multiple indicators?
  • For an indicator: multiple types/sources of data?
    • Qualitative as well as quantitative?
    • Multiple "formats"? (e.g., focus groups, interviews, surveys, previously published statistics)
    • Multiple time-points? (where we've come from, where we seem to be going)
    • From multiple constituencies? (e.g., parents, students, service providers, "regular folks")

It's desirable not to rely on a single piece of information, but to have both multiple sources of information for an indicator and multiple indicators. Among these multiple sources, it is important to obtain the affected community's view of what the information that makes up the indicators means. There was general agreement among members of the panel about the value of multiple sources of data. Murphey said that in Vermont, state government tries to model the data-use strategies it believes communities could find beneficial. The state Team for Children, Families, and Individuals meets once a month, and each meeting takes one of the outcomes as its topic. Every year, out of that process, comes a publication that addresses each outcome and highlights some "headline" indicators: up to three heartening indicators and up to three troubling indicators. The second step in this process is to identify some action steps.

Murphey said that the important point is not that there is one right answer about which indicators are the three best or worst, but the process of working together to choose some headline indicators that will motivate strategic action steps.

Murphey's next slides showed some of the questions that could be raised in the context of comparisons and asked questions about presenting data:

Compared to what?

  • On this indicator, how is my community doing:
  • Compared to the goals our community aspires to?
  • Compared to where we've been (time trend)?
  • Compared to communities like ours?
  • Compared to our county's record?
  • Compared to the state's record?

Some Things to Consider in PRESENTING Data

  • Who is your audience?
  • What is your purpose?
  • Present both assets (strengths) and needs (gaps, troublesome areas)
  • Is the glass half-empty, or half-full?
  • What (carefully chosen) comparisons would help reinforce your points?
  • Be honest about the limitations inherent in all data.
  • Use simple charts to convey the information graphically.
  • In terms of communication power, often "less is more."

Johnson then discussed a regional youth project that involved repackaging community profile data on youth statuses and risk behaviors for particular purposes. Johnson said that their purpose is to engage the local media and to engage individuals with monthly advertisements in the local newspapers, distributed by mail, and sometimes reproduced for school use.

Murphey said that one of the anxieties created by the production of the first community profiles was that programs working in the areas addressed by the profiles did not understand that these profiles presented population data and were not a verdict on their program work.

To counter this, Vermont developed a language to describe these data to programs. One step, borrowed from the United Way, was to distinguish between community-level outcomes and the program-level outcomes that can contribute to them. In this effort, they work to identify the links between milestones--early indicators of success--and the ultimate indicators of success. This is new territory for many of the community folks.

Pasti said that this is a substantial shift in thinking for counties because they are used to measuring program outcomes. Johnson described a program in Vermont that is setting up youth councils that are involved in making grants. One of the ideas behind this is to teach young people about outcome-based planning and logic models.

Murphey moved to an example of the relationship between program-level outcomes and the ultimate population-level outcomes by presenting a slide on child welfare services in northeastern Vermont.

Vermont Department of Social & Rehabilitation Services, Northeast Region

Outcomes for Children in Foster Care

Permanency Indicators:

  • Length of stay
  • Average number of unplanned moves

Resiliency/ Social Skills Indicators:

  • Percentage attending school or with another educational plan
  • Percentage participating in structured after-school activities
  • Percentage earning wages
  • Percentage volunteering in the community
  • Percentage maintaining friendships with age-peers
  • Percentage with an adult "connection"

The state child welfare agency has developed a monthly reporting system so that caseworkers can report on the status of children's progress on these two sets of indicators. This is an example of how the outcomes work has been translated into community program work, linking the program-level and the population-level data. In response to a question later in the session, Murphey noted that these indicators measure qualities of life important to all children, and of particular importance to children in foster care.

Question. These numbers will sometimes go in very different directions, and will move incrementally. What tells you that you have a problem and it's not just a problem in statistics?

Answer. Murphey said, I think that the point here is to show continuous improvement. These are the things that you want to show progress in.

In response to a question on whether these are the right measures, Murphey explained that, although the list represented substantial thought, all indicator lists are provisional and open to improvement. There was discussion about how to decide which measures are important in a local region, and speakers on the panel and from the audience suggested strategies for selecting the right indicators in order to get at issues of community concern. There was general agreement that regions and communities have critical roles to play.

In response to how to deal with communities seeking value-laden indicators, such as church attendance, and how to steer such requests toward a research base, Murphey said that they encourage communities to develop their own theories of change, but to also look at what the literature says in an objective way. (With the understanding that research is an ongoing proposition and what we know today is going to be different from what we will know tomorrow.) So, a group that is interested in measuring the effect of church attendance must come up with a logic model through some collaborative, consensus-based project, and then collect data to test it.

Murphey displayed a United Way program outcome model overhead similar to the one below.

A General Model for Community Planning in Getting to Outcomes

[Overhead graphic: three program logic models, each contributing to a community-level outcome]


He said that displaying three program logic models in the overhead was meant to indicate that more than one program likely contributes to an outcome. Murphey also said that the shortcoming of the United Way logic model, in his view, is that it assumes there is a program. Vermont has been trying to encourage communities, wherever possible, to back the process up even further and base their selection of a program on a theory of change and on a data-driven assessment of their communities' priorities.

This is an ideal model of how the world might work; it doesn't always work that way. Ideally, a community will have gone through a comprehensive assessment and will have developed some kind of a theory of change based on best practice, based on what the research literature says, based on what is known about a community and its unique characteristics, and so on. This is the grand scheme of how it fits together in terms of evaluation and program.

Once they have prioritized indicators--for example, by identifying the heartening and troubling indicators--the other half of the job is to assign tasks to specific people. They have tried to hold people's feet to the fire, from commissioners to program staff, to say, in effect, "This is what you can do on this indicator, and we expect to hear back from you the next time we address this outcome at the state team meeting a year from now, and to hear you report on what you have been able to accomplish."

Johnson described how this process played out in the Lamoille Valley Reads program as many groups used multiple strategies to promote literacy. Partners included the schools, Success by Six, and service providers. Media coverage was a part of their strategy.

Murphey added that, to keep all this work from being just talk, there is a point at which assignments need to be made and a reporting schedule needs to be set -- who is going to do what by when.

Pasti said that one of the difficulties they face in New York is that they have plans coming from primary county agencies in social services and youth work. These agencies typically submit plans for their own work. The state tells the agencies that it wants to know not only what any individual agency is doing, but how the strategy of any one agency includes what other agencies are doing. It helps make each agency accountable by requiring them to put the work of the other agencies in their plan. That's a shift from their traditional, service-driven planning to outcome-driven planning and accounting for the influences of multiple partners. Pasti noted that this illustrated the difference between accountability and responsibility.

Murphey added that one of the messages Vermont government gives is that programs are accountable for achieving outcomes with their clients and they are responsible for the community-level outcomes. There is a difference, but there has to be accountability when there are funds at stake. Plus, you need to be accountable to the people that you serve, as well as to your own staff. So, there is a need for accountability, but it is important to distinguish between accountability and responsibility.

Murphey said that inherent in the devolution bargain is that states and communities are given more flexibility to achieve outcomes. He said that there was a trend in the late 80s and early 90s for "unmanageable" kids in state custody to be placed in very expensive out-of-home and sometimes out-of-state placements. In the Lamoille County area, they experimented with trying to keep the kids closer to home or at home or keep them in-state in exchange for the use of some of the savings that the state child welfare agency would realize to provide additional services. Murphey said that the county did bring down the numbers of unmanageable kids and was allowed to retain half the savings that accrued to the child welfare agency as a result of the lower numbers.

In response to a question about creating projection models of savings that can be secured by averting expensive, out-of-state placement, Pasti said that New York has done a similar reinvestment. One difficulty they encountered was that some counties had already made substantial progress at the time the reinvestment program was implemented. In those counties, financial benefits were more modest than in places where less had been accomplished.

Local Media

Murphey said that an indicators report can be a tool in getting the local media to do a story on a particular indicator. One idea he mentioned was getting a local newspaper to run a story on a selected indicator each month.

Keep Holding Up A Mirror

Murphey cautioned against turning indicators work into a high stakes game or a game of "gotcha" with communities -- tying funding to a community's indicators. Vermont tries to recognize improvement whenever they find it.

Stay in This For the Long Haul

This is really long-term work, Murphey warned. It is a different way of thinking and working. It is much longer than any single administration and it is going to take time both at the community level and at the state level. People at the state level and at the community level are often impatient. They want to see results. They are eager to get the data every year, and, if those data don't show that they have made great progress, it can be demoralizing. It's important to remember that there is not a one-to-one relationship between effort and outcome. But success is seen in efforts that stay the course.


Each of the panel members had a closing comment. Johnson said that he wanted to thank his state partners -- such as Murphey. He said that there is truly a state and local partnership and that there is a level of respect between the state and the local activities. Pasti echoed the idea that it is critical to stay involved for the long haul and that it is an evolving process. Hammond said that, although there is a lot of information available to her in the community profiles, she also seeks information on her own and she keeps a file on where information can be found.

The Role of Indicators in Policy and Implementation

What follows is a transcript of remarks made by Christine Ferguson and Elizabeth Burke Bryant of Rhode Island and by James Dimas of the Annie E. Casey Foundation. Also included, following her remarks, is Christine Ferguson's PowerPoint presentation.


Fred Wulczyn, Research Fellow, Chapin Hall

The challenge with indicator data is that it provides a lot of information about the past. As people in state agencies gather such data, the question becomes, what do these data tell us about the future? How can you best gather and use indicator data for guiding policy decision-making in the future? How can we engage the underlying policy-making processes proactively so that policy and programmatic decisions guide the future to a more desirable point?

This morning we will be discussing the challenges of creating that forward-looking view, the political challenges of selling that view, and the conceptual challenges of understanding the right policy lever for moving that view toward a conclusion that is desirable for families, children, and the community. To address these topics we have from Rhode Island "Ms. Inside" and "Ms. Outside." Christine Ferguson is director of Rhode Island's Department of Human Services. She has served in the highest levels of government for two decades. Her record reflects a clear commitment to using indicators to make policy decisions. Elizabeth Burke Bryant is director of Rhode Island Kids Count. They're not exactly "Ms. Inside" and "Ms. Outside," though one works inside state government and the other outside it. One of the benefits of their respective positions is that they get to work together to create consensus around their viewpoints so that they can move forward together.

James Dimas will then speak with us on these issues. He has worked in public health in Washington, D.C., and for the Illinois Department of Public Aid in its welfare-to-work program. And he's now joining the Annie E. Casey Foundation, which is in the process of forming a group that will provide technical assistance to states in this very area of technical information, with its links to public policy and practice.

The View from Rhode Island, Part 1

Christine Ferguson, Director, Rhode Island Department of Human Services

"Agree on some measures; publish what you have; perfect the measurements over time. We don't have time for perfect measurements. People like me are only in the positions we're in for a very short time." (Ms. Ferguson's PowerPoint presentation follows the text of her remarks.)

Implementing Rhode Island's Vision

Rhode Island has developed a vision, and people in leadership roles in and out of government have been able to craft and implement that vision. I'd like to give you some perspective on what we've done. In Rhode Island in 1995-96, we identified four outcomes for all children that we want to work toward:

  • All children should enter school ready to learn.
  • All children should leave school ready to lead productive lives.
  • All children must be safe in their homes, communities, and schools.
  • All children should live in families that are self-sufficient yet interdependent.

Depending on what kind of thinker you are, these outcomes are either really great or way too big. But what they did was to begin to narrow people who are very fragmented in their thinking and to broaden people who are narrow in their thinking. And so this consensus gave us a way of saying in any one instance that, "This is an initiative around one of these four things." In response to these outcomes, we were able to implement three programs critical to kids.

RIte Care. RIte Care is Rhode Island's comprehensive health care program for uninsured low-income families, enacted in 1993. This is our Medicaid managed care program in Rhode Island.

The Family Independence Act. The Family Independence Act, Rhode Island's welfare reform legislation, was enacted in 1996. This act included an entitlement to childcare. In other words, if someone comes to the door and meets the eligibility criteria, we have to provide them with the subsidy. There is no waiting because it is an entitlement.

Starting RIte. Starting RIte, Rhode Island's early education and childcare initiative, was enacted in 1998. It's not one piece of legislation, but, rather, lots of concepts combined, including quality health insurance for childcare providers and stepped up eligibility criteria for the entitlement. Under these provisions, a licensed childcare provider taking one subsidized kid for six months receives 100 percent free health insurance.

Our Expanded Investment in Kids Showed Results that Were Measurable and that Were Clearly Illustrated by One Chart

So our expectation was to begin to really invest in kids. One of the things we did with respect to our health insurance was to have incremental expansions every few years, and we're now up to coverage for adults at 185 percent of the poverty level and for kids at 250 percent. Those benefits are the Medicaid benefits that include Early and Periodic Screening, Diagnosis, and Treatment (EPSDT). EPSDT benefits are wrapped around any private coverage that we subsidize. And we've been aggressive with respect to providing EPSDT coverage because it's connected to early education, early childhood kinds of issues. The result: improved primary care services, and a lot of other things.

But the reason I bring this up is this chart. The reason that we continue to have RIte Care in Rhode Island is this chart (slide 6). What this chart says is that those who got their insurance through Medicaid pre- and post-managed care, and those who got it pre- and post-RIte Care, became virtually the same on the outcome of waiting eighteen months between children. That's important because it's an indicator of both maternal health and child health.

Usually I say the following things. What we found is that women who are low-income -- working poor and very poor families -- access health care in the same way and make the same decisions middle-income and upper-income women make when they have consistent medical services. I'm basing my conclusion on this and other data and on focus groups, and frankly on the observations of people I meet. What the data tell me is that when low-income women have a doctor or nurse practitioner or nurse midwife throughout pregnancy, they talk about contraceptives. When they deliver a baby in the emergency room, they don't, and probably none of you would either.

The Data Aren't Perfect, But, in Ten Years, I Could Be Dead

But my whole point is this: this is not peer review science. We've tried to publish some of these data in very strict journals and they wouldn't let it in because it's not drawn from large enough samples. So if you were pure you'd say to me, "But you can't draw those conclusions from this data," and I would say, "You're right, but you want to wait ten years to draw any of those conclusions and, guess what, ten years from now, I might be dead and this program might be dead." The reality is that when you're dealing with states or at the federal level, if you want to maintain a program or expand it, you have to be able to show either that nothing's gotten worse -- because we guarantee the amount of money spent got worse -- and ideally that something's improved.

The Health Care Numbers and Charts Are Great, Although the Data Aren't Perfect

We've also been able to show adequacy of prenatal care. Again the gap gets closed. If you take health insurance status as a proxy for income, this indicator shows that the working poor and poor women are responding to health care in very encouraging ways.

And ultimately, we say we're number one in the country for health insurance coverage now. Only 6.9 percent of our people are uninsured, compared to everyone else. According to the GAO data, we're second, but it doesn't really matter because the drop in uninsured has been tremendous in the past few years. The change is what's important. Universal coverage is not so much the indicator, as what's happened as a result of it.

So, over the past eight years we've shown progressive improvements in health status with RIte Care. I always say to people you can't take this to a scientist and have them say to you that this is perfect data. It's not perfect data. We cobble it together from health department data and we have very credible researchers doing this at Brown University. It's outside evaluation. But it's not perfect and it's never going to be perfect. There's no way we could ever collect enough data to be perfect.

The Childcare Numbers Are Also Good

The childcare numbers are also good, but expenditures for childcare have increased 270 percent and the data are not that useful yet. It's too soon to use these data. They do not explain the financial impact of the childcare measures for legislators.

The Family Independence Act and Starting RIte pulled together a series of provisions related to health insurance for family childcare providers. We provide health insurance to licensed family childcare providers on an entitlement basis. We have a series of interventions in effect around quality and eligibility. We have wrap-around childcare services.

The number of kids served has increased by 100 percent since 1997 from 6,000 to 12,000, roughly. That's a lot because we've got a population of about one million. There are about 150,000 kids under age eighteen. Our childcare and after-school care goes up to age sixteen. So subsidies are available to kids up to age sixteen. Our childcare placements have increased by 27 percent in two years. Our childcare providers accepting DHS subsidies are up from 61 to 76 percent in two years. Availability has increased and we have almost 1,000 people who are receiving health insurance because they're family childcare providers or related to family childcare providers. But here's the rub: our childcare subsidy expenditures have increased by 270 percent since 1997, from 18.6 to 68.8 million dollars. And it's all state dollars. The legislature has flipped out on a number of occasions and the budget officer in the governor's office has flipped out on more than a few occasions.
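The growth figures quoted in this paragraph can be checked with simple percent-change arithmetic. A minimal sketch, using the numbers from the transcript (the helper name is ours, not from the source):

```python
def percent_increase(old: float, new: float) -> float:
    """Return growth from old to new as a percentage of the old value."""
    return (new - old) / old * 100

# Childcare subsidy expenditures, millions of state dollars, 1997 onward.
spending_growth = percent_increase(18.6, 68.8)

# Children served, roughly 6,000 -> 12,000 since 1997.
children_growth = percent_increase(6_000, 12_000)

print(round(spending_growth), round(children_growth))  # 270 100
```

Both figures match the transcript: spending up roughly 270 percent, children served up 100 percent.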

Our rate of increase has been steady, luckily. So now I can give you all these great outcomes: who accepts subsidies, how many kids of which ages receive the subsidies and how that's increased over time, our investment, administration (what we've spent has gone down at the same time the program's increased), how much we pay for each slot, all of those things.

But you'll notice I could do all this with health care, too, but I never give those slides because they're not particularly interesting. All it is is claims and financial data mixed around a couple different ways, but it doesn't tell me anything about whether what kids are getting is helping in any way.

The Data I Need: Do Kids Know Their Letters and Numbers?

Knowing whether kids know their letters and numbers will help ensure the future of these programs. If the only thing you worried about with childcare was helping families go back to work, the data we have now is okay. But it doesn't tell me the important thing, the critical thing for the future of these programs -- whether kids enter school ready to learn. And I know this is a big issue but I want to know how many letters they know. I'm sorry, but that's important to me. It's important to the grandfathers that are on the finance committee, it's important to the governor -- that's what they value. Can you tell me, are the kids aggressive or not aggressive and all those things? Yeah, I want to know those things too, but I really want to know, do they know their numbers? Can they read? Do they have books?

I need to be able to say, oh yeah, childcare is going to continue to grow at a rate of 100 percent per year, and next year it's going to be 68 million dollars and we're going up to 74 million and yeah, I know you want a car tax repeal and I know you want capital gains and I know you're really worried about the expenditure in health care. But these kids are in a better position to be reading in the fourth grade and to be graduating from high school because they have this after-school enrichment program and these Head Start-like comprehensive services, because they have childcare.

And it's not a bad thing for these moms to be working. You don't have to feel guilty because you've put the moms back to work because the kids are doing well. That's what we need to tell them because the things that mobilize interest in children's issues are usually all the negatives, all the bad cases. You'll always be able to find a bad case of lead poisoning or a bad childcare situation or something that'll send somebody off in that direction. It's really hard to focus on the whole picture. We really ought to be fixing this in the context of what we already have, so don't set up a new program, please.

Based On Preliminary Data, the Governor and Legislature Have Postponed Tax Cuts and Kept Spending on Child And Health Care

What our governor and our legislature have done is to really decide to make the investment in children and families. But the last two years have been really hard because our childcare spiked up one year by 25 million dollars and the next year health care expenditures went up by 80 million dollars. And they're looking at me going, "Hello, we really want to do these tax cuts and you're really preventing us from doing them." And what we were able to say back to them is, "These are not our decisions. These are your decisions. Here are the choices. There's progress that's been made here and you can cut it back, but you've got to understand there's probably going to be some impact." They've chosen not to cut back. So we've gone through three years of them making the decision not to cut back.

And the only reason is because, on the childcare side, we took all those pretty charts but we didn't show them until after we showed them the health care charts and they just assumed the childcare charts were as impressive as the health care charts. But I can't keep that up for very long, because I really am honest. And when they ask that question about childcare I say "I don't have it, I can't tell you, it's too soon anyway. It's only been three years."

I Need You To Work Faster to Get the Data to Support Decision Making, Because the Backlash is Coming

So what I say in Rhode Island is, I know you guys are working real hard but it's not fast enough! Because I know I'm going to get these questions pretty soon. They're going to say, "It's been four years. Did the kids we helped four years ago enter kindergarten knowing their letters and their numbers? Can you tell me that, Christine?" And if I can't, and I say, "You know, well, there's a real disagreement over whether that's a good indicator of good childcare." They'll say to me, "So the answer is you don't know." And I'll say, "Yup." And they'll say, "So, how much money will it take before you do know." And I'll say, "Well, we really haven't decided. We can't decide whether it's knowing their letters or social adaptability." So take them all. Pick 100 indicators and put them all together. Come up with some way of providing decision-making kinds of information, because the backlash is coming.

This is not an unthinkable cycle. This is what happens every time. You pull people back to work. They focus on upper-income women who go back to work and they have all the mommy wars in the suburbs. And I'm one of the women who work and I know the mommy wars and now we get: children shouldn't be in childcare and women should stay home. But we're not willing to pay people to stay home, so what should we do?

We've always missed one part of the cycle. We've missed the opportunity to say, it's okay for some kids to be in childcare and it's probably not great for other kids -- it depends. But we like to have some basis to say those things and to have some sense that, even if they're not willing to say it, the reality is that the way to appeal to most of these people is: if you want a good work force twenty years from now, you have to invest in these kids now -- in health care, education, and childcare.

And as long as money's the most important thing in the US -- and it is; that's what we value as a society, that's how we've evolved -- as long as that's true, then you've got to talk about it in that language. You've got to say, these are investments worth making, not because the kid's a better social citizen, but because the kid's a better worker. You may not like it, but maybe you can also ensure the kid has better social skills at the same time. And the latter is beginning to be reflected in workforce training. You've all been through the Myers-Briggs test and teamwork and systems thinking. All of that is really based on the things the kids learned in kindergarten -- you know, don't hit your friend over the head when he has the truck.

The Data I Need: A Closer Look

Track As Many Indicators As You Can. Understand That The People Who Make Decisions Have To Be Able To Justify Them To People Who Don't Care

So track as many indicators as you can. Understand that the people who make decisions have to be able to justify them to people who don't care about all the things that you care about, to the person in the street who is upset because they paid thirty percent of their income in taxes. They have to be able to demonstrate that their thirty percent went to something that really improves the future of the state.

It's Harder To Get Childcare Data Than Health Care Data

We have a lot of health care data to extrapolate from. A lot of people collect it. That's not true of early childhood data. We need to get data from childcare providers and schools. A critical issue for us is that we don't have a single student identifier. That makes it very difficult for us. And while we have three major health insurers to get data from, we have umpteen hundred childcare providers. That makes it more difficult.

We're working on all those things. People should be working on all these things at the state level. And regardless of which indicators you pick, you still have to be able to get the data once you pick them. So don't just focus on which indicators to pick. Once you tell me what you want, it's going to take me two years to figure out how to implement it -- if I'm lucky. So we need to figure out what we need to put into place in state government, regardless of what kinds of indicators you pick, to be able to get information back from a childcare provider, from a school.

The Kinds Of Comparisons We'll Need To Make: Subsidized Versus Non-Subsidized Kids

We also need to be able to compare subsidized kids' outcomes with non-subsidized kids' outcomes. We need to compare by income levels. Our reimbursement rate is very high. Three years ago it caused a thirty million dollar increase in one year. They want to know that what they're paying for is a return that is giving them something other than a babysitter, which is how many grandparents think of childcare. (We try to focus on early education, but they still call it daycare in many places.)

Have Faith! If It's Worth Fighting For, It's Worth Measuring!

All these things are really scary if you're frightened of ending up with comparisons that are not so great. But get them! You have to have some faith! If you believe the services are worth fighting for, you have to believe the supporting data are worth measuring. If you don't believe they're worth measuring, then why are you fighting for them? So you have to be able to take the leap that what will be shown will be used by you to improve the situation -- or to make a correction. We may all be wrong. Maybe we should be back in caves. But those are the kinds of value discussions that need to occur, and they can't occur without some kind of reasonable information.

In Sum, They Need to Know: Are Poor Kids Reading Better, etc. (And ABCs ARE Important)

In sum, here's what they need to know. Are poor kids reading better? (And I use the word "poor" strategically, usually I say "working families," "the working poor" etc.) But the reality is, the guy sitting up there making the decision on the finance committee, the talk show radio host, the guy in his car sitting and listening to the talk show host, and most of your neighbors, think this way. And you have to be able to think this way in order to give them the information they need. You can change the way they think, but only gradually over time.

Do poor kids read better in the third grade? Are poor kids reading as well as suburban kids? Do poor kids know more when they go to school? Do they know as much as wealthier kids when they go to school? Those are the kinds of answers they want.

They're also interested in aggressive/nonaggressive, but frankly they are probably more interested in ABCs and 1-2-3-4. Maybe they don't care if we have smart kids that are mean. Maybe that's not worth it. Maybe the guy they work for is mean and maybe he's successful and they're not, so maybe meanness is also a value that we have. I don't know, but it's interesting to have the conversation. And if you have some data to talk about, it makes it even more interesting.

In Rhode Island, We're Looking at Everything We Have Access to, But It's Not Fast Enough. The Pressure For Tax Cuts Is Very Great

We're Looking At Everything We Can

In Rhode Island we have started the process of putting some questions into our SALT (School Accountability for Learning and Teaching) survey of schools, of looking at everything we have access to, and meeting my need of knowing whether the kids know their alphabet when they go to kindergarten and their need of making sure they're good kindergarten citizens and have good social skills and family situations. So we're getting there. I'm worried it's not fast enough.

The Pressure For Tax Cuts is Great

They want to see it fast. I don't know how much longer they'll continue to do this. The guy leading the finance committee, who is a great guy, has put off for one or two years his big centerpiece legislation, which is to give back property tax on cars. I don't understand the economics of that, but he's absolutely committed to it. He's delayed it for two years because of health and childcare expenditures! How many more years is he going to delay this? He can cut childcare and no one's going to think about it  except the childcare providers. Health care's a different story because he can see the results and it makes him nervous. But childcare? There's always this back and forth, back and forth cycling.

Don't Let The Perfect Be The Enemy Of the Good

So agree on some measures. Don't think you're going to be published in peer-reviewed articles. Most people don't read them anyway. Publish in magazines. And talk about the results and perfect them over time. Don't let the perfect be the enemy of the good. Do longitudinal studies to verify what we're learning. But we don't have the time for perfect measurements. The reality is, people like me are in the positions we're in for a short period of time. Six years is a short period of time, but for someone in my position, that's a long period of time. Legislators need answers. And they need information translated for them in a way they can use. Otherwise it all comes down to, how much of an increase did we give this program last year and isn't it time for another program to get it.

The Basic Thing I Need To Be Able To Say Is: "The Money Spent On Health And Childcare Means Less Money Spent Elsewhere"

The basic thing is, I want to be able to say that your investment in health insurance and early education results in less money spent to correct other problems. I want to be able to say, because you have health insurance and early education, your school systems have less money going to special education. If I can say that, we will never have those programs cut. OK? So, get to work! Keep plugging. I feel like I'm holding up a dam and there's a whole bunch of people behind it just waiting to tear down the dam. And I'm just waiting for the people behind me to get the work done so I can hold up the chart and say, "Here! Look! This is what happens! You don't want to come through here!"

If you can do that for me, I will be eternally grateful.

Slide 1

Investing In Children: Convincing the World to Take the Plunge and Ride the Waves

Slide 2

Rhode Island's Outcomes for Children

Slide 3

Rhode Island's Children And Families Initiatives

Slide 4

Rhode Island's Children And Families Initiatives

Slide 5

How Indicators Have Shaped Rhode Island's Investments In Its Children

Slide 6

How Indicators Have Shaped Rhode Island's Investments In Its Children

Slide 7

How Indicators Have Shaped Rhode Island's Investments In Its Children

Slide 8

How Indicators Have Shaped Rhode Island's Investments In Its Children

Slide 9

How Indicators Have Shaped Rhode Island's Investments In Its Children

Slide 10

Rhode Island's Children And Families Initiatives

Slide 11

Rhode Island's Children And Families Initiatives

Slide 12

Rhode Island's Children And Families Initiatives

Slide 13

Indicators to Data For the Starting Right Early Education And Child Care Initiatives

Slide 14

Indicators to Data For the Starting Right Early Education And Child Care Initiatives

Slide 15

Indicators to Data For the Starting Right Early Education And Child Care Initiatives

                         FY99      FY00 CEC   FY01 Gov Rec   FY00 DHS 3/00   FY01 DHS 3/00
                         Actual    Estimate   Estimate       Estimate        Estimate

Cash Assistance           3,566     4,064       4,304          4,137            4,304
Low Income                4,432     5,135       5,264          5,501            5,854
Starting Right              208       701         738*           786            1,025*
  Total                   8,206     9,900      10,306         10,424           11,183

Starting Right, included above:
  Youth care                 26        73         110             70               87
  200/225 FPL               182       628         628            716              938

Not included above:
  Child care disregard      451       527         540            527              540

Child care disregard is based on the number and age of the children: children under age 2, maximum of $200 per month; other children, up to $175 per month. No total maximum. Client must provide proof of expenses. Child care providers can be unregulated.
* Assumes passage of Article 10, which defers to 7/1/01 expanded eligibility up to 250% of FPL; 294 placements at an average cost of $5,745 for total savings of $1,689,000.
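The savings total in the starred footnote is simply placements times average cost. A quick arithmetic check (variable names are ours):

```python
# Deferred-placement savings cited in the footnote: 294 placements
# at an average cost of $5,745 each.
placements = 294
avg_cost_per_placement = 5_745  # dollars

total_savings = placements * avg_cost_per_placement
print(f"${total_savings:,}")  # $1,689,030 -- consistent with the ~$1,689,000 cited
```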

Slide 16

Indicators to Data For the Starting Right Early Education And Child Care Initiatives

Charts depict the growth in licensed child care placements in Rhode Island -- subsidized and unsubsidized -- since Starting Right.
This chart describes growth in individual licensed placements; the previous chart shows the increase in licensed providers.
Outcome: provider rate increases, health care subsidies for providers and employees, and development grants have been effective in building the system.

Slide 17

Indicators to Data For the Starting Right Early Education And Child Care Initiatives

Percent growth from 1999
DHS Center: 33.2%
DCYF Center: 26.%
DHS family : 57.9%
DCYF family: 24%
This chart shows growth in the number of licensed providers (not placements). It also shows growth in the number of providers who accept DHS children.
Outcome: Starting Right has built more capacity for subsidized and non-subsidized children.
Total certified/licensed equals 1,404: 988 certified family child care homes + 416 licensed center-based programs.
Accept DHS subsidy: 740 certified family child care homes + 317 licensed center-based programs.
Since the Feb. 1999 reimbursement rate increases took effect, 26.14% of providers have increased the number of slots within their programs that are available to DHS children -- so, since Starting Right: increased number of programs and increased number of slots within programs.

Slides 18-19

Indicators to Data For the Starting Right Early Education And Child Care Initiatives

Slides 20-23

How Indicators Have Shaped Rhode Island's Investments In Its Children

Slides 24-26

Lessons Learned From the Rite Care Experience

Slides 27-37

Developing Indicators To Inform Early Education And Child Care Policy

The View from Rhode Island, Part 2

Elizabeth Burke Bryant
Rhode Island Kids Count

Strengthening Partnerships Inside and Outside of Government

What has allowed for this very strong inside-outside partnership? We don't make a move at Rhode Island Kids Count without making sure that the senior-level data policy staff at the state's Department of Human Services at least know what we're doing and understand what approach we might be taking. We don't always necessarily need their sign-off, but we always know we need a professional coming-together. We always communicate about what we're doing with data that belongs to their department and about how we're releasing it.

About six or seven years ago when we released our first fact book, Dr. Barry Zuckerman came into Rhode Island as our first keynote speaker, which was a little dangerous because it meant bringing a doctor from Boston across state lines. But we decided we'd go for it because we really loved Barry Zuckerman. And Dr. Zuckerman said, "As a health professional, I'm here to tell you that the number one indicator of child well-being is fourth grade reading scores. And what we have to do is keep our eyes on the prize. We have to understand that children's health and a lot of other things going wrong in their lives can be measured by whether or not they're reading at a fourth grade level, and we have to absolutely focus like a laser beam on that."

It was a really strong message. His notion that we need world-class fourth graders in Rhode Island became a kind of rallying cry. And from that the four indicators were developed, and the one that we're talking about today is that all Rhode Island children shall enter school ready to learn. We were able to use our indicators work through Kids Count to become the clearinghouse for identifying, "How will we know if we got there?"

So we started with our first fact book with the ten indicators the Casey Foundation required, plus thirteen more, and then over the last six years we upped them to about forty-three indicators, all about children's health, education, economic well-being, and safety, plus an initial section on demographics. But this whole ready-to-learn opportunity, incredibly enhanced by the Chapin Hall work on the ASPE grant we're all in, has been absolutely essential to the research work going on behind the scenes.

Helping the Press Put Childcare on the Front Page

The inside/outside magic also involves the press. Politicians pay attention to what's in the newspaper. We don't consider our role just in terms of a data function. That's the foundation of what we do: we're a research and educational organization. But if we didn't have an incredibly savvy media component, it wouldn't matter. Because, if it's not in the Providence Journal, it hasn't happened. If it's not on Channel 10 and Channel 12, it didn't occur. So what we've tried to get better at doing every year is to make sure everything we release is very pick-upable, very colorful, something that would be very appropriate in the highest corporate-level boardroom. Because for too long kids have had to exist with their advocacy materials on cheap-looking handouts, and they deserve better. And that got people's attention.

We also had to be there with all the facts and figures whenever the press needed them. As a result we've cultivated a very positive relationship with the press. They've put childcare on the front page. We have resources from a Starting Point grant from Carnegie enabling us to do more than we would have been able to afford to do ourselves. That has been an incredibly important thing. Because when politicians read these things on the front page of the paper, it makes them want to act. When they read that Parents and Working Mother magazines are recognizing Rhode Island's policy gains, it makes them even happier.

Working Together Towards Accountability

We're almost over the finish line. We were absolutely committed to the multidimensional view about what is a ready-for-school child. We absolutely understand Christine's point about numbers and letters. I think she's absolutely right. We sometimes, I think, get drawn into a politically correct sort of thing where we think, gee, that's really putting kids in a pigeon hole with that one measure. But frankly, we're not talking about little whiz kids. We're talking about what would we expect our own kids to feel comfortable with when they get into that first formal setting. Why should we expect less from poor kids?

It's time to kiss that way of thinking goodbye. We have enough of these general areas that everyone's satisfied with, like mental health, access issues, numbers, and letters. As long as you're looking at categories and are going to end with how we categorize things, really, as long as you touch all those bases, it's okay to pick a few indicators and have that be your picture.

The other thing that's hung us up, and that takes some guts to talk about, is the need to professionalize early education. It's kind of been going along in a mediocre way, with people saying, "We're doing the lord's work; don't ask us about our outcomes. All kids are different." But, as Christine says, as the numbers go up from $12 million to $68 million, and as our rates for childcare providers go up (as they should every two years, so that there was a 67 percent increase in rates in one year), I've lost patience with friends saying, "You've got to wait, we're back-filling from ten years of flat rates and we can't show you program changes yet."

As advocates we are going up there every day fighting for these programs, and if we can't have some accountability, then the very best early child educators and care providers are going to be brought down with the very worst. All you need is an exposé of a really bad childcare provider or a really bad center. And we've had staff go out to centers to consider them for their own kids, and they've not liked what they've seen. We'd better be able to correct that, to institute some four-star rating systems. We'd better be able to follow that Starting Right money and be able to say, "They did increase wages for those childcare providers who were underpaid; the money didn't go into an untrackable hole." So we're really pushing all of those accountability things with Christine's department for the next two years. I think that all of these expectations are fair and square exchanges for going from $12 million to $68 million.

Tracking the Individual Child

As we strive for accountability, we need to remember to use a few tools that are already in place. What really hung us up, what was really frustrating, was what happened after we got past the access data, which is administrative data that you have access to from your state departments of human services or childcare agencies. We got past the early child health data, but then we got to those hard things: how do you track those major state expenditures for childcare down to the individual child and individual center? Without the student identifier number and with specific strings attached, we couldn't do that.

But what we did do, and what I'm so pleased to be able to tell Christine about, is that on June 13 at the Children's Cabinet meeting, the presentation that you've all been waiting for is going to happen. We have been able to tap our SALT (School Accountability for Learning and Teaching) survey, which is the survey the Department of Education invested in. Every parent, every public school teacher, every public school kid is interviewed. It's an enormous reservoir of data that sometimes doesn't see the light of day. We've been working for a year and a half and we've been successful thanks to Cathy Walsh, our program director, in getting them to take K-through-3 aggregate data for the past three years and "disagg it" down to K. So we're not going to be able to give Christine everything she wants but I think it's very exciting to let you know that by "disagging" out K and having it for three years prior to this one, we've been able to create some preliminary charts.

One chart that we have created shows children working independently, being self-directed, age-appropriate pre-literacy skills, numerical skills, and reasoning and problem-solving skills. [This chart is not included in this working paper.] As Christine says, we're not going out for a long walk on a short pier for causality, but you can bet your life that, when we give that presentation, we're going to say that, in 1997, when the childcare investment was $12 million, this is what the SALT data showed about what kids knew as they entered kindergarten. And we'll be able to say, three years later, this is what they know now. Hopefully the trend will be going in the right direction. We're going to report it as we see it, obviously. At least that will be the beginning of something that is going to get at what we really want. So that's how we're trying to overcome the hurdles a lot of you are experiencing.

Communicating about Common Sense Indicators

The ASPE group from Rhode Island has been able to go through and categorize things in what I think Christine would agree are common sense areas. Children in kindergarten should have the language or literacy skills they need and the knowledge and cognitive skills. There are mental health indicators and child health indicators. So that's the road we're going down.

There have got to be lots of different ways of getting this information in people's hands. And now we have the courage to have a press conference for every issue brief we release, because the press is really starting to come to our events. So we're very excited about that. Plus CBS is underwriting some of the costs of producing the issue briefs, which shows you can get some businesses involved.

We've partnered with Brown University to create "Ideas that Work," a quick-and-dirty two-pager on what promotes early school success. It incorporates a little bit of data and explains the program description and why it's working to promote early school success. Our trademark is a bee and on our web site, the bee spins.

I just want to close by saying what a privilege it's been to work with Christine and that we really absolutely need to keep going. We haven't given up family time, family life, blood, sweat, and tears only to see this stuff at this critical moment suddenly seem like it's too much. And I totally agree with Christine: if you have some common sense answers in progress, that's not something you're going to want to undo.

Questions and Answers

Q. Were there any up-front commitments made to get the entitlements passed?

A. Christine: The only guarantee that I had to make to the governor's office and the legislature was that it would be budget neutral. In other words, the reduction in the caseload would offset the increase in childcare. Which it did until the legislature passed the increased rates and we had a deficit of $28 million. But I've been able to get through that on the legislative side. The administrative side's ticked at me. They feel it wasn't budget neutral. But things change. And that was the only thing I had to comply with.

We didn't have a lot of these health care outcomes until after welfare reform, interestingly. I didn't expect them, frankly. I thought it would take a lot longer to get the outcomes on the health care side. It wasn't until after we got the outcomes on the health care side and saw that the response after every crisis was not to cut that I began to really understand the magnitude of the importance of having those indicators, though nobody was really focused on it back then.

Q. What is the percentage of kids in unlicensed care?

A. Christine: A very small percentage, only about 10 percent.

Q. How do you make that so small?

A. Christine: In Rhode Island, 69 percent of the subsidized kids are in licensed centers, 20 percent in family childcare providers, and 10 percent are with relatives. We have more reliance on licensed centers than other places do. A family childcare provider who takes one subsidized kid for at least six months out of the year receives 100 percent free health insurance. But they have to be licensed and they have to take subsidized kids. So that's a trade off a lot of them make.

Q. What was the nature of the market response to the entitlement? Did you get a lot of small providers, for-profit providers, existing centers ramping up their slots? What was the response to the entitlement and to the availability of health care?

A. Christine: A little bit of everything. When I came home to Rhode Island, I had a four-year-old. I could not find childcare. It just wasn't there. Over the past six years, there's been a tremendous growth in capacity. I would say the pick-up rate has been slow because a lot of family care providers have had awful problems with the department. Not because they're bad people, but because if you look at the administrative line, at the same time we've had this tremendous growth, we've had fewer and fewer people doing it. And we've had to do everything manually. There are people who didn't get paid for six months. I have people who live near me who won't take subsidized kids because they wouldn't get paid for six months. In one case it was a year.

What we've done is, we've worked with the childcare community to change the way we do business. So in mid-July we're going to have Web enrollment. The childcare provider and the individual will be able to go right to the Web. So we won't be involved directly anymore.

That could cause another spike in growth. I'm a little concerned about that, frankly. We are so good at outreach. The inside/outside operation is just so awesome. We brought 25,000 people into our health insurance program in a year, which almost broke everybody's back. I'm afraid the Web enrollment could have the same impact. So we're going to monitor that pretty closely. More than anything else, it's that trade-off: is the health insurance option worth the hassle of waiting to get paid by the state. It depends on how sick you are. And that probably makes a difference in whether you are able to do the job or not.

Q. Are you using indicators to track the impact on youth? And what can the childcare subsidy be used for?

A. Christine: Here's my next big problem. I've been trying to separate out the licensing requirements so that the after-school programs can include a basketball program, theater program, and an art or computer program: three different programs in a week. That's the goal. We haven't gotten there yet. One of the biggest problems is getting parents to apply for the subsidy, ironically. Even if you get the program licensed, the kids and the parents get a little bit uncertain around whether or not they want to apply for this. And there are things we need to do. We need to get attendance taken. We need to know if the person was there. So it's a question of culture change all the way around. The growth in that age group has been steep, but in the context of the number of all the kids that are out there, it's probably relatively low. So we have a lot of work left to do.

My goal was first, get the entitlement in place because once you have the spikes starting, you're not going to get more eligibility expansion. Get the eligibility expansion in place while programs really aren't that good. People won't want to take advantage of them. Then start to improve the programs. Try to do it so you don't have these huge peaks. But you can't always control that. We are doing indicators for older kids. We're getting there. By no means is it perfect. We have the foundation in place; now we're building the house.

Q. A question about the SALT survey: are you able to link child outcomes at kindergarten age to children's status pre-kindergarten? Are you assigning the identifier that lets you link kids who received these Starting Right services?

A. Christine: No, it's going to have to be done on an aggregate basis, by income or some other factor. We don't have a way to connect a specific child with a specific kindergarten. We're working on it. I would say that in general Rhode Island is like a river. We go around the boulders. Nothing stops us. We couldn't get the single child identifier so we tried an approach that was a little bit different. The idea is, don't let the perfect be the enemy of the good. Just do the best you can. Ultimately, you may get to perfection. But you need to keep moving to get there.

So I think it's a testament to the Children's Cabinet and the interdepartmental team and the outside groups that every time we reach something that really ought to be a dam, they kind of take it apart piece by piece and eventually they get through it and on to the next thing. And that's what's really important in Rhode Island. There's no one person or group that's really trying to keep that dam up. They all can understand where we need to get. They may not agree with each bend, but they know that ultimately there's enough good will and intention and purpose and they know the risks and stakes are high so they're willing to make the kinds of compromises that they may not be willing to make elsewhere. And I think all of our agencies and Kids Count have been really good about that. Compared to a lot of other places, we're light years ahead.

We never say, "Oh no, this is perfect, there isn't a problem." We always say, "Oh yes, that's a good question, that's a big problem, we haven't solved that yet." But it's all out in the open in terms of what has to be done.

James T. Dimas,
Senior Associate,
Casey Strategic Consulting

On Establishing Credibility

First: Develop a Good Reputation.

If policy makers question your veracity or the validity of your analysis, and if you develop that reputation, you might as well be driving a cab for all the influence you'll have on policy. That was the situation I saw first-hand in Illinois. I was recently credited as being from the Department of Public Aid; not that they're bad people, but I was actually from the Department of Human Services. And the Department of Public Aid, in contrast, had a huge problem in the state house and in the governor's office with respect to the quality of the information that they use and try to influence policy with. It made it very, very hard for them to be effective. So I offer you three things to keep in mind on the subject of maintaining credibility on data.

Be Proactive. If you're in the role of data provider, especially if you're state agency staff, don't wait for the secretary or the director to call you after a story's in the paper to say, "What do we have on that?" If you want to have influence it's all about relationships, just like in business or any other walk of life. You've got to establish a relationship with the policy makers you're interested in influencing, and it has to be based on credibility and trust.

And a way to get there is by hustling. When you see a story in the paper, go back without anybody asking you to and figure out what you have that can shed some light on that, and how you can help with the response. I can tell you from personal experience that nine out of ten times what you work so hard to produce isn't going to get used. But maybe the tenth time it will. And when people start to use that you'll be recognized as someone who's an asset and someone who has value to add, and then it begins to snowball. So you have to just soldier through the lean times, the nights you stay up putting together a letter to the editor in response to a negative story. Even if it doesn't get used, you have to still be content that you're on the right track. And just keep on doing this, because eventually you'll break through and that begins a pattern of credibility with policy makers.

Don't overcook the numbers. Anyone with any data savvy can see the telltale signs of a statistic that's been tortured too long to tell the right story. It's not worth it. You're debasing your own credibility. A good rule of thumb is, think about a neighbor or a family member who is just not that interested in the information. If it takes more than five minutes to explain to them what an index or table or graphic means, it's probably overcooked, and you're probably better off not having anything on that particular issue than rushing in with something that's been on the rack all night.

Check to ensure there is no credible, contradictory data. Do an internal check against overcooking. Before you send something up the chain of command, ask yourself, "Is there a chance a credible source of information might offer something that contradicts this?" If there is, you're probably better off not doing it. Because all it takes is another group or agency coming up with data on the same issue. If yours is more attenuated because you've worked so hard to get something relevant to this issue, you end up being the loser.

You'll know you're on the right track when your data gets the benefit of the doubt. And that feels good. When there is an issue where somebody else has an opposing point of view that's supported by data, and your secretary or senate finance committee chairman trusts your data over theirs, you've established the credibility you need to be effective.

Second: Focus on the Insight Data Provides, Not on Causality

My premise is that, whether we like it or not, in making public policy what we're looking for is insight telling us whether a reasonable hypothesis is on target and whether it's worthy of being acted upon. We're not trying to prove causality. There's a role for that. That's important work, but it's not our work. We need to be careful that we don't get confused about that or refrain from acting on information that does provide insight.

Let me show you an example of how that can work. This was something from 1997. You remember that back in the early nineties the federal jobs program was created. This was the precursor to the whole welfare-to-work movement. Illinois operationalized this new funding source by concluding that this was separate work. It established separate offices for people who wanted to go to work. So when a TANF or AFDC client expressed an interest in going to work, we would have to send them to wherever that office was. On its face that didn't make sense. The reasonable hypothesis was, if we really want to have people go to work, wouldn't it make sense to have jobs and income maintenance in the same place? And another thing you need to know is that from the beginning of the program the staff that ran the jobs program always set targets for local offices regarding the number of job slots they were expected to fill. No local office in the five years that program was operating had ever met its monthly target.

So we decided to try something different based on that reasonable hypothesis and based on very little more than, gee, it makes more sense to have those things integrated. So we took twenty-four local offices, some of them in Cook County and some of them downstate, and divided them into two groups, which we called blitz offices and non-blitz offices. We physically moved the jobs staff in with the income maintenance staff in the blitz offices, all within the space of one week. And within four weeks, we did a briefing for the secretary and showed him a graphic. Within one month most of the blitz offices had achieved their target for the very first time, and conversely the others fell short as usual. We didn't have a p value, but that was enough for our secretary to say, okay, let's do this statewide, which we did.

And within about one month, a lot of the non-blitz offices started hitting their targets or coming close to it, which validated that step. But an important point is, the secretary didn't wait and we didn't wait to take the step until we had something more conclusive. I think that if you're working off a reasonable hypothesis, that's a responsible way to go in human services. There's too much at stake and the window is so short for acting, that you'll have a change of leadership if you don't act on less-than-conclusive evidence.
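The blitz comparison described above boils down to a simple pre/post group comparison: each office's job-slot attainment against its monthly target, summarized by group. A minimal sketch of that tabulation follows; the office names, targets, and counts are invented for illustration and are not the actual Illinois figures.

```python
# Hypothetical sketch of the blitz vs. non-blitz comparison.
# All offices and numbers below are invented for illustration.

def attainment(filled, target):
    """Fraction of the monthly job-slot target an office filled."""
    return filled / target

offices = [
    # (office, group, slots_filled, monthly_target)
    ("Office A", "blitz", 52, 50),
    ("Office B", "blitz", 47, 45),
    ("Office C", "non-blitz", 31, 50),
    ("Office D", "non-blitz", 28, 45),
]

for group in ("blitz", "non-blitz"):
    rates = [attainment(f, t) for _, g, f, t in offices if g == group]
    avg = sum(rates) / len(rates)
    met = sum(1 for r in rates if r >= 1.0)
    print(f"{group}: average attainment {avg:.0%}, offices meeting target: {met}")
```

No significance test here, matching the story: the point is that a side-by-side picture of the two groups' attainment was enough for a policy maker to act.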

I almost forgot the most important point of the story. I had the dubious pleasure two years later in 1999 of attending a conference in Washington, D.C. It was sponsored by ASPE in part. A colleague from Illinois in a division that ran the separate jobs program presented the results that MDRC had gotten with two other states and Illinois. They concluded it didn't make sense to have the offices separate. And they demonstrated it to a .0001 level of significance. And that was great, but if we had waited for a study before we made this move, there would have been no way we would have won two awards, totaling almost $40 million. This willingness to act sooner ended up being an important step in the success that we had.

Third: Always Work Against a Hypothesis

I've seen young folks especially start out with a spreadsheet and try to mine it to see what's there. That's not real productive without a hypothesis to work against. A hypothesis provides a context that helps turn the data into information and if you don't have that, you won't be as focused as you should be.

Another really important way to have an effect with your data is to engage policy makers in the formulation of those hypotheses. That's what we did in Illinois. It gives them some "skin in the game" and allows them to couple their accumulated experience, wisdom, and insight with your ability to provide data and information. And it also has the ancillary benefit, that, well, it's their hypothesis, they've got some ownership of it, and they're also the people who happen to be providing you with the authority and resources you need to pilot test interventions that are driven by that hypothesis. It's an easier proposition if you're trying to get someone who has some ownership of the hypothesis to give you the time and money and talent available to test interventions that relate to it.

Using indicators isn't a spectator sport. There's not much payoff in being an observer. You can get a lot farther if you can engage a policy maker in the formulation of reasonable hypotheses that are testable.

Once you have a reasonable, testable hypothesis, to continue to maintain your credibility and influence you need to be able to mobilize pretty quickly to test interventions driven by that hypothesis. What you're looking for there is to either substantiate the hypothesis or disprove it. I usually go at it from the perspective of trying to disprove it if it's really inherently feasible. Disproving is easier. If you can't come up with something that challenges the credibility of the hypothesis, then it's probably worth going ahead and doing a pilot test on an intervention that responds to it.

And then it becomes a recursive cycle. You can use your indicators to help with the formulation of a hypothesis. You can work with program and leadership people to design interventions that respond to that hypothesis and then use your indicators to get in the back way to evaluate whether the intervention had the desired effect. And it's a real good way of closing that book and making your indicators real and vital to policy makers.

Techniques for Establishing Credibility

And that leads me to the second and last part of this presentation. I want to show you three techniques useful to developing influential information.

Pilot Testing

One of these is pilot testing; I showed you something on that earlier. But I wanted to show you another thing that's more on point with respect to family and child outcomes and the work ASPE is doing with respect to service integration. When we started DHS in Illinois in 1997, the whole concept was that we were going to do integrated services delivery. That was going to help us achieve our federal work requirements and improve outcomes for families and children. The reality is that in the first couple of years that we did this, we got so consumed with the burning platform that those work requirements represented that we didn't really attend as carefully as we said we would to service integration.

When we reached a point where we had a little breathing room on work requirements, we decided we needed to do a pilot test on a service integration model that would feature co-located staff  substance abuse, mental health, domestic violence counselors. This involved co-locating them in our local offices and seeing how that improved the rate at which people were referred for treatment and follow-up services.

We didn't want caseworkers who had come out of an income maintenance and eligibility determination background trying to make those calls. We weren't trying to do treatment in our offices. We were just trying to do case finding and get referrals made, and we knew that wasn't happening very well at the site. So we did a pilot test using eleven local offices, six of them in Chicago and five of them downstate. And we tracked what happened in the offices before we did co-location and what happened afterwards and made a chart. There's no statistical test here. But the important thing is, if you've taken the steps to establish credibility with policy makers, if you've worked with policy makers on the formulation of hypotheses, they'll trust their eyes, they'll trust their instincts and common sense, and they'll know when a picture is telling them something. In this case this picture said, yes, there were clearly more referrals happening, especially in Chicago in substance abuse, after we did the co-location. Likewise the same kind of relationship held for mental health. There was no real debate that something seemed to have changed, and it seemed to be improving the rate at which referrals were made.

We learned something else interesting on domestic violence. And that was that the same relationship held but look what happened in downstate Illinois: the rate remained really flat. And that gave us a pregnant opportunity to mine that data further and find out what was going on in downstate Illinois that was different from what was going on in Chicago.

Field Work

The second technique I want to recommend to you is fieldwork. In Georgia we are working with folks in their Department of Family and Child Services to try to help them with some poor child welfare outcomes they're getting there. We are looking at the substantiated cases of abuse and neglect as a percent of all cases reported. Georgia has 159 counties, which is actually a nice large number if you want to do this kind of quasi-experimental approach. There are counties with 40 percent or greater rates of substantiated abuse among all cases reported. You've also got a number that are at 10 or 15 percent or below. And that doesn't tell you anything specifically except that it tells you where to look for some answers.

We don't know if this difference lies in people's propensities to report or in the quality of the investigations done. Both of these are reasonable and interesting hypotheses. And what it sets us up to do is to send in a team on the ground to poke around in local offices in the counties and develop a more robust hypothesis based on that kind of field work. What we see on the ground will help us formulate a hypothesis that we then can test with the right kind of pilot structure, and then we can test different interventions that we hope will produce a different result.

An Epidemiological Approach

I started out in public health and that's where a lot of my perspective on this comes from. I encourage you to think about a kind of epidemiological approach where you just look for clusters.

For example, looking at the percentage of substantiated abuse or neglect per thousand population plotted against the percentage of children in poverty shows a couple of interesting clusters. Some counties have a very high percentage of poverty, but relatively low substantiated abuse. That suggests that these counties are doing something right, something that we need to learn more about. Other counties have a lower rate of poverty but much higher substantiated abuse. And so one of the things we're proposing to do is again, send a field team in on the ground to look at those counties to review case records and interview case workers using an assessment protocol. We may find something about the different practices or the different characteristics of those counties that might support the development of a testable hypothesis that could perhaps shed some light on this and suggest some promising interventions.

The second cluster that's worth looking at is the counties that have a comparable rate of poverty, but are really all over the board in their rates of substantiated abuse. And I'm dying to know what it is that people are doing differently in different counties. Again we would like to sharpen our hypotheses and identify promising interventions that we can test to see how they might impact the outcomes.
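The cluster-spotting idea described above can be sketched in a few lines of code. Everything here is illustrative: the county names, the rates, and the cutoff values are invented for the example, not Georgia's actual data.

```python
# A minimal sketch of the epidemiological clustering approach described above.
# All county names, rates, and cutoffs below are hypothetical.

counties = [
    # (name, pct_children_in_poverty, substantiated_abuse_per_1000)
    ("County A", 35.0, 2.1),   # high poverty, low abuse: what is working here?
    ("County B", 12.0, 9.8),   # low poverty, high abuse: what is different here?
    ("County C", 20.0, 4.0),
    ("County D", 21.0, 11.5),  # similar poverty to County C, very different abuse rate
]

def flag_clusters(rows, poverty_cut=25.0, abuse_cut=5.0):
    """Partition counties into the two clusters of interest:
    high-poverty/low-abuse and low-poverty/high-abuse."""
    protective = [r[0] for r in rows if r[1] >= poverty_cut and r[2] < abuse_cut]
    at_risk = [r[0] for r in rows if r[1] < poverty_cut and r[2] >= abuse_cut]
    return protective, at_risk

protective, at_risk = flag_clusters(counties)
print("High poverty, low abuse (visit to learn what works):", protective)
print("Low poverty, high abuse (visit to find out why):", at_risk)
```

The indicator work ends where the field work begins: the flagged counties are where a team would go on the ground to review case records and interview case workers.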

This is a process that is pretty easy to engage policy makers in because they want to know, too. And if you lay it out this way, they get "skin in the game" and then getting the resources you need to do pilot testing of interventions becomes a much less daunting proposition.

So I'd encourage you to use your indicators to gain insight and also to engage policy makers in the formulation of reasonable hypotheses suggested by what you can tease out of the indicators. And then work with program folks and policy staff to identify pilot interventions that you can test to respond to those reasonable hypotheses. And you can then close the loop by using indicators to assess whether those pilot interventions had the desired impact on the outcomes you're interested in.

Questions and Answers

Q. Do you have experience using GIS mapping data?

A. We've done a little bit of that and I think it's really useful. It's especially useful for focusing on the mal-distribution of resources. If, for instance, you can plot where community health centers are located and compare that to the incidence of preventable diseases or teenage pregnancy, that's something that would make a mal-distribution of resources glaringly apparent.

Q. I struggle with the language we use to talk about this. My understanding of social indicators is that the language is very broad. When you use "test," "prove," "hypothesis," it becomes confusing to folks without that background.

A. I concur. When I talk with policy makers, I don't ever use the word "hypothesis"; then they think they're in for a discussion about data. I keep the hypothesis in mind, but I don't say that word to them. I talk about "insight into what the right work is." That's the way I like to talk about it, and it's something with greater resonance for people. I use "hypothesis" with people in this room; I'd be very careful using that term outside this room. To borrow a phrase, I encourage you to "think quantitatively and act qualitatively." By which I mean, the indicators are good for zeroing in on a possible hypothesis, but the real knowledge comes from going in on the ground, looking through case records, talking with case workers, and trying to figure out in the real world what seems to be contributing to the disparities.

Q. When you present data like this, do you discriminate between population-based data and service-driven data for policy makers? Second: when I saw that graph on referrals, two questions came to mind: 1) was there a difference in resource availability in terms of who would be referred and 2) when you pop up with early data, what's your response in terms of being prepared to answer those kinds of questions?

A. You kind of have to stay light on your feet. I use whatever works that doesn't run afoul of the credibility issue. Sometimes mixing administrative data and population based data gets you there. Sometimes you can just go with population-based data to establish an insight. If I'm trying to get back to the real world that a policy maker's involved with, I sometimes use administrative data. I kind of mix and match as I need to, but I try not to leave myself vulnerable to criticism or do it in a way that would cost me credibility.

On the referrals point: yeah, we concluded that part of what we saw was the result of fewer providers downstate and also of transportation barriers. But those were things we didn't really know until we looked at it that way. And then we raised the next questions.

Q. Do you share data across departments in your state, and if so, how?

A. I do and I have the scars to prove it. This is a good opportunity to put in a plug for our hosts. The right thing is to say people should share information. I pounded my head against that wall a lot and only moved it imperceptibly. So I finally decided to, like the Rhode Island folks, flow a different way around that rock. I go to the governor's office and others and tell them to provide data to Chapin Hall. What we run up against is that people raise concerns about confidentiality and privacy that are legitimate, but when we look at this hard in Illinois, the 80/20 rule applies: eighty percent of the reasons they don't share information is related to about 20 percent of the reality of what the law requires. The laws aren't nearly as restrictive as our culture and folklore make them out to be. When you dig into the one exception that's provided even in the most restrictive setting (substance abuse data), you can use that information without identifiers to support research. We used Chapin Hall as a repository that would get data with identifiers and then give it back to us without identifiers. And we were then able to use the data in an integrative way that we wouldn't have been able to otherwise.

Q: I've found that some of the confidentiality requirements can be gotten around, but that some of the political people don't want to even be compared across departments.

A: You're absolutely right.

Q: In Hawaii we had the opposite result with respect to integrated offices. We found that if you tell staff you want more referrals, you'll tend to get them. It doesn't really matter if they're integrated or not. But the other factor that enters into it is that you have to do the work on the treatment side. You can get these higher referrals, but then the question is, what percentage of those cases is accepted for treatment? In our case we had to work out arrangements with the managed care providers who provided substance abuse counseling as well as medical treatment, and get the directors of treatment within those plans to agree that yes, they were going to accept those patients and allow them to be treated under the plan so there wasn't a separate cost. So sometimes I think you can get what appear to be the results you're looking for by hypothesizing that if you do this you'll get that. But if you look a little deeper, or try different approaches, you may discover you'll get the same thing without doing it.

A. That's absolutely true that the stuff that gets measured gets done. So you have to be careful about what it is that you're measuring, because people will do the wrong thing. It's really clear that you need to retain your focus on what is the right work. When I know that what we're about is doing the right work and I know that we're going to do the case work behind it, I'm not above using something even if I have to hold my nose a little bit to get a policy maker to say, "Okay, yeah, we should do that." If you're not a person of good conscience or you're not prepared to remain focused on what the right work is, you're absolutely right, it can be dangerous.

Use of Census 2000 and the American Community Survey for Indicators at the State and Local Levels

The principal speaker at the session was Cynthia Taeuber, Program Policy Advisor at the University of Baltimore and the U.S. Bureau of the Census. The session was coordinated by Allen Harden of Chapin Hall. The purpose of the presentation was to give an overview of Census 2000 and the American Community Survey and to review how they can be used to build child and family well-being indicators. Ms. Taeuber's PowerPoint presentation is attached following the text summary, and the slide numbers mentioned in the text refer to that presentation.

Census 2000 and the American Community Survey

The speaker began by comparing and contrasting Census 2000 and the American Community Survey. While the main purpose of Census 2000 is to count the population every 10 years, the main purpose of the American Community Survey is to provide yearly updated information on the characteristics of the population in small geographic areas and also for relatively small population groups in larger geographic regions. The questions on the American Community Survey provide indicators that are similar to those of the Census 2000 long form (described below).

The Census 2000 short form asks seven questions (figure 11 of handout) of every person and housing unit in the U.S. about age, race, Hispanic origin, gender, household relationship, and housing tenure (owned or rented). Field staff determine the characteristics of vacant housing units. Additional long-form questions are asked of every person in a sample of 1 in 6 housing units and of 1 in 6 people living in group quarters (a national average). Population statistics are provided on a range of topics (see figures 4-13 of handout), including marital status, place of birth/citizenship, disability, ancestry, migration, language spoken at home and ability to speak English, school enrollment and educational attainment, grandparents as caregivers, place of work and journey to work, occupation, industry and class of worker, work status in the week before the census or the last year in which the person worked, and income in 1999. Housing statistics based on the long form include the number of rooms and bedrooms, plumbing and kitchen facilities, the age and value of the housing unit, and questions that indicate housing affordability, including the cost and type of utilities, mortgage/rent paid, and taxes and insurance. Results are available at geographic levels including the block (short form information only), block group, census tract, county, metropolitan area, state, and nation.

Ms. Taeuber provided handouts that described options for comparing racial categories from the 1990 and 2000 censuses (see slides 13-15, 18-21) and detailed Census 2000 products that are available currently or in the near future (slides 16-17, 22-37). Information about the products, documentation, and the product release schedule are found on the Census Bureau's website:

Selected helpful sites for Census 2000:

The American Community Survey

The American Community Survey, once the sample is fully implemented in every county (planned to start in 2003, slide 49), will provide annual-average estimates of demographic, housing, social, and economic characteristics updated every year for the nation, all states, and all jurisdictions of 65,000 or more people, such as cities, counties, and metropolitan areas, as well as for large population groups. Statistics for smaller jurisdictions, geographic sub-areas, and smaller population groups will be updated as multi-year averages (3-year averages for areas of 20,000-64,999 people and 5-year averages for areas of less than 20,000 people).

Ms. Taeuber outlined several new opportunities for information that the American Community Survey will provide. First, because the survey is updated every year, it allows measurement of the level and direction of change for small areas and population groups (slides 38-43 of handout) on topics such as unemployment and poverty. Second, migration patterns can be better analyzed through this survey data (slide 44). Also, the American Community Survey will improve the ability to develop performance measures for local programs (slide 45). Finally, informed strategic decision making will be made possible by providing the community context for the assessment of needs and resources (slides 46-47).

The speaker emphasized that the American Community Survey is a bridge between Census 2000 and the future (slide 48). The Census Bureau plans to replace the Census long form with the American Community Survey for the 2010 census. She noted, however, that researchers will need to pay attention to the range of error in the estimates (that is, the confidence intervals - slide 51) because, like the decennial census long form, the data are from a sample of the population. For smaller areas, the sample will be accumulated over multiple years to achieve sufficient sample to approximate the sampling error of the decennial census long form. For example, areas of 20,000 to 64,999 can use data averaged over three years starting in 2006, and every year thereafter. For rural areas and city neighborhoods or population groups of less than 20,000 people, starting in 2008 and every year thereafter, a 5-year accumulation of sample will provide estimates similar to those of the decennial census long form. These averages will be updated every year, so that eventually, it will be possible to measure changes over time for small areas and population groups.
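The pooling rule described above can be expressed as a small lookup. The function name and the code are ours, but the population thresholds and averaging periods come from the presentation.

```python
# Sketch of the ACS multi-year-average rule described in the summary above.
# Thresholds are from the presentation; the function name is illustrative.

def acs_averaging_period(population):
    """Return the number of years of ACS sample that must be pooled
    to publish estimates for an area of the given population size."""
    if population >= 65000:
        return 1   # annual estimates
    elif population >= 20000:
        return 3   # 3-year averages (first available in 2006 under the plan)
    else:
        return 5   # 5-year averages (first available in 2008 under the plan)

for pop in (150000, 40000, 8000):
    print(pop, "->", acs_averaging_period(pop), "year average")
```

Because each average is updated every year, even the smallest areas eventually get a moving series that can show change over time.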

The American Community Survey is currently in its development stage. The Census Bureau plans that, beginning in 2003, the American Community Survey will be implemented in all counties in the United States if Congress allocates the necessary funding (slide 49). The fully implemented survey would include three million addresses (households and group quarters). Data are collected by mail, with follow-up calls and visits from Census Bureau staff if a household does not respond.

Selected Helpful Sites for ACS (slide 50):

Census 2000: Changes between 1990 and 2000: Several changes were made in the Census 2000 that have implications for indicator development. One new question asked about grandparents as caregivers for dependent children (slide 5). Grandparents who had grandchildren living in their households were asked if they are responsible for the basic needs of the children in the household and for what time period. Another change involved a revision of the question on disability status (slide 6). In Census 2000, the question specifically asks about vision or hearing impairments as well as conditions that limit learning or remembering. And, for the first time, the respondent can select one or more races (slide 13). As in past Censuses, there is a separate question on Hispanic origin (slide 12). In Census 2000, this question was asked before the revised question on race.

The additional choices that the revised question on race allowed mean that tabulations of race and Hispanic origin are more complicated for Census 2000 than for past censuses. According to Ms. Taeuber, less than 2 percent of the total US population marked two or more races (slides 18-19), although the percentage is higher among children. There are 126 race and Hispanic origin categories in some Census products (slide 15). Most products, however, show only the counts of those who reported the six single racial groups and "two or more races" (slide 14). The speaker also provided a paper by Sharon M. Lee entitled Using the New Racial Categories in the 2000 Census for further discussion of the implications of the new categories. The paper was funded by the Annie E. Casey Foundation and published by the Population Reference Bureau in March 2000.

Caution on Simplistic Comparisons of Survey Data and Administrative Data

Ms. Taeuber cautioned the audience against overly simplified direct comparisons of survey data, such as the American Community Survey or the decennial census, with administrative data. There are crucial differences in concepts and data collection methods among data sets, so estimates of population characteristics from surveys such as the decennial census and the American Community Survey will differ from administrative counts. For example, Ms. Taeuber headed a study comparing 1990 Census long-form estimates of poor children and poor families in Maryland with counts of AFDC recipients and cases. The most important factor in the differences was sampling error, which is present in all surveys but not in administrative records (this accounted for most of the apparent discrepancy among children except in the city of Baltimore). The other two factors of significance were the undercounting of people in the 1990 census and differences in the definition and reporting of "income" between the two data sets.
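As a generic illustration of why sample-based estimates carry a margin of error while administrative counts do not, here is the textbook formula for the sampling error of a proportion under a simple random sample. The census long form and the American Community Survey use more complex designs, so their published errors are computed differently; the numbers below are invented.

```python
# Illustrative only: a rough 90-percent margin of error for an estimated
# proportion from a simple random sample. The long form and ACS use more
# complex sample designs, so real published errors differ.
from math import sqrt

def margin_of_error(p, n, z=1.645):
    """Approximate margin of error for a sample proportion p at sample size n."""
    return z * sqrt(p * (1 - p) / n)

# A hypothetical 15-percent child-poverty estimate from 400 sampled households:
p, n = 0.15, 400
moe = margin_of_error(p, n)
print(f"estimate {p:.0%} +/- {moe:.1%}")  # estimate 15% +/- 2.9%
```

An administrative caseload count has no such sampling error, which is one reason a survey estimate and a program count of "poor children" will not match even when both are accurate.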

Challenges the Census Bureau Faces

The remainder of the discussion focused on some of the challenges that the Census Bureau faces. Congress approves the questions on the decennial census and the American Community Survey, and it has approved only those questions mandated or required by federal legislation or court cases. That presents considerable challenges to adding new questions to the American Community Survey or the next census. If data are collected but not presented in a Census product that meets a researcher's needs, it is possible to request special tabulations of Census data; the speaker cautioned that this usually requires considerable time and money (slide 35). There are other alternatives, such as the Public Use Microdata files, if the limitations on geographic area and sample size are not a problem (slide 33). The speaker encouraged researchers to report their needs to the Census Bureau so they can be considered for future American Community Survey or Census products (slides 67-68).

Slide 1

Census 2000 and the American Community Survey

Slide 2

Meeting your needs for Indicators of Child and Family Well-Being

Slide 3

Child and Family Welfare Indicators are similar

Slide 4

Community, Neighborhood

Slide 5

Grandparents As Caregivers, New Question

Slide 6

Disability-Revised Question

Slide 7

Community, Neighborhood

Slide 8

Community, Neighborhood

Slide 9

Community, Neighborhood

Slide 10

The American Community Survey: A New Way to Collect Long Form Data

Slide 11

Census 2000 Short form and The American Community Survey

Slide 12

Question on Hispanic Origin

Changes in Census 2000 question from 1990-same question on American Community Survey:  "Hispanic or Latino" asked before race
Every respondent to Census 2000 was asked to respond to the Hispanic origin question.
Those who were not of Hispanic origin marked the box "No, not Spanish/ Hispanic/Latino."
People who were of Hispanic origin marked the box indicating the specific group they belong to: Mexican, Puerto Rican, Cuban, or other Spanish, Hispanic, or Latino, such as Spanish, Honduran, or Venezuelan.
People of Hispanic origin may be of any race.

Slide 13

Question On Race

Changes in Census 2000 question from 1990-same question on American Community Survey: Respondents may select one or more races
The Asian and Pacific Islander category was split into two: Asian, and Native Hawaiian and Other Pacific Islander

Slide 14

Mutually Exclusive Tabulation Categories for Race and Hispanic Origin

For the first time ever, respondents to the decennial census were allowed to mark more than one race category. Race tabulations include six "Alone" categories:
White
Black or African American
American Indian or Alaska Native
Asian
Native Hawaiian and Other Pacific Islander, and
Some other race

There are also 57 possibilities of "Two or more races."
15 combinations of 2 races
20 combinations of 3 races
15 combinations of 4 races
6 combinations of 5 races
1 combination of 6 races

The race question will also supply information on 36 American Indian groups, 6 Native Alaskan groups, 17 Asian groups, and 13 Pacific Islander groups.
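The counts on this slide are simply the number of ways of choosing 2 through 6 of the six single-race categories, which a quick check confirms; together with the six "Alone" categories, they also account for the 63 race categories in the redistricting file.

```python
# Verifying the combination counts on slide 14 with math.comb.
from math import comb

combos = {k: comb(6, k) for k in range(2, 7)}
print(combos)                 # {2: 15, 3: 20, 4: 15, 5: 6, 6: 1}
print(sum(combos.values()))   # 57 "two or more races" combinations
print(6 + sum(combos.values()))  # 63 race categories in total
```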

Slide 15

Comparing Racial Categories-1990 and 2000

Slide 16


Slide 17

First Census 2000 Product: Redistricting file

The Redistricting Summary File was the first Census 2000 product released. It contains the data from the Census short form that are needed for redistricting. Redistricting is the process of revising the geographic boundaries within a state from which people are elected to the U.S. House of Representatives, state legislatures, county and city political subdivisions, school boards, and other voting districts.
This file is available on the Internet and CD-ROM.
The statistical summaries contain population totals and the population 18 years and older. You can subtract to obtain counts of children under 18 for every block in the country. There are summaries for geographic areas, including states, counties, voting districts, county subdivisions, American Indian/Alaska Native/Native Hawaiian areas, census tracts, block groups, and blocks.
There are summaries by race, Hispanic origin, and voting age for the total population, but not for detailed age groups, for geographic areas down to blocks. Because Census 2000 allowed respondents to check one or more race categories, the race tabulations are very detailed.
This file contains block level data showing 63 race categories and Hispanic origin.

Slide 18

Population Distribution by Race

In Census 2000, nearly all respondents reported only one race. White alone accounted for 75 percent of all people living in the United States. African American alone represented 12 percent. American Indian and Alaska Native alone represented just under 1 percent of the total. Approximately 4 percent of respondents indicated Asian only. The smallest race group was the Native Hawaiian and Other Pacific Islander alone population, which represented 0.1 percent of the population.

Almost 6 percent of all respondents indicated that they were Some other race. And about 2 percent of all respondents reported two or more races.

Slide 19

Two or More Race Combination-National Results

Of the nearly 7 million respondents in this category, 93 percent reported exactly two races.
16 percent were White AND American Indian and Alaska Native.
13 percent were White AND Asian.
11 percent were White AND African American.
Of all respondents reporting exactly two races, 47 percent included some other race as one of the two races.

Slide 20

Percentage of Hispanic Origin

About 13 percent of the population (35 million people) are Latino, according to Census 2000.
About 59 percent of Hispanics were of Mexican origin.
The next largest group was Puerto Rican, accounting for about 10 percent of all Hispanics.
Cubans were the third largest group, making up just 4 percent of the total Hispanic population.
Half of all Hispanics live in just two states: California and Texas.
There are seven states with Hispanic populations of more than one million: California, Texas, New York, Florida, Illinois, Arizona, & New Jersey
New Mexico had the highest share of residents who were Hispanic, 42 percent.
Hispanics may be of any race.

Slide 21

Race reported by the Hispanic Origin Population: 2000

About 48 percent of the Latino population reported that they were White alone.
Forty-two percent said that they were Some other race alone. Only 2 percent reported Black only. About 1 percent were American Indian and Alaska Native only. About 6 percent were two or more races.

Slide 22

Data Products: Traditional and New for Census 2000

Slide 23

Printed Reports: Demographic Profiles

Slide 24

Census 2000 Product Available Now: Demographic profiles

Slide 25

Printed Reports

Slide 26

Electronic Summary Files

Slide 27

Summary File 1: 100 Percent Characteristics

Slide 28

Summary File 2: 100 Percent Characteristics

Slide 29

Summary File 3: Sample (Long-Form) Characteristics

Slide 30

Summary File 4: Detailed Long-Form Characteristics

Slide 31

Quick Tables

Slide 32

Geographic Comparison Tables

Slide 33

Tabulations Defined by Data User

Slide 34

Tabulations Defined by Data User

Slide 35

Tabulations Defined by Data User

Slide 36

Release of Census 2000 Data Products

Slide 37

Cost of Data Products

Slide 38

The American Community Survey: What Do You Get?

Slide 39

The American Community Survey: Helps to Fill Information Gaps

Slide 40

The American Community Survey: Data Products Updated Every Year

Slide 41

The American Community Survey: Types of Products

Slide 42

The American Community Survey: What Are the New Opportunities?

Slide 43

Percent Change of Children in Poverty: 1990-1996

Slide 44

The American Community Survey: Migration Patterns

Slide 45

The American Community Survey: Performance Measures

Slide 46

The American Community Survey: Informed Strategic Decisionmaking

Slide 47

Percent Change of Single Parents: 1990-1996

Slide 48

The American Community Survey: 1999-2000

Slide 49

Future Stages in the American Community Survey Plan

Slide 50

Where Do You Find Data Products?

Slide 51

Summary Tabulations Show the Confidence Interval

Slide 52

Tell the Census Bureau What You Need

Slide 53

Census 2000 Supplementary Survey: What It Means to You

Slide 54

Census 2000 Supplementary Survey Briefs

Slide 55

Census Long Form Transitional Database: 2001, 2002

Slide 56

2003 and Beyond: Full Implementation of the American Community Survey

Slide 57

The American Community Survey: Planned Data Release Dates

Slide 58

Data Issues

Slide 59

Issue: Multi-year Averages

Slide 60

Issue: Data Quality and Disclosure Avoidance

Slide 61

Issue: Geographic Boundaries

Slide 62

The American Community Survey: Examples of Federal Uses: Welfare

Slide 63

The American Community Survey: Examples of Federal Uses: Education

Slide 64

Data Partners in Springfield, Massachusetts

Slide 65

New Products Planned: American Community Survey and Census 2000 Supplementary Survey

Slide 66

Draft Plan for Tables of Characteristics Repeated by Race and Hispanic Origin

Slide 67

For More Information on the American Community Survey

Slide 68

Information on Census 2000

Legal and Ethical Issues in Data Linking

The moderator was Robert Goerge of Chapin Hall. The session focused on the ethical and attendant legal barriers to sharing data. The first speaker was Loretta Fuddy of Hawaii. Slides from a Powerpoint used by Ms. Fuddy follow the text.

Ms. Fuddy discussed some of the difficulties of sharing data, using as an example Hawaii's long-term plan to link birth certificate data to other data as part of a perinatal data system. Barriers to the creation of the system include these:

  • Because birth certificates are legal documents, they cannot be released without department of health approval
  • Certain sensitive data are protected by federal law
  • The Health Insurance Portability & Accountability Act (HIPAA) creates its own privacy challenges

Participants from other states, and Chapin Hall staff members, noted similar difficulties with HIPAA data, and, in some states, with TANF and Medicaid data.

Debbykay Peterson and John Oswald sketched Minnesota's data linkage initiative. Minnesota has an extremely comprehensive privacy law. The state attorney general, in particular, favors limiting use of health data in response, in part, to the activities of health maintenance organizations. Key questions include:

  • Is removing identifiers from linked data enough, or must other information, such as race and county, be removed as well? In some states with small populations, identifying race and county of residence might make individuals identifiable.
  • Is passive consent sufficient or is active consent required?

Strategies for protecting privacy in shared data include:

  • One-way sharing--data go to a statistical agency but do not return
  • Third-party module linking
  • Data inflation or salting--the addition of confounding cases

A memorandum of understanding among users can be a powerful protector of all parties.
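The one-way/third-party pattern listed above (and the Chapin Hall repository arrangement described in the earlier Q&A) can be sketched roughly as follows. The record layouts, field names, linkage key, and data here are all hypothetical.

```python
# A minimal sketch of third-party linking: a trusted intermediary receives
# records with identifiers, links them, and returns the linked file with
# identifiers replaced by arbitrary study IDs. Field names and data are
# hypothetical, not any state's actual layout.
import uuid

def link_and_deidentify(health_records, welfare_records):
    """Join two agencies' records on a shared identifier, then drop it."""
    welfare_by_ssn = {r["ssn"]: r for r in welfare_records}
    linked = []
    for rec in health_records:
        match = welfare_by_ssn.get(rec["ssn"])
        if match:
            linked.append({
                "study_id": uuid.uuid4().hex,   # replaces all identifiers
                "diagnosis": rec["diagnosis"],
                "tanf_months": match["tanf_months"],
            })
    return linked

health = [{"ssn": "111", "diagnosis": "asthma"}]
welfare = [{"ssn": "111", "tanf_months": 7}]
out = link_and_deidentify(health, welfare)
print(out[0]["tanf_months"], "ssn" in out[0])  # 7 False
```

The agencies only ever receive the de-identified output, so the linked file can support research without exposing individual identities.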

Slide 1.

Building a Perinatal Data System - Legal & Ethical Issues

Slide 2

The MCH Program in Hawaii

Slide 3

Advantages of MCH Program Data

Slide 4

Advantages of MCH Program Data

Slide 5

Disadvantages of MCH Program Data

Slide 6

Perinatal Data Issues

Slide 7

Data Utilization & Enhancement

Slide 8.

Hawaii's Long-term Perinatal Data Goal

Slide 9

Future Plans

Slide 10

Building a Perinatal Data System requires a collaborative effort

Slide 11


Slide 12

Hawaii Health Information Corp.

Slide 13

Legal Barriers to Data Linkage

Slide 14

Health Insurance Portability & Accountability Act-1996

Slide 15

Who Is Impacted

Slide 16


Slide 17

Patient Rights

Slide 18

Ethical Considerations

Slide 19

Open Discussion

How to Train Community Partners to Use Data and How to Identify and Deal With Pitfalls

This session was coordinated by Bong Joo Lee of Chapin Hall. The principal speaker was David Murphey of the Vermont Agency of Human Services. Representatives from Utah, Georgia, New York, and Vermont talked about how they have addressed the issue in each state, then led a discussion on some of the common key issues. In introducing the session, Murphey said that all participating states were behind the idea of a community curriculum for data, although perspectives on what that meant varied. Murphey also pointed to key issues that states have identified and that he expected to be discussed, such as the small numbers that arise when data are taken down to the community level, program outcomes versus community outcomes, and issues around the idea of making comparisons.


Utah

Terry Haven from the child advocacy organization Utah Children spoke first. Utah Children is the state's Kids Count grantee. They gather community indicators by zip code, working with the Department of Health, and use the data to advocate at the state level. Utah is very conservative, and decisions are strongly influenced by conservative religious beliefs. Within the state, powerful conservative groups have strong lobbying organizations. Communities need data so that they can advocate on their own behalf.

Utah Children runs an Advocacy Academy for nonaffiliated community people to attend. It is a three-day training workshop, free of charge. Transportation, meals, and accommodations are paid for. Participants learn to do things like deal with the media and organize at the grassroots level using Kids Count data. They are required to do at least one data presentation in their community and one community-based organizing advocacy effort in their community. Among Utah's special circumstances is that the percentage of kids in the population is the highest in the U.S., so even a relatively small percentage of children in adverse circumstances is still a large number. It is critical to present these data to the layperson so that people understand that small percentages may mask large numbers.


Georgia

Rebekah Hudgins from Family Connection presented. Family Connection is a statewide network of community collaboratives that was started with support from Pew, Casey, and other foundations. Each of the 159 counties in Georgia now has a collaborative made up of people from different areas who work to make changes in decision making, service integration, and ultimately the well-being of children and families. There are 26 benchmarks that the collaboratives are working toward. Statewide, data have been collected on 19 of these benchmarks. All of the counties already have available to them, on a web site, county-level data related to all the state benchmarks.

State and federal money support a state-level infrastructure that provides technical assistance (TA) and supports work at the county level. In 1991, 14 counties were involved; in 1996, 40; this year is the first in which all 159 counties are involved. All counties receive the same level of state resources, and the resources available to support this work have stayed constant even as the number of collaboratives has grown. TA teams were created to serve targeted purposes, such as evaluation or planning. Twelve regional consultants act as liaisons between the counties and the state, helping counties identify their TA needs and structuring TA teams to meet those needs.

The teams began with a state-level approach and, drawing on several consultants, developed a handbook that gives basic background information on a variety of topics, such as defining evaluation, how to use evaluation, and how to use data, along with sample data collection instruments. Regional workshops were also available on how to use the handbook, how to use the web site, and how to provide feedback through the self-assessment, a tool completed by each county every fiscal year.

New York

The New York Council on Children and Families is a state agency that plays a coordinating role only. The Council thinks of itself as a membership agency, the members being the heads of the relevant state agencies.

New York has not yet done any training on how to use data but expects to. The Council is developing a curriculum with the goal of enhancing the ability of state, regional, and local planners to use health and well-being indicator data effectively to guide health education. It also has a web site, the Kids Well-being Indicators Clearinghouse (KWIC), that will soon give members access to Kids Count data. In designing the web site, the Council asked data users and technology people what would be useful; the site includes information on how to prepare communities to use data and on how much data is needed to support good decisions.

The training session, as currently envisioned, would take place over one to two days, depending on the amount of technical information required, and would be in a traditional format and led by an instructor. The state has used both technical and community advisory groups to help design this training and curriculum.

This activity has run into a number of challenges in curriculum design, in logistics, and in piecing together funding from members. The Council has tried to address several troublesome issues, including the difficulty of moving from objectives to measures and of helping people understand why data are being collected and how they can be used.


Vermont

Vermont is similar to Georgia in that it has twelve regional partnerships. Each partnership receives $1,600 per year to buy training and TA from a menu of options. The menu approach allows people to get information when they need it.

Recurring Themes Across States

Language. It is important to agree on a common language within the state (for example, how to talk about outcomes vs. results vs. indicators).

Accessing available data. Communities often don't know what is already available.

Collecting new data. States need to develop skills in different data collection methods.

Interpreting data. States need to know what questions to ask and how to recognize/interpret less obvious factors. For example, if in-hospital data shows fewer injuries result in death, is this because there are fewer injuries, or because of managed care and better outpatient care?

Making comparisons. Comparisons of the same data element over time are the most valid; comparisons with peer groups or like communities are more difficult, and people who work with indicators may need help identifying those communities.

Statistical significance testing. Those presenting data should always provide confidence intervals and guidance on using and understanding the data, and should temper the statistical information with explanations of practical significance and limitations.
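One concrete way to follow this advice for small community-level counts is to report a confidence interval alongside the rate. A minimal sketch, using the standard Wilson score interval for a proportion; the counts here are hypothetical:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a proportion.

    More reliable than the simple normal approximation when counts
    are small, as with indicators computed for small communities.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical: 3 events out of 40 cases in a small community.
lo, hi = wilson_interval(3, 40)
print(f"rate 7.5%, 95% CI {lo:.1%} to {hi:.1%}")
```

The interval here runs from roughly 2.6 percent to 19.9 percent, which makes plain how little a single year's rate for a small community can support on its own.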

The reason for having indicators is to inspire change. It is important to choose priorities, to know what works, and to actually tackle the problem. The resources necessary to address a problem are more than just money; political will is important too.

Use of Indicators to Track Welfare Reform

The session coordinator was Larry Aber of the National Center for Children in Poverty (NCCP) at Columbia University. Speakers included Aber; Catherine Walsh, Program Director of Rhode Island Kids Count; and Martha Moorehouse of ASPE.

Aber used two Powerpoint presentations, one on the work of the Research Forum on Children, Families, and the New Federalism, which is headed by Barbara Blum, and the other on indicators of social exclusion among children. The slides follow the text of this session.

Larry Aber

The Research Forum on Children, Families, and the New Federalism, housed at the NCCP, is a repository of information on outcomes for families under TANF. A goal of centralizing information at the Forum is to promote syntheses of information that inform midcourse corrections. Aber expects the debate over reauthorization of TANF to emphasize two of its goals: preventing and reducing nonmarital pregnancy, and encouraging the formation and maintenance of two-parent families. There is not much research in these areas.

Caseload Dynamics

There are many possible explanations for the decline in welfare caseloads under TANF. Movement into the work force, departures due to sanctions, reduced entries to welfare, and the strong economy could all help account for the drop. Welfare waiver experiments allowed some of these factors to be studied separately; nonexperimental administrative and survey data do not.

Findings about the way welfare programs have changed under TANF so far:

  • Cash assistance, use of food stamps, and use of Medicaid have diminished.
  • The legislative intent to promote job entry and work seems to be achieved. Many issues related to job retention and the adequacy of income remain.
  • The decline in nonmarital pregnancy and divorce rates, which began prior to the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), continues. It is important not to credit TANF solely for these drops.

Little is known about effects on children, immigrants, or other populations that were outside the studies' principal foci. We don't know what will happen under a soft economy.

Research studies from the 1980s indicated that modest investments correlate with modest improvements and that caseload dynamics are related to education and employment histories. Three early and sobering controlled experiments (New Chance, TPD, and LEAP(1)) showed the difficulty of improving education, employment, income, and childbearing outcomes for young mothers. The New Chance study showed, in part, the prevalence and severity of physical and developmental problems among children of teenage mothers.

Waiver studies in the 1990s have documented increases in employment and reductions in caseloads at sites where time limits and sanctions were imposed. Incentive programs can reduce caseloads, improve employment and earnings, and reduce poverty at the same time; some have effects on child development as well, highlighting a key issue: how do we improve income when people make the transition from welfare to work? Two research initiatives, the Project on Devolution and Urban Change and the Three-City Study, are expected to yield information from multiple sources: surveys, administrative data, and ethnographic research. As reauthorization approaches, there is an opportunity to apply what we have learned to national and state policy and to local practice. From Aber's perspective, most of the compelling data come from the experimental welfare waiver studies.

A Study by the National Center's Social Science Research Unit

Aber and his colleagues have used nonexperimental trend data to study the effects of welfare reform on family economic well-being at the national level.

Research Questions

  • What has been the effect of TANF (not pre-TANF waivers) on the economic well-being of children and how has that effect differed from effects associated with waivers?
  • How much of the effect of TANF is dependent on parent characteristics?
  • How do different parts of the TANF package affect economic well-being?

Study Data

The NCCP team has taken advantage of the Current Population Survey design, which reinterviews half the sample at 12 months, allowing analysis of short-term, one-year changes in income that can be related to policy changes. A central methodological difficulty is finding some of the respondents for reinterview: upwards of 35 percent may not be locatable. This creates selection bias and poses the question: are these 35 percent different, and if so, how? The NCCP team analyzed that bias and found that those not locatable for the second interview are more likely to be Hispanic, high school dropouts, on welfare or below the poverty line, and in mother-only families. The team believes that its work to compensate for this selection bias has yielded a dataset suitable for analyses that can inform policy.
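A generic way to compensate for this kind of panel attrition is inverse-probability weighting: within each baseline subgroup, the respondents who were located are weighted up by the inverse of the subgroup's relocation rate, so that the reinterviewed sample again mirrors the baseline sample. The sketch below uses invented numbers and illustrates the general technique only, not the NCCP team's actual procedure:

```python
# Invented toy data; illustrates the general technique only.
from collections import Counter

# (education group, located at reinterview?) for a toy baseline sample
sample = (
    [("dropout", True)] * 50 + [("dropout", False)] * 50    # 50% relocated
    + [("hs_grad", True)] * 80 + [("hs_grad", False)] * 20  # 80% relocated
)

baseline = Counter(group for group, _ in sample)
located = Counter(group for group, found in sample if found)

# Weight for each located case = baseline size / located size in its cell
weights = {g: baseline[g] / located[g] for g in baseline}

# Weighted located cases reproduce the baseline subgroup sizes
for g in baseline:
    assert located[g] * weights[g] == baseline[g]

print(weights)  # dropouts get weight 2.0, graduates 1.25
```

In practice the response probabilities are usually estimated from a richer model (for example, a logistic regression on the baseline characteristics listed above), but the principle is the same.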


In examining changes in families' income-to-needs ratios at different periods from 1988 to 1999, the two major policy variables were:

  • Were the respondents in a state implementing a welfare waiver?
  • Were the respondents in a state implementing TANF?

They found that TANF and waivers both affected people with less than a high school education differently than they affected those with a high school education, and waivers seemed to have a bigger effect on family income than TANF did. Aber suggested that one reason for this difference is that, under waivers, states customized their programs to their particular circumstances. He thinks these findings confirm the prediction that TANF has advantaged the relatively advantaged among the poor and disadvantaged the most disadvantaged among the poor. The team also did not find that provisions such as sanctions, terminations, and family caps had an effect on family income; what did have an effect was earnings disregards. Aber suggested, in response to a question, that health or state population survey data might allow similar analyses within states.
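The structure of such an analysis can be sketched as a regression of the one-year change in a family's income-to-needs ratio on policy indicators and their interactions with education. The data below are entirely synthetic and the model deliberately minimal; the NCCP models are far richer:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Synthetic covariates: policy regime in the state, education of parent
waiver = rng.integers(0, 2, n)                # state had a welfare waiver
tanf = rng.integers(0, 2, n) * (1 - waiver)   # state had implemented TANF
dropout = rng.integers(0, 2, n)               # less than high school education

# Synthetic outcome: one-year change in income-to-needs ratio, constructed
# so that policy effects differ by education (as the findings describe)
dy = (0.10 * waiver
      - 0.05 * tanf * dropout
      + 0.03 * tanf * (1 - dropout)
      + rng.normal(0, 0.5, n))

# Design matrix: intercept, main effects, policy-by-education interactions
X = np.column_stack([np.ones(n), waiver, tanf, dropout,
                     waiver * dropout, tanf * dropout])
beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
print(beta)  # coefficients recover the built-in effects, up to noise
```

The interaction terms are what let the analyst say that a policy affected dropouts differently from graduates; the real analysis would add many more controls and handle the attrition weighting discussed earlier.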

Catherine Walsh

At the beginning of her presentation, Catherine Walsh noted the legislative intent of Rhode Island's Family Independence Act (FIA) and identified indicators, and supporting data, that reflect those intentions. Those intentions, or expected outcomes, for FIA are:

  • Increase in family income through employment
  • Gradual decrease in the level of cash assistance to employed families
  • Gradual decrease in state expenditures for cash assistance for all families
  • Reinvestment of cash assistance savings from family earnings into health care, child care, education, literacy, and skills training
  • Enhancement of family cohesion and stable living environment for all children

The FIA goals do not include caseload reduction. Although Rhode Island has shown smaller caseload reductions than some other states, they feel this is due to the income disregard provision of the Act: the cost of cases to the state has declined while family incomes have risen. Walsh showed how indicators are presented in context in the Rhode Island Kids Count Factbook; it is key that the reader understand why each indicator is important. As an example, she discussed the way they group and report indicators for the five Rhode Island communities in which 25 percent or more of children live in poverty (the "core cities"), so that their circumstances aren't overlooked.

Rhode Island uses welfare reform indicators to describe child, family, and community conditions; inform planning and policymaking at the state and local level; measure progress in improving child and family outcomes; improve programs; and monitor the impact of policy choices. She said that the first point, to describe the conditions, is very important. Rhode Island tracks all indicators at the local community level. Management of data from these services is easy because a single state agency, the Department of Human Services, is responsible for implementing and overseeing the state's Family Independence Program (FIP). Activities under the program include case management, cash assistance, childcare subsidies, and health care coverage.

Three principles guide Rhode Island's welfare reform efforts: 1) Poor children should be no worse off than they were before welfare reform; 2) Adults should be able to access education and training if they need it before they are required to work; and 3) The program of cash assistance and supports for families should be cost neutral during the first two years (1997 and 1998).

Indicators and Outcomes

Progress toward the outcomes for FIA specified above can be tracked by a number of indicators. Walsh identified some of these.

Increase in family income through employment.

  • Average wage at job entry
  • Wages of "FIP leavers" versus "on FIP and working"
  • Job retention rates
  • Increases in earnings over time
  • Access to work, education, and training for public housing residents
  • Work participation rates

Gradual decrease in the level of cash assistance to employed families.

  • Percentage of cases with adults working
  • Average monthly cash benefit cost per case
  • Percentage of newly eligible cases working
  • Percentage of FIP cases with no employment history while receiving cash assistance

Gradual decrease in state expenditures for cash assistance for all families.

  • Annual federal and state expenditures for cash assistance

Reinvestment of cash assistance savings from family earnings into health care, child care, education, literacy, and skills training.

  • Annual federal and state investments in health care
  • Number of children in low-income working families and FIP families enrolled in Rite Care
  • Annual federal and state investments in child care
  • Number of children in low-income working families and FIP families receiving child care
  • Number of available child care slots

Enhancement of family cohesion and stable living environment for all children

  • Number of minor teen heads of household enrolled in FIP
  • Percentage of FIP caseload in two-parent households

Regarding the last goal, enhancement of family cohesion, Walsh wants to place child development concerns centrally in the area of influence for this indicator so that they are part of the welfare conversation at the policy level. She wrapped up her presentation by pointing out some areas for indicator development and data gathering, including indicators on how welfare reform:

  • Assists families in obtaining sustainable jobs that move them out of poverty and into economic self sufficiency
  • Supports the healthy development of children
  • Provides the range of supports and services needed by low-income families

She then provided a brief list of resources that offer models for indicator development and possible questions for statewide surveys, including the National Survey of America's Families and America's Children: Key National Indicators of Well-Being.

Martha Moorehouse

Martha Moorehouse began by noting that the indicators project began because ASPE wanted to ensure that, in the tracking of welfare reform, children were part of the story, and not just children in welfare families but other low-income children as well. Indicators need to be in place to capture the experiences of both groups and to show how low-income children are faring generally, with welfare reform as backdrop rather than foreground.

Her sense from reviewing available presentations from studies on welfare reform is that including children is now the norm, but she has not seen much on children at the state level. Moorehouse said that analyses of data from experimental studies of welfare reform were presented at the New World of Welfare Conference, and some of those papers are available at the website of the University of Michigan's Gerald R. Ford School of Public Policy. She went on to comment on a particular paper from that conference, "Welfare Reform and Child Well-Being" by Lindsay Chase-Lansdale and Greg Duncan, particularly a section focused on achievement by children and adolescents in participating families. Regarding children, welfare experiments that provided families with generous earnings supplements yielded higher achievement among children than those with no earnings supplements. As a corollary, she pointed out that the Minnesota experiment, which showed some of the higher rates of achievement among children, did not continue the generous earnings supplement under TANF. Among adolescents, more problem indicators seemed to be emerging. Moorehouse concluded by saying that, so far, welfare reform is not doing for children what it should, but that there seems to be new interest in what needs to be done for kids in welfare families and in the broader low-income population. States should pursue data that will allow them to decide whether other interventions are in order.


1.  The New Chance demonstration project provided education, training, and other services to women who had children as teenagers and dropped out of high school; it was intended to increase their long-term self-sufficiency and well-being. It was evaluated by the Manpower Demonstration Research Corporation (MDRC), which released its final report in 1997. Ohio's Learning, Earning, and Parenting Program (LEAP) used financial incentives and penalties, combined with case management and support services, to promote school attendance by pregnant and parenting teenagers on welfare; MDRC evaluated LEAP, issuing a report in 1997. The Teen Parent Demonstration (TPD) was a case management and mandatory education, job training, or employment program for first-time teen parents (or other pregnant teens in some jurisdictions) on welfare; it was evaluated by Mathematica Policy Research, which issued a report in 1998.

The State of Welfare Reform Research

[Slides from Aber's presentation "The State of Welfare Reform Research" (48 slides; images not reproduced here). Section titles: Research Forum on Children, Families, and the New Federalism; Research Forum Purposes; Welfare Research Perspectives: Past, Present, and Future, 2000 Edition; Changes in Welfare Program; Reasons for Changes in the Welfare Caseload; The Extent to Which Welfare Programs Have Changed; PRWORA Reauthorization Issues; Research Findings: 1970s to Early 1990s; Recent and Emerging Research Findings; Questions Still Unanswered; Research Methodology Issues.]


Social Exclusion of Children in the U.S.: Identifying Potential Indicators

[Slides from Aber's presentation "Social Exclusion of Children in the U.S.: Identifying Potential Indicators" (32 slides; images not reproduced here). Slide titles: Potential Contributions of the Concept of the Social Exclusion of Children; In Identifying Indicators of Social Exclusion, Our Goals Are To; Exclusion of Children vs. Exclusion of Adults; Generation of Potential Indicators; Risk Factors for the Social Exclusion of Children; Eight Domains of the Social Exclusion of Children; Basic Living; Family Economic Participation; Public Space; Social Participation; Subjective Experience of Social Exclusion; Three Suggested Sub-Domains of the Social Exclusion of Children; How Can Indicators of the Social Exclusion of Children Be Used and Improved?; Validity Studies; Development and Use of Core Set of Indicators; The Most Important Uses of Indicators of the Social Exclusion of Children; Future Directions; Food for Thought; Components of Socioeconomic Disadvantage as a Function of a Family's 1998 Income-to-Needs Ratio in the ECLS-K; Socioeconomic Disadvantage as a Cross-Sectional...; Same Model with Unstandardized Coefficients.]

International Indicators Update

The presenters were Robert Goerge and Mairéad Reidy of Chapin Hall, Steve Heasley of the West Virginia Governor's Cabinet on Children and Families, and Larry Aber of the National Center for Children in Poverty (NCCP) at Columbia University. The purpose of the session was to update the states on the international project "Measuring and Monitoring Children's Well-Being."

Robert Goerge

Robert Goerge introduced the session. He mentioned a new book from Kluwer, Measuring and Monitoring Children's Well-Being, that he wrote with Asher Ben-Arieh and others. Ben-Arieh, of Israel's National Council for the Child, is a leader in the international effort. He addressed the indicators group at an earlier meeting.

Steve Heasley

At the last meeting, representatives of five states--Vermont, New York, Georgia, West Virginia, and Minnesota--met with Ben-Arieh. These states continue to be interested in working with the international project, but lack the time to do so immediately.

Robert Goerge

Goerge went on to say that the international project has continued to refine what it wants to examine and to raise money. In February, representatives of Chapin Hall, NCCP, and the National Council for the Child met with European researchers in Vienna. Attendees agreed that Chapin Hall and NCCP would lead the U.S. portion of the international effort. They also decided that Ben-Arieh would conduct a survey in Israel attempting to measure some of the agreed-upon indicators, and that the German Youth Institute would try to collect data on the relevant indicators as part of its large survey of youth slated for the following year. Other data collections and pilot tests are being sought.

Mairéad Reidy

Reidy summarized the work of the Vienna meeting. In general,

  • The initial focus will be on children 6 to 14 years of age, to be extended later to age 18
  • Data from the children's perspective will be preferred
  • Data will be eligible for consideration if at least one domain is explored
  • Data at all levels (city, county, etc.) are welcome
  • Longitudinal commitment is desirable but not mandatory
  • Data will be owned by original researchers but will be available through a clearinghouse mechanism to others for international and cross-site comparisons

The international group seeks valid measures for indicators in five domains (see below). Of particular value are measures that have been validated across cultures. They anticipate the need to innovate and revise their protocol in response to the measures and research programs they uncover.

The Five Domains

Reidy sketched the five domains, the potential indicators that might provide information within those domains, and other conclusions regarding them. These are presented in her Powerpoint presentation, which follows. Reidy said that much work has been done already in different countries relevant to some domains. For example, the personal life domain, particularly the academic skills and resources section, figures into a lot of international research on literacy, numeracy, technical knowledge, and general knowledge. She cited a number of relevant studies, including the Definition and Selection of Competencies Project, now underway, that aims to identify those competencies needed for individuals to lead a successful and responsible life.


Reidy noted some of the challenges facing the project. These included the development of proxy measures, ensuring that the relevant information can be obtained affordably, and getting child-specific context for indicators. Reidy stressed that the Vienna meeting forced participants to think about how they can find new measures and definitions. As an example, she said that measures of literacy might need to be expanded to include media competency or technical literacy.

[Slides from Reidy's presentation "Vienna Planning Meeting of the Multi-National Projects: Measuring and Monitoring Children's Well-Being, February 9-11, 2001" (7 slides; images not reproduced here). Slide titles: General Resolutions Regarding the Project; Next Steps: Protocol Development; Final List of Indicators (continued over slides 3-7).]

Child Labor

In response to a question on how the international group would track child labor, Reidy replied that child labor is part of the economic contributions and resources domain. Aber added that the issue, although important, did not get special attention from those working on the project, in part because they tend to come from countries with advanced economies. He went on to say that UNICEF is very concerned about this issue. Goerge pointed out that the group was formed in part to do what UNICEF is not doing, that is, to look at children in developed nations.

Zero to Five Years

When asked why the population of interest begins with six-year-olds rather than those younger, Goerge said that they wanted information from the children themselves and felt that it might be difficult to get information from children younger than six.

Sense of Connection with School and Family

A questioner wanted to know if the international work would build on the findings from an adolescent health study in the U.S. that indicated the importance of a sense of connection with school and family. Aber replied that this idea, although not part of current plans, was welcome.

Are Other Nations Using Indicators in Policy?

In response to a question about international appetites for the use of indicators in policy, Aber said his sense was that Europe is behind the U.S. in measuring things and ahead of the U.S. in doing things. Reidy added that the European Community has put a lot of resources into collecting data across countries and that there is a lot of publicity when those data come out. Aber and Reidy discussed the availability of data on populations in Ireland and the U.K.

Government and Non-government Involvement

In answer to a question about the involvement of government researchers, Aber said that any chance of bringing this effort to scale would require national data collections. The questioner asked if the European partners included government staff members. He was told, "No," but that the effort aimed to develop products compelling to governments.

J. Lawrence Aber

Aber said that, in part, his involvement in the project grew from a desire to use the tools to be developed and the comparisons possible to help understand change in this country. Goerge added that he felt most of those involved in the project were concerned first with situations in their own countries. He said this difference was underlined by the name "multi-national" that the project is beginning to use, rather than "international."

Update on School Readiness Indicators and the Use of Indicators in Early Childhood Initiatives

June 1, 2001

The session coordinator was Mairéad Reidy of Chapin Hall, who opened by explaining that the development of school readiness indicators has been a very important component of this project and that many attendees have done much work and made significant advances in this area. She indicated that there is very strong support for developing school readiness indicators that not only tell about the readiness of the children themselves but also get at the status of the family, the community, and the early childhood factors and supports that influence a child's readiness.

Reidy introduced the guest speaker, John Love, a Senior Fellow at Mathematica Policy Research who has spent the last 30 years conducting research, program evaluations, and policy studies on early care and education and family programs. He is currently co-directing the National Evaluation of the Early Head Start Program for the Administration on Children, Youth, and Families. What follows is a transcript of Love's remarks; the slides from his Powerpoint presentation follow the text. After Love finished speaking, five other people spoke: Elizabeth Burke Bryant and Catherine Walsh of Rhode Island, Rebekah Hudgins of Georgia, Steve Heasley of West Virginia, and Debra McLaughlin of Massachusetts.

Readiness Indicators in Early Childhood Initiatives: The Ideal, the Practical, the Essential

About eleven years ago, when the governors established this first national goal, we were filled with idealism about what it would mean for our work, for children, and for society. But how do we go from those ideals to what is really practical in the work that we are doing? Over the years, seeing all the work that has been accomplished and what certain people consider to be practical, I think we need to take another step: go back and think about what is really essential. If we compare what is ideal with what we have been able to do so far, what are the elements of both that we should really try to do?

I have three themes in my remarks this morning. I want to go back to the goal as a starting point; then look at some recent early childhood data collections and early childhood initiatives, the theme of this session, focusing particularly on some program evaluations and what they might have to say about our work on indicators of readiness; and then conclude with a discussion of what we might consider some of the essential elements of our readiness work.

What ever happened to Goal One?

We all can recite it by heart: "By the year 2000, all children in America will start school ready to learn." Fourteen words carefully crafted by the fifty governors, probably the most succinct the governors have ever been in their lives. And it has inspired an enormous amount of work, reflection, and controversy. Just consider the amount of work represented in the three days of this meeting--work that is going on all over the country. But there is still a lot of controversy. Part of the controversy is, what does learning mean? Is it schoolwork? Is it reading? Different people have different focuses on what learning means. Part of the controversy, of course, is over what readiness means, and over what it means to start school--are we talking about kindergarten? Or first grade, or, in some cases, preschool? Are we really concerned about all children in America, or are there concerns for certain groups of children? Fortunately, there is no debate about the first phrase--the year 2000 has come and gone. And we don't know the answer, because we are still grappling with all the issues around this phrase. I would like to consider what some early childhood initiatives can contribute to the work that we are all doing.

What Is The Relevance Of These Early Childhood Initiatives?

What I am thinking about in this framework of early childhood initiatives are large-scale data collections--large-scale surveys like the Early Childhood Longitudinal Study (ECLS) kindergarten cohort, which now has data from kindergarten and first grade. I am thinking about the ECLS birth cohort, which is just beginning data collection this year and will follow kids from birth into school. I am thinking about the longitudinal study of children with special needs that the Department of Education is doing. I am thinking about the Head Start Family and Child Experiences Survey (FACES) project, which has a nationally representative sample of Head Start programs and descriptive data about the programs and about the children's development. I am thinking about the Head Start Impact Study that is just getting underway, which will look at the impact of the regular Head Start program for the first time in a nationally representative sample. And I am thinking about the Early Head Start study that I have been associated with for the last 5 or 6 years.

These projects show us what has been considered practical in program evaluations and in measuring the dimensions of learning and development. They show the conceptualizing and operationalizing of readiness. They focus on the relevance of the schools because, for the most part, they are either school-based studies, like the ECLS-K (kindergarten cohort), or studies concerned with children who are going into school, such as Head Start or Early Head Start. They provide some information about how, and to what extent, they have been successful in providing information about all children in these programs. And, in general, they provide a lot of experience in dealing with these issues that is useful for us.

The Role of Program Evaluation

I think the program evaluations are important because they have a special process. They often go into greater depth in defining readiness, and they often observe change over longer periods. They certainly spend a lot of time trying to conceptualize and measure what they consider to be important about children's learning and development. Because they are evaluations, they gather information about process and outcomes and ways of linking the two. And because evaluations use experimental designs, they provide causal information that allows us to understand what might really be contributing to children's outcomes--the readiness indicators.

Typical Definitions of Readiness

Readiness for what? What are we getting the children ready for? We think about learning, school success, being ready to read, but I think the evaluations provide a different perspective, because you can think about readiness as an outcome of the experiences that children have from birth to school age. I would particularly emphasize the importance of the birth to school age period, including the prenatal period. All those experiences are important in preparing the child to be successful in school.

Lessons from Recent Evaluations

I am going to illustrate some of those points from the Early Head Start Evaluation. Here is a picture of an Early Head Start center, but programs also use home-based approaches, and some programs have a mixed approach that includes both center-based and home-based services to meet different types of family needs. There is a summary report that is now available; it came out in January 2001. The full technical report has been sent to Congress and is now on the web. It is really interesting to think about different ways of doing research. The Early Head Start Evaluation was set up with a national evaluation, which, as you typically see, does data collection all over the country. At the same time, 15 local universities were funded to do research at the local level with the programs participating in the national evaluation. So in this report you will also see little snippets of the work that some of the local researchers have been doing. There is also some information from the programs directly, so we can show that this work is being done by a whole consortium of people.

Martha Moorehouse: I think if you read only the summary report and not the full report, you will miss some important findings. There are things in that technical report that are not in the interim report at all. Among them is material on fathers' experiences with these programs--much more detail than we traditionally see in an early childhood program about what it means to work with fathers. The study was implemented just as welfare reform took hold in 1996, and it addresses what it means to try to serve families with very young children under these new expectations around welfare reform.

John: Because the study was an experimental design with a randomized control group, not only can you look at what difference the program made, and make those causal inferences about whether it was really the program that made the difference, but you can just look at the control group data and say what is sort of typical for infants and toddlers in low income families in this country. You can look at the demographics and judge for yourself whether these are typical low-income families, and you see such things as they get a lot of health services, and not much of anything else. Early Head Start doesn't have much of an impact on the few health outcomes that we looked at, because the program group wasn't getting much more than the control group was getting. But in other areas we see a lot of impact, but I will get to that.

We called this report Building a Future because we think these findings are doing just that, or at least promise a different future. The main message is that we see a broad range of impacts. We refer to them as modest--some people would even call them small--but there are a lot of them, and they cover all the dimensions of children's development and learning. They cover parenting and the home environment, and in aggregate they give you a sense that these kids' lives could be quite different.

Programs Make a Difference

In addition to that overall message, there are some other nuances that seem particularly important when thinking about what it is about children's experiences that makes a difference--that helps establish whether or not they are ready to succeed in school. All of the program approaches I talked about have positive effects, but they do so in different ways. The center-based programs have their major impact on children's cognitive development. The home-based and mixed-approach programs don't have any impact on cognitive development as we measured it, but they do affect social-emotional development, parenting, and the home environment, areas the center-based programs don't affect as much. So the strategies programs adopt can make a difference in the kinds of effects they have on children--their development, their opportunities, and what future you might be building.

Standards and Quality Matter

For 25 years now, the Head Start Performance Standards have been developed and refined to reflect the wisdom of practice and research on what makes a good, comprehensive, high-quality program. This is a study of the process, or implementation, of programs: when Early Head Start programs implemented those standards more completely and earlier in their development, they had larger impacts on children and families. So it does make a difference to apply those standards. In the technical report, you can go through tables that illustrate this point. It makes a difference if you have standards and hold people to them, if you do monitoring, and if you find ways of assuring that those standards are being met.

Flexibility To Meet Family Needs Is Important

The third point is flexibility, which comes from the fact that the mixed-approach programs--those with both home-based and center-based services in different mixes--generally show a stronger pattern of impact than the other two. We don't really know why yet, but our hypothesis is that these programs grew up that way because they found it necessary to add components to meet the needs of the families they serve.

What is Ideal? What is Practical? What is Essential?

It seems there are ten aspects of our work--definitions, dimensions, community supports, indicators, assessments, measures, strategies, interpretations, involving stakeholders, and ages of children. I think the ideal age now is 0-5, and what is practical is preschool through five years old and essential is birth through age 5.

The program approaches (discussed above) show the importance of these kinds of supports . . . quality programs like this for infants and toddlers can make a difference. [Slides 12 & 13 show] some of the dimensions that I thought might take our discussion a little from the ideal to what is really important and I suspect it will differ across the states. I don't know that it is necessary to have a national goal that declares what is essential but it would be interesting to aggregate all that you do over the next year and see what becomes of some of these considerations.

Thank you. Reports can be found on the website: click on the Early Childhood page, then the Early Head Start hyperlink, and there you will find a list of reports.

Slide 1

Readiness Indicators in Early Childhood Initiatives: The Ideal, the Practical, the Essential

Slide 2

Moving From the Ideal to the Essential

Slide 3

Whatever Happened to Goal One?

Slide 4

What Is the Relevance of Early Childhood Initiatives?

Slide 5

The Role of Program Evaluation

Slide 6

Defining Readiness

Slide 7

Typical Definitions

Slide 8

A New Perspective: Readiness As Outcome of Experience

Slide 9

Lessons From Recent Evaluations

Slide 10

Early Head Start Impacts

Slide 11

Major Messages

Slide 12

What Is Ideal? What Is Practical? What Is Essential?

Slide 13

What Is Ideal? What Is Practical? What Is Essential?

Elizabeth Burke Bryant and Catherine Walsh

Reidy then introduced Elizabeth Burke Bryant and Catherine Walsh from Rhode Island Kids Count. What follows is a transcription of Bryant's and Walsh's remarks.

Elizabeth Burke Bryant

It is a real pleasure to be here, 2 and 1/2 years into this exciting work that we've done as part of the Child Indicators work, representing the Rhode Island Children's Cabinet. About 18 months ago, we held a meeting in Rhode Island that blossomed into a national conversation and work session on how to define, identify, develop, and talk about a set of early childhood school readiness indicators that can resonate in a way that affects public policy--the important work that Ann Segal and Martha Moorehouse spearheaded with this particular grant initiative. What really came home to me that day is what it means to have people like John Love at your disposal; John's work really set the stage 18 months ago. Through this network of ours, we have been privy to a lot of the readiness work going on not just in Rhode Island but in Vermont, and in states like Maine, where we are hearing great things about their Ready to Learn Agenda. Massachusetts played a key role in that meeting in Providence 18 months ago. What we have to do now is look at what has developed since then: what can we all share nationally, and what are some of the next exciting goals to set in moving this work forward?

I always start with the real foundation of our work, which you heard a lot about yesterday: the common language set forth by our Children's Cabinet directors, the heads of the five mega-departments that deal with children's issues--the Department of Health, Department of Elementary and Secondary Education, Department of Mental Health, Department of Human Services, and Department of Children, Youth and Families. When we reflect on what has happened over the past two years, what is very exciting is that the Children's Cabinet worked with these goals; everything they have done interdepartmentally in the last 2 and 1/2 years has revolved around these goals and the indicators that go with them. Thanks to this Child Indicators project, we have had the special technical assistance required to go deeply into school readiness indicators. And just to tell you the end of the story: thanks to a lot of the work we've done here and the sharing among the states represented here, we are going back to prepare for an exciting presentation on June 13, where all of the work we've done with this project, with our state department agency partners, will be presented in a special 2 1/2 hour meeting with the Children's Cabinet dedicated to how we are doing in terms of ready to learn.

One of the things that we have tried to do as a state partnership of Children's Cabinet agencies is to really ask ourselves, "Why do all this work if it is going to stay on a shelf or in somebody's in box?" We have paid a lot of attention to the strategic dissemination of data and information. Our two ground rules are: 1) to select data that will push a public policy response forward and, 2) to choose your moment. Put the data in the hands of the policy makers when they most need it.

We also work with our state department partners to make sure that everything gets out there with a public engagement strategy and a data dissemination strategy that is thought through in advance. We obviously work with all of our state department directors and state liaisons as partners all the time. We create panels on early childhood development so there is an academic/public/private laying of hands on some of the work we are doing together, to give it credibility. We have community leadership councils, so we have parents and other people at the table to give us their firsthand view of what it is like to be a parent of a young child in Rhode Island. We always have policy breakfasts or other events where we invite the media, and we have the full range of people you need to give something credibility in a community. People always say of our events that they include everyone from our U.S. Senators to parents to people working as educators in our schools.

Our General Assembly likes to have hot off the press information, so if you are releasing anything you will want to work with your state departments on a strategy to get it to your General Assembly leaders. We recommend you cut a deal with the people who decide what goes on the desks of the legislators and what doesn't, that is one of those relationships that you want to cultivate. Our legislators don't want a lot of materials all year long, but when we release our annual fact book in April, it is on the desk of every legislator at the very next session.

Targeted mailings go out immediately. We put everything up on our website. We basically work with our state departments at the highest levels of the policy and data staff--levels that really do have strict requirements about the press. But by learning by doing, we have figured out how to operate as the outside group working very closely with our partners in the state departments: they have been briefed ahead of time, they know what the messages are, and their directors are ready to comment. We have also cultivated a group of reporters that we call the children's beat reporters. In all cities you will find reporters who are passionate about this and would welcome the courtesy of a heads-up phone call saying that you are going to release a report on childhood indicators next week--can we send you a copy? Those kinds of phone calls build real relationships that you can cultivate over time.

Why do we care so much about school readiness indicators? It has really been a privilege to work with this group, because there is so much exciting work going on in the states. As we reflect on our work, it helps to ground it in what is at stake for young children. Reflecting on a hallway conversation with Ann Segal: she said child indicators mean a lot to her because they are nothing less than the ability to say what young children in this country need and how you will know if you got there. I think the work we are doing with our state governments will really define that menu--what we say as a country that a child absolutely needs and deserves, and how we hold ourselves and our elected officials accountable for knowing that children got what they needed.

Cathy Walsh will go through our buckets of indicators and where we are.

Catherine Walsh, Rhode Island

I am going to pass out the indicators list (see below) and walk you through it. The categories and some of the legend markings concern data collection. As we have been developing the indicators on that list, the most critical thing to know is that it says "near final" at the top for a reason. It is very much a working list that always gets changed and adapted based on input from various groups. We have talked to kindergarten teachers and early childhood educators about it. The list is a constantly evolving thing. It really helps us stay focused on the child: what we are talking about is what we want for all of our kids. We want kids to be engaged, curious, confident, and ready to learn. But it's not just about the child; we also want to know about community supports for families.

Rhode Island School Readiness Indicators (Near Final)

Please note: data sources for indicators in italics are still pending. Data sources are available for indicators in bold.

Infants born healthy

Percentage of women with late or no prenatal care
Percentage of infants born low birthweight
Percentage of infants born very low birthweight
Infant mortality rate

Young children have healthy growth and development

Percentage of children with developmental screening at age 3
Child injury hospitalization rate for children 0-5
Child asthma hospitalization rate

Young children enrolled in public insurance have healthy growth and development

Percentage of children under 6 on Rite Care screened for lead poisoning
Percentage of children on Rite Care with dental exam by age 5
Percentage of children under 6 on Rite Care with regular, timely well-child visits
Percentage of children under 6 on Rite Care who have up-to-date immunizations
Percentage of children under 6 on Rite Care who have accessed mental health services

Children enter kindergarten healthy

Percentage of children with up-to-date immunizations (at K entry)
Children with a history of lead poisoning (at K entry)
Percentage of children without health insurance (at K entry)
Percentage of children with untreated dental problems (at K entry)
Percentage of children with untreated hearing or vision problems (at K entry)
Percentage of children with undetected disability/developmental problem that requires special education services (at K entry)
Percentage of children with IEPs, pre-k, K, 1-3, 4-6, 7-12

Children live in safe and stable families

Child abuse and neglect rate for children under age 5
Births to teens ages 15 to 17
Number of children under 6 in foster care

Percentage of children under 6 in foster care who are placed in a permanent home
Percentage of children under 6 in DCYF care who had multiple placements
Parents with mental health problems
Parents with substance abuse problems
Children born into families with unstable living situations

Family environments support early learning

Percentage of families with preschool children that read to their preschool child every day
Percentage of families with preschool children that regularly take their children on outings
Percentage of families with preschool children that regulate television viewing/computer use
Percentage of children (pre-K and K-3) with children's books at home and/or read to child
Percentage of children whose parents take them to the library sometimes

Children have access to early care and education programs

Number of early care and education slots per 100 children ages birth to 3 in need of care
Number of early care and education slots per 100 children ages 3 to 5 in need of care
Percentage of income-eligible families using child care subsidies (i.e., child care subsidy "take-up rate")

Percentage of children enrolled in an early care and education program the year prior to school entry

Children at high developmental or social risk receive early intervention

Percentage of low-income children in comprehensive child care program/Head Start
Percentage of eligible children enrolled in comprehensive birth to 3 program (i.e., Early Head Start, Early Start)
Percentage of Family Independence Program (FIP) enrolled children participating in early care and education program
Percentage of eligible children enrolled in Early Intervention
Percentage of "at-risk" children enrolled in early care and education programs prior to school entry

Early care and education programs are of high quality

Percentage of child care center slots in accredited programs
Percentage of family child care slots in accredited programs

Percentage of child care center staff with early childhood education degree
Percentage of family child care staff with early childhood training
Percentage of early care and education slots in programs without health and safety violations
Percentage of early care and education programs of high quality versus poor quality
Percentage of child care center slots in programs with low staff turnover

Schools are ready for all children

Percentage of children in accredited kindergarten programs
Percentage of children in full-day kindergarten programs
Average daily attendance of kindergarten children
Average class size K-3 classrooms

Percentage of kindergarten (or K-3) teachers with a degree in early childhood education

Kindergarten children have the language and literacy skills needed to succeed in school

Percentage of children in K-3 who are at or above grade level in reading/language arts
Percentage of children who use their primary language appropriately to communicate needs and wants
Percentage of children who have age- and culturally-appropriate vocabulary
Percentage of children with age-appropriate familiarity and skills with books and print
Percentage of children with age-appropriate letter recognition
Percentage of children with age-appropriate literacy in primary language

Kindergarten children have the knowledge and cognitive skills to succeed in school

Percentage of children with difficulty learning academic subjects
Percentage of children with poor concentration or limited attention
Percentage of children with difficulty following directions
Percentage of children with difficulty working independently and being self-directed
Percentage of children K-1 at or above grade level in mathematics

Percentage of children with age-appropriate preliteracy skills (numbers, letters, writing, language)
Percentage of children with age-appropriate numerical skills
Percentage of children with age-appropriate reasoning and problem-solving skills

Kindergarten children have the social/emotional competencies to succeed in school

Percentage of children with difficulty working with other students
Percentage of children who are disruptive in class
Percentage of children who constantly seek attention
Percentage of children who are overly aggressive to peers
Percentage of children who are anxious or worried
Percentage of children who are unhappy, depressed, sad

We have things on our list like program indicators because we think that if we are really talking about government, government funding, and programs, we have to be talking about the programs we are investing in. So on our list you will see things like Early Head Start and Head Start, early intervention, and special education. Those are there deliberately, because we think we need better systems: how are those resources being allocated across communities? The other piece that guides our work is whether we are talking about all kids or some kids. We really have to pay attention to the inequities--the differences across communities in access and basic life experience--and we have to pay attention to that in terms of support.

Again, we need to emphasize informing policy; it's about information. As you think of all the potential data in your state--all the research data, all the work of Head Start, every study that's been done on welfare reform--what we always come back to is: what are you using it for, and how are you going to use it? I always point to describing, because very often people try to leap to measuring progress, improving programs, and monitoring impact. But if you haven't done the work of describing what's happening--how many Head Start kids are enrolled in each community in your state, how many are potentially eligible--you can't really do the impact monitoring work very well. And you wouldn't know what to do with it in a policy context, because you wouldn't know which communities are under-resourced for specific programs and which are doing OK. By describing basic information about what is happening with kids and families in your communities, the work in the other areas--measuring progress, improving programs, and monitoring impact--becomes much easier and much more powerful in terms of your policy response. Don't minimize it.

Again, as you do your school readiness indicator work, your list will never look exactly like ours, because it is the result of a process involving lots of community input: what are your values, where are your priorities? This list is really meant as a guide to some of the categories. Deciding what makes a good indicator involves several considerations: it needs to reflect an important child outcome, and it has to be something people care about--people in your community, as opposed to the larger world. The other thing that has guided the conversation, as we walk through some of the indicators, is that we are data piranhas. We look for data anywhere we can find it, and we are always looking for more. Mining existing data is an incredible piece of the work, and you need to be aggressive but kind as you pursue data from various departments. We use a lot of administrative data from state departments. We are talking more about adding to existing surveys. We use lots of different strategies to get what we might want. And the last test is this: if you talk to your neighbor over the fence about an indicator, will they have a clue what you are talking about?

Again, the two things that need to guide your work are (1) what do you care about enough to measure and track, and (2) what do policy makers want to know?

The framework for indicator information: We have really framed our school readiness discussion in Rhode Island as a birth-to-fourth-grade issue. We intentionally went to fourth grade, even though a lot of the early conversations were about birth to five, because we thought we had to engage the education system. Many business leaders and many political leaders really care about K-12; the early years are not on their radar screen. If you start at kindergarten, you lose the opportunity to engage early childhood educators in the public school systems and to engage the whole school reform standards movement. So we have to do more at an earlier age if we are going to get there. That is why we have defined it as birth to 4th grade. We also double-check our indicators list against some other buckets: the readiness of children; child health in all its ramifications (mental health, hospitalizations, asthma); family factors; access to quality programs; community and neighborhood factors; and so on.

Keeping the big frame is really critical to our work. You need to ask: where do we start in our state? What might be possible in our state? Keep the big list running--even if an item is a placeholder that just says "family structure," at least it is there, and you can fill it in over time. It also stops people from saying you are being narrowly focused. If you start only with what you can get, you are not expressing the whole picture, and when you go out in public to talk about it, the people in the child welfare system don't see themselves, or the people in the schools don't see themselves. So having a fully developed list, even if you don't have all the data, is useful.

Let me walk you through a little bit of this. In our state, we always highlight the core cities, the five communities in our state with the highest poverty rates. And we always keep that on the page because when you look at the state numbers for indicators, it can be a wash. And it can look pretty good. But then when you look at the poorest communities in your state, you see really huge differences. And so we do the core cities work to keep that on the page, and it shows major differences. Full-day kindergarten was one of our ready school indicators. We started tracking that about two years ago. And we have seen significant increases around full-day kindergarten access. And then we also mine our hospital discharge data. There is a wealth of data in your hospitals. That is a good entry point to find out what is happening with kids.

If you look at this school readiness indicators list, I am going to focus on a couple of pieces that I think are the most important or the most interesting. The top ones I think everyone has lots of access to. The third category down, children enrolled in public insurance, is a new category we have just added because, while health insurance coverage in our state is very high (about 92 percent of kids are covered by health insurance), access and continuity of care is an issue. So we started to look more specifically at that subpopulation. Within your indicators, balancing what you need for the whole population against what you might need to look at more deeply for subpopulations is another question you have to answer for yourselves.

Children enter kindergarten healthy. This is an effort to step back from the whole population and ask: when a child walks in the kindergarten door, what is going on? We haven't been that successful here. You'll see we have some information about lead poisoning and some around immunizations, but we think more work needs to be done. This is an area we are going to pursue over the next couple of years, to really get the schools engaged in seeing that as part of their job.

Walsh concluded by citing some areas in which further work needs to be done and stressing the important connection between school readiness and early childhood initiatives. She introduced Rebecca Hudgins from Georgia and Steve Heasley from West Virginia.

Rebecca Hudgins from Georgia

Hudgins provided a quick summary of Georgia's early childhood initiative pilot project. This three-year, five-site pilot began in October 2000 in north Georgia. It looks at a combination of services around parent education, universal contact, intensive home visiting, adult education, and developmental childcare. The way the components are put together differs across the sites, but each site uses the same set of core measures, or indicators. These measures are divided into population- or community-level indicators and system-change indicators. Georgia is keen to bring together new partners--for example, bringing economic development people together with social services people: "We don't talk the same language. But we need to be talking the same language." She concluded by noting that one of the project's end measures is the state-wide kindergarten assessment program, which will allow county- and school-level assessments of what happens in kindergarten.

Steve Heasley, West Virginia

Heasley said that West Virginia has just begun a five-site pilot initiative focused on improving the quality of early childhood education programs, increasing accessibility, and creating better linkages. One part of this initiative addresses how well early childhood programs at the local level can work together to increase quality and accessibility. The West Virginia legislature is requiring an extensive evaluation of the initiative that is "looking both at some quantitative and qualitative measures of what happens to the kids at each site." Graduate students are gathering data at each of these sites and will conduct case studies.

Debra McLaughlin from Massachusetts

McLaughlin noted that Massachusetts had difficulty developing a common agenda for birth-to-five issues; now that one has been established, the state is building on that effort. She credited Massachusetts Kids Count with the development of issue-oriented indicators and felt that Massachusetts had benefited from good technical assistance from HHS.

Generating New Knowledge from Linked Administrative Data

Bong Joo Lee of Chapin Hall introduced the presenters. A Powerpoint presentation used by Ms. Hoglund of Minnesota follows the text.


Hawaii doesn't see itself as funding programs. Instead, it sees itself as funding outcomes or results, which are brought into being by the efforts of multiple partners. Hawaii found it easier to track results when it controlled the funding than when the federal government separately funded some efforts. To serve as a baseline, Hawaii created school and community profiles, built from 2000 Census data aggregated at the state high school complex level. The profiles link 50 core indicators that track such areas of concern as school readiness, child health, substance abuse prevention, and child safety. To address goals in these areas, Hawaii uses what it calls performance partnerships, which link multiple government and community agencies working together toward the shared goals.


Kath Hoglund, the Data Warehouse Administrator of the Minnesota Department of Human Services said that Minnesota's goal is to use its organized information to create new knowledge.

Currently, the Minnesota warehouse links:

  • TANF, Medicaid, Housing, Employment, and Child Support data both for the TANF federal report and for use in a TANF longitudinal study
  • Medicaid data to Public Health data to help study diabetes and asthma
  • Medicaid data to Social Security data to identify disabled children who are not receiving services
  • Statewide TANF performance measures
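The value of the warehouse comes from joining person-level records across program systems on a common identifier. As an illustrative sketch only (the field names, identifiers, and two-table structure below are hypothetical, not Minnesota's actual schema), linking a TANF extract to a Medicaid extract might look like:

```python
# Toy sketch of person-level record linkage across two program extracts.
# All field names and values are made up for illustration.

tanf = [
    {"person_id": 1, "tanf_months": 8},
    {"person_id": 2, "tanf_months": 3},
]
medicaid = [
    {"person_id": 1, "enrolled": True},
    {"person_id": 3, "enrolled": True},
]

def link(left, right, key):
    """Inner-join two lists of records on a shared key field."""
    index = {rec[key]: rec for rec in right}
    return [{**l, **index[l[key]]} for l in left if l[key] in index]

linked = link(tanf, medicaid, "person_id")
# Only person 1 appears in both systems, so only that record links.
```

In practice, as the lessons below note, the hard part is not the join itself but resolving identifiers reliably across source systems before extraction.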

They have learned

  • To approach development incrementally
  • To link data, when feasible, in the source systems prior to extracting it to the warehouse
  • That source data are most reliable when collecting them is part of the purpose of the source system
  • That similar data submitted by disparate systems often yield unreliable comparisons
  • That technology is no longer the issue; the key issues are legal and political

Slide 1

Minnesota Department of Human Services Data Warehouse

Slide 2

Presenter Info

Slide 3

Linking Data Provides a Foundation for Creating New Knowledge

Slide 4

Minnesota Department of Human Services Was Looking for a System to Encompass the Entire Business....

Slide 5

Minnesota's Solution: A Data Warehouse

Slide 6

What is a Data Warehouse?

Slide 7

What is an Executive Information System (EIS)?

Slide 8

What is Data Mining?

Slide 9

Minnesota DHS Data Warehouse

Slide 10

Minnesota DHS Data Warehouse

Slide 11

Logical Evolution of Reporting Using Data Warehousing

Slide 12

Incremental Approach

Slide 13

Minnesota Data Warehouse's Technology

Slide 14

DHS Configuration

Slide 15

Vital Statistics

Slide 16

Software Tools Accessing Our Data Warehouse

Slide 17

State of Minnesota Department of Human Services Data Warehouse Logical Data Model

Slide 18

Technology Is No Longer the Issue; the Issues Are Legal and Political

Slide 19

What have we done?

Slide 20

What have we learned?

Slide 21

Where are we going?

How to Use the Web to Collect and Distribute Indicators

Friday, June 1, 2001

The session coordinator was Fred Wulczyn of Chapin Hall and the principal speakers were Toni Lang of the New York State Council on Children and Families, Wulczyn himself, and Dean Duncan of the University of North Carolina.

Toni Lang

The New York State Council on Children and Families is composed of 13 health, education, and human service commissioners and directors. It is charged with acting as a neutral body that coordinates these systems to ensure all children and families in New York have the opportunity to reach their potential. In the mid-1990s, the commissioner of education, at a Council meeting, identified the need for a common set of goals and objectives, and the parallel need for indicators to measure progress toward those goals. This led to the development of the New York Touchstones framework. The 1998 state data book was the first to present these well-being indicators using that framework. Following publication, New York developed its web site, the Kids Well-Being Clearinghouse, to further disseminate these data.

Touchstones has two teams, an executive level guidance team and a data team. The first team is responsible for direct communication with the commissioners. The data team is responsible for finding out which data are available. The teams were convened to help develop the goals for the data web site. Among the objectives developed for the site by these teams and by surveying potential users were that:

  • The site increase community-level use of the data
  • The site be interactive to help users interpret data
  • The system be simple and user friendly
  • The data be accurate and timely

The site as developed met these goals. It groups data in a variety of ways, such as annual totals and three-year averages, and provides substantial background information to users. After it was deployed, it was revised to make it more accessible. Lang demonstrated some of the capabilities of the site for the audience. Simple steps allow the creation of tables and some regional analyses. Key definitions are linked to the table creation process in order to enhance clarity.
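Grouping annual counts into multi-year averages, as the site does, smooths year-to-year volatility in small-area indicators. A minimal sketch of the three-year-average grouping (the counts below are invented for illustration, not New York data):

```python
# Toy sketch: three-year trailing averages of an annual indicator series.
# The yearly counts here are made up for illustration.

annual = {1996: 120, 1997: 90, 1998: 105, 1999: 99, 2000: 114}

def three_year_averages(series):
    """Return {end_year: mean of that year and the two preceding years}."""
    years = sorted(series)
    return {
        y: round(sum(series[x] for x in years[i - 2 : i + 1]) / 3, 1)
        for i, y in enumerate(years)
        if i >= 2  # need two prior years to form a window
    }

avgs = three_year_averages(annual)
```

For a rare event, a three-year window keeps a single unusual year from dominating a county's trend line, at the cost of slower responsiveness to real change.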

Overall, the site seems to be increasing efficiency of use and expanding access to current data while simultaneously tailoring data to individual users' needs and expanding the number of indicators available.

Fred Wulczyn

Wulczyn discussed a new information system that helps the foster care contractors of New York City's Administration for Children's Services (ACS) use information to further permanency outcomes for children. ACS thought that if it could move away from a per diem reimbursement system, it could influence contract agency performance. It further recognized that for agencies to function effectively outside the per diem reimbursement system, they needed reliable information on their performance. Since agencies are differentially able to access and utilize data, ACS decided to consolidate and distribute indicator data in ways useful to all agencies, effectively leveling the performance playing field. This effort was also intended to support decision-making at the clinical level. Contractors are able to access individual-level and aggregate data and to dynamically query the database to identify unique groupings of children.

Wulczyn then opened the password-protected site and demonstrated how an agency could access their own data (but no one else's) in aggregate and at the client level. Also included is other relevant information for agencies. Wulczyn demonstrated how the agency managers can use the system to think about agency performance, plan for children's future time in the system, and structure decision making.

Dean Duncan

North Carolina has developed a site to track indicators in relation to its welfare reform ("Work First") program. The site, which runs on a server at the University of North Carolina's school of social work, includes both state- and county-level data.

Duncan demonstrated the functionality of the site, noting in particular how generating charts and graphs also generates an explanation to help keep data in context. He further discussed some of the problems created by real-world use of service system data.

Closing Remarks

Each state spoke, touching on their ongoing efforts and identifying areas in which further work is likely to take place. The work and approaches are tailored to each state's needs and circumstances, but there were common themes. Among these was a sense that many states had baseline information in place in usable forms.

Martha Moorehouse asked about sustainability. She asked the participants for a wishlist of products that would make the effort more sustainable. Speakers pointed to the need for information on welfare reform, not only on the economic issues but also on such issues as school readiness. Other areas of interest included promotional or strength-based indicators, mental health, juvenile justice, and child welfare. There is also interest in program performance indicators.
