A Summary of the Meeting of May 30-June 1, 2001. Comments on Factors and Contexts That Influence How Indicators Get Used -- With Reflection on Surviving Changes in Leadership


Christine Johnson, Florida Education Policy Studies, Learning Systems Institute, Florida State University

Political context: Changes in power and leadership affect commitment to indicator projects, which, by their nature, need to be long-term:

Changes in administration and political party control affect public policy agendas and priorities. In general, newly elected officials want to "make their own mark." This is particularly true when there is a change in political parties. Projects started under a prior administration may not be continued at all, or may change dramatically in form or emphasis. Differences also emerge in perceptions of government's role, not only in solving problems but also in how those problems are measured. In Florida, these issues manifested themselves as follows:

  1. A shift from Democratic to Republican control in both the Governor's Office and the state legislature since 1994.
  2. With that change, more emphasis is being placed on limiting the size and role of state government and devolving responsibility and control to the local level. Also, education, workforce development and growth management are higher policy/funding priorities at this point; human/social services have relatively less emphasis than in the past.
  3. The effects of these changes are: (a) less interest in funding or sustaining indicator initiatives (at least two major state indicator initiatives have died since 1998); (b) more interest in tracking indicators in targeted policy areas, such as education, where the state has a key role in holding local school districts accountable; and (c) more interest in "getting things done" through a corporate management style, with less emphasis on public engagement and participation, which by its nature is more time-consuming and process-oriented. This translates into collecting information that will be of immediate use to policymakers as opposed to keeping the public engaged and informed.

Term limits: In Florida, over half of state House and Senate members were new this year because of the beginning of term limits. As a result, there is less potential for continuing prior commitments and initiatives. New legislators don't necessarily understand the history of why indicator or other initiatives were developed - or buy into the idea. Also, they bring different ideas about how to set policy and run government.

Organizational Structure and Change

  • The State of Florida does not have a Children's Cabinet or similar structure to organize or support a state-initiated community indicators project focusing on children. Functions (e.g., education, child welfare, health, child support) are "siloed" in different agencies, which, for the most part, stay within their own boundaries as far as indicators and performance measures are concerned. Also, the state's concern is primarily with the use of measures for accountability--a more intermediate level of measurement--rather than with broad measures of well-being. Some exceptions are measures related to educational achievement and maternal/child health, because public schools and public health, by their mission, focus on the entire population of children. However, the state's focus on accountability has, to a large extent, put indicators and measurement in general to punitive use. It immediately conjures up the question "who do we blame?" rather than "what will it take at the state and local level to make trends move in the right direction?"
  • Major organizational changes (e.g., in workforce development) also can affect indicator initiatives--in positive or negative ways. For example, Florida has undergone or is currently undergoing major changes in the organizational structure for education, workforce development, school readiness, and child/family services. These changes usually mean changes in what gets measured and how--which can interfere with comparisons across time. However, these changes also present opportunities for improving measures, integrating information systems and making measures more meaningful.

Demographic context: Demographics may play a part in the type of indicator systems developed and in how public buy-in is achieved. Florida has an ethnically diverse population and a large percentage of seniors (age 65 and older) relative to the rest of the country. Most indicator systems developed at the state and local level do not focus on specific subpopulations (e.g., children only, African Americans only). They include indicators relevant to these subgroups, but embedded in a broader range of indicators geared toward the general population. Within this framework, subpopulations are addressed through specific indicators (e.g., low birthweight) or breakdowns by race, age, region or other factors.

Local context: Large urban areas have the capacity to start and continue indicator projects with their own resources--without depending on state support. This has certainly been true in Jacksonville, Palm Beach, Orlando, Fort Lauderdale and other metropolitan areas of Florida. In the long term, these projects may be more stable because of their independence and closeness to the issues of greatest importance at the local level. On the other hand, rural communities, with fewer resources, tend to be left out of the picture.

Leadership for indicator projects, at least in Florida, appears to be more stable at the local than the state level - although the same political uncertainties apply. The Jacksonville Community Council, Inc. indicators project, for example, has experienced substantial fluctuations in annual funding depending on support coming from the mayor's office or the chairman of the local chamber of commerce. What has helped to sustain this 15-year initiative through these funding ups and downs is (1) the participation and buy-in at the community level which was strong from the start, (2) the use of indicators to do studies of local issues and problems, (3) the public's trust in the quality of data and (4) the public's continuing expectation of data availability.

Public and institutional reaction: Reaction to indicator systems that seek to "generally inform"--without consequences--seems to be generally positive. Problems usually arise when these systems actually get used, for example, to cut funding for programs or to grade schools "A" to "F" on student achievement. When decisions "hit close to home," reactions are likely to be most intense. For example, based on anecdotal evidence, there seems to be a backlash among parents of high-achieving students in public schools in Florida, who are concerned about how their children will be affected by what they perceive to be "high-stakes" testing. Likewise, schools, agencies and other institutions can get defensive about what indicators say - particularly if the public or the media use the data to criticize them. Developing buy-in and creating a climate of cooperation can mitigate these effects but are difficult to achieve, particularly when public opinion is divided or state-local relationships are mistrustful.

Measurement context: The range and availability of data are continuing to increase at the state and local level--a real plus for indicator projects. Data collection itself is improving, as is the technology for analyzing and distributing information to policymakers and other consumers. However, we still haven't mastered the art of interpreting this information meaningfully in all of its complexity. For example, people still tend to focus on individual measures without understanding problems on a broader, more complex level - a difficult task given the quantity of information out there and the interconnections of programs and outcomes. How do we keep it simple enough to understand, yet avoid misleading policymakers and the public into simplistic solutions?

Policymakers typically want to know more than broad outcomes. They want diagnostic information so they know "what to fix." The challenge is communicating this information - including cause-and-effect relationships - in a clear, concise, timely and useful way. Indicator systems are useful signaling and analysis tools, but at least for policymakers, there is demand for broader synthesis of information from a variety of sources - indicators, research studies, surveys, etc. - to guide decision-making on a host of issues within a relatively short time frame, such as a legislative session. Policymakers are certainly not the only major users of indicator data - but they do have an important influence on state and local capacity to solve problems and improve the well-being of the population.

Michel Lahti, Manager of Evaluation Services, Institute for Public Sector Innovation, Edmund S. Muskie School of Public Service, University of Southern Maine

Our work in Maine has resulted in a project called Maine Marks. Our first publication came out in February. I want to give you a little context and focus on what we have incorporated into the process. We have not experienced a change in governorship during this process; we still have three more years with the current governor, who is a big supporter of this. I want to talk about how we've tried to create a structure for this to continue, regardless of what happens.

This has to be bigger than any one person or program. I often think what we are doing is telling a story. I keep thinking about people sitting around a fire at the gates of their community.

You walk up to the first ones and they tell you what is happening in the community. The stories we want to hear become important, not necessarily who the king is.

The first publication for the Maine Marks project came out in February 2001. It took us about two years to develop. This grant was a huge influence on that, as was Kids Count--they are a very significant partner.

What happened in Maine is that a governor's Children's Cabinet was established five years ago, not in statute but through a Danforth Foundation grant and the vision of the current governor to bring these agencies together. The group established a set of 12 outcome statements for kids, families, and communities in Maine.

We were given the outcome statements right from the beginning. Our tack was to look at indicators that could relate to those outcome statements. We ended up with 80 indicators, which is a big number. We had a lot of conversations about that number. I think back to a meeting here where David Murphey was talking about not being too afraid of big numbers, because then a lot of people can find a place to sit.

I think we have a lot of indicators that are not well defined. But people felt strongly when we were creating the indicators that all of the indicators were important, so we decided to get them down on paper from the beginning.

As for a structure and surviving leadership changes, we were given these outcomes from the beginning. The outcome statements themselves were not well defined. One of the other things we wanted, and got, was to have at least half of these indicators be promotional and/or strength-based. That also ended up being a lot of work. We ended up using a lot of material from the Search Institute and other places. We felt strongly that we wanted a comprehensive picture. Again, we wanted to take a look at all that was happening with kids and families.

Right now, specific to the work of the project, we are trying to look at data collection for next year and at further defining the indicators. And we'd love to get county-level data, because right now we just have state-level data for a number of indicators, especially newer ones.

In terms of surviving leadership changes, some of the things we have been intentional about are this connection to the Children's Cabinet and being broad in scope in terms of the number of indicators we are looking at. Again, there is something in there for everybody.

The Children's Cabinet is now established in statute. Within the Children's Cabinet is a council that has representatives from the legislature, the judicial branch, and other key stakeholders and policymakers within the state government. So these indicators have become their indicators. Hopefully that will be helpful over time.

We did look at outcome statements that were categorical in nature. We have a few indicators that, when you look at them, are programmatic. I don't know how that happened, but it did.

I think another thing we tried to do--and have maintained well--is partnerships. Things couldn't have happened without Kids Count, and partnerships with the Maine Development Foundation were critical, because they had already established indicator work that people used. So our book was formatted to look exactly like their book. That was very intentional. We also worked with the Maine School of Science and Mathematics. They helped us at the very beginning to look at these and to develop the web pages, and the connection to the university has been helpful as well.

A big task, which has already been mentioned and which we hope to learn more about, is driving this work down and across systems.

Becky Boober is here; she is amazing for Region 3. They have already started within their Children's Cabinet to connect Maine Marks to things like training and data usage. They'll use this as content for training and for use in grantmaking. So when people come to them for dollars, they are asking them to connect their thinking to these indicators and to program development. In our evaluation work, we are trying to tie things back to this.

The other thing that snuck in and out of this process--and I was surprised it didn't happen more often--was that there were instances when we had to think about how this connected to performance-based budgeting. There are two performance-based government initiatives that had already been established in statute. So those questions got asked, and we put together little charts that showed how things were lined up. And then it kind of went away, which was a good thing for me. But it will be back, and I think you should be prepared for that, and for strategic plans. I think in the future it is important that we remember that this is an iterative process. When we talk about this as an actual product, that is certainly not true.

I remember hearing a psychologist or psychiatrist on the radio talking about autism, saying how that problem used to be understood and how it is understood today. Certainly our language about children is going to change. The other thing is that it always bugs me when I flip on CNN and see the ticker tape scrolling across the bottom of the screen, and how we think of economic indicators in this country. Yet we don't have data on teen suicides in Maine. The most current data we have is from 1997, because we can't get schools to deal with the whole inundation they face, filling out forms and getting kids to fill out surveys. Right now, it is such a struggle to get people to pay attention to social indicators as being as important as, if not more important than, economic indicators.

I was thinking, in closing, that it is very helpful to think of this in metaphors. For me, this information and these systems are like a painter's palette. Leaders mix paints to create their picture. But that palette and that paint are there forever. We want to make sure that the paint is of the highest quality. Look at that as a metaphor for our work.

The system has to be bigger than whoever that artist is.