A Summary of the Meeting of May 30-June 1, 2001. The View from Rhode Island, Part 2


Elizabeth Burke Bryant
Rhode Island Kids Count

Strengthening Partnerships Inside and Outside of Government

What has allowed for this very strong inside-outside partnership? We don't make a move at Rhode Island Kids Count without making sure that the senior-level data and policy staff at the state's Department of Human Services at least know what we're doing and understand what approach we might be taking. We don't always need their sign-off, but we always know we need a professional coming-together. We always communicate about what we're doing with data that belongs to their department and about how we're releasing it.

About six or seven years ago when we released our first fact book, Dr. Barry Zuckerman came into Rhode Island as our first keynote speaker, which was a little dangerous because it meant bringing a doctor from Boston across state lines. But we decided we'd go for it because we really loved Barry Zuckerman. And Dr. Zuckerman said, "As a health professional, I'm here to tell you that the number one indicator of child well-being is fourth grade reading scores. And what we have to do is keep our eyes on the prize. We have to understand that children's health and a lot of other things going wrong in their lives can be measured by whether or not they're reading at a fourth grade level, and we have to absolutely focus like a laser beam on that."

It was a really strong message. His notion that we need world-class fourth graders in Rhode Island became a kind of rallying cry. And from that the four indicators were developed, and the one that we're talking about today is that all Rhode Island children shall enter school ready to learn. We were able to use our indicators work through Kids Count to become the clearinghouse for identifying, "How will we know if we got there?"

So we started with our first fact book with the ten indicators the Casey Foundation required, plus thirteen more, and then over the last six years we upped them to about forty-three indicators, all about children's health, education, economic well-being, and safety, plus an initial section on demographics. But this whole ready-to-learn opportunity, incredibly enhanced by the Chapin Hall work on the ASPE grant we're all in, has been absolutely essential to the research work going on behind the scenes.

Helping the Press Put Childcare on the Front Page

The inside/outside magic also involves the press. Politicians pay attention to what's in the newspaper. We don't consider our role just in terms of a data function. That's the foundation of what we do: we're a research and educational organization. But if we didn't have an incredibly savvy media component, it wouldn't matter. Because, if it's not in the Providence Journal, it hasn't happened. If it's not on Channel 10 and Channel 12, it didn't occur. So what we've tried to get better at doing every year is to make sure everything we release is very pick-upable, very colorful, something that would be entirely appropriate in the highest corporate-level boardroom. For too long kids have had to make do with advocacy materials on cheap-looking handouts, and they deserve better. And that got people's attention.

We also had to be there with all the facts and figures whenever the press needed them. As a result we've cultivated a very positive relationship with the press. They've put childcare on the front page. We have resources from a Starting Points grant from Carnegie enabling us to do more than we would have been able to afford on our own. That has been incredibly important. Because when politicians read these things on the front page of the paper, it makes them want to act. When they read that Parents and Working Mother magazines are recognizing Rhode Island's policy gains, it makes them even happier.

Working Together Towards Accountability

We're almost over the finish line. We were absolutely committed to a multidimensional view of what a ready-for-school child is. We absolutely understand Christine's point about numbers and letters. I think she's absolutely right. We sometimes, I think, get drawn into a politically correct sort of thing where we think, gee, that's really pigeonholing kids with that one measure. But frankly, we're not talking about little whiz kids. We're talking about what we would expect our own kids to feel comfortable with when they get into that first formal setting. Why should we expect less from poor kids?

It's time to kiss that way of thinking goodbye. We have enough of these general areas that everyone's satisfied with, like mental health, access issues, numbers, and letters. As long as you're looking at those categories and touch all those bases, it's okay to pick a few indicators and have that be your picture.

The other thing that's hung us up, and that takes some guts to talk about, is the need to professionalize early education. It's kind of been going along in a mediocre way, with people saying, "We're doing the lord's work; don't ask us about our outcomes. All kids are different." But, as Christine says, as the numbers go up from $12 million to $68 million, as our rates for childcare providers go up (as they should be going up every two years, so that there was a 67 percent increase in rates in one year), I've lost patience with friends saying, "You've got to wait, we're back-filling from ten years of flat rates and we can't show you program changes yet."

As advocates we are going up there every day fighting for these programs, and if we can't have some accountability, then the very best early child educators and care providers are going to be brought down with the very worst. All you need is an exposé of a really bad childcare provider or a really bad center. And we've had staff go out to centers to consider them for their own kids, and they've not liked what they've seen. We'd better be able to correct that, to institute some four-star rating systems. We'd better be able to follow that Starting RIght money and be able to say, "They did increase wages for those childcare providers who were underpaid; the money didn't go into an untrackable hole." So we're really pushing all of those accountability things with Christine's department for the next two years. I think that all of these expectations are fair-and-square exchanges for going from $12 million to $68 million.

Tracking the Individual Child

As we strive for accountability, we need to remember to use a few tools that are already in place. What really hung us up, what was really frustrating, was what happened after we got past the access data, which is administrative data that you have access to from your state departments of human services or childcare agencies. We got past the early child health data, but then we got to those hard things: how do you track those major state expenditures for childcare down to the individual child and individual center? Without a student identifier number, and with specific strings attached to the data, we couldn't do that.

But what we did do, and what I'm so pleased to be able to tell Christine about, is that on June 13 at the Children's Cabinet meeting, the presentation that you've all been waiting for is going to happen. We have been able to tap our SALT (School Accountability for Learning and Teaching) survey, which is the survey the Department of Education invested in. Every parent, every public school teacher, every public school kid is surveyed. It's an enormous reservoir of data that sometimes doesn't see the light of day. We've been working for a year and a half and, thanks to Cathy Walsh, our program director, we've been successful in getting them to take K-through-3 aggregate data for the past three years and "disagg it" down to K. So we're not going to be able to give Christine everything she wants, but I think it's very exciting to let you know that by "disagging" out K and having it for the three years prior to this one, we've been able to create some preliminary charts.

One chart that we have created shows children working independently, being self-directed, and having age-appropriate pre-literacy skills, numerical skills, and reasoning and problem-solving skills. [This chart is not included in this working paper.] As Christine says, we're not going out for a long walk on a short pier for causality, but you can bet your life that, when we give that presentation, we're going to say that, in 1997, when the childcare investment was $12 million, this is what the SALT data showed about what kids knew as they entered kindergarten. And we'll be able to say, three years later, this is what they know now. Hopefully the trend will be going in the right direction. We're going to report it as we see it, obviously. At least that will be the beginning of something that gets at what we really want. So that's how we're trying to overcome the hurdles a lot of you are experiencing.
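For readers who want to see what this kind of kindergarten "disagg" and year-over-year comparison looks like in practice, here is a minimal sketch in Python. Every file name, column name, and skill label below is hypothetical; the actual SALT survey layout is not documented in this paper.

```python
# A minimal sketch of the kindergarten "disagg" and year-over-year comparison
# described above. Every file name, column name, and skill label here is
# hypothetical; the actual SALT survey layout is not documented in this paper.
import pandas as pd

salt = pd.read_csv("salt_survey.csv")  # assumed: one row per school, grade, and year

# Break the K-3 aggregate down to kindergarten only.
kindergarten = salt[salt["grade"] == "K"]

skills = [
    "works_independently",
    "self_directed",
    "pre_literacy",
    "numerical_skills",
    "reasoning_problem_solving",
]

# Statewide average share rated proficient on each skill, by year.
trend = kindergarten.groupby("year")[skills].mean().round(3)

# Compare the $12 million year with the picture three years later.
print(trend.loc[[1997, 2000]])
```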

Communicating about Common Sense Indicators

The ASPE group from Rhode Island has been able to go through and categorize things in what I think Christine would agree are common sense areas. Children in kindergarten should have the language and literacy skills they need, along with the knowledge and cognitive skills. There are mental health indicators and child health indicators. So that's the road we're going down.

There have got to be lots of different ways of getting this information into people's hands. And now we have the courage to have a press conference for every issue brief we release, because the press is really starting to come to our events. So we're very excited about that. Plus CBS is underwriting some of the costs of producing the issue briefs, which shows you can get some businesses involved.

We've partnered with Brown University to create "Ideas that Work," a quick-and-dirty two-pager on what promotes early school success. It incorporates a little bit of data, describes the program, and explains why it's working to promote early school success. Our trademark is a bee, and on our web site the bee spins.

I just want to close by saying what a privilege it's been to work with Christine and that we really absolutely need to keep going. We haven't given up family time, family life, blood, sweat, and tears only to see this stuff at this critical moment suddenly seem like it's too much. And I totally agree with Christine: if you have some common sense answers in progress, that's not something you're going to want to undo.

Questions and Answers

Q. Were there any up-front commitments made to get the entitlements passed?

A. Christine: The only guarantee that I had to make to the governor's office and the legislature was that it would be budget neutral. In other words, the reduction in the caseload would offset the increase in childcare. Which it did until the legislature passed the increased rates and we had a deficit of $28 million. But I've been able to get through that on the legislative side. The administrative side's ticked at me. They feel it wasn't budget neutral. But things change. And that was the only thing I had to comply with.

We didn't have a lot of these health care outcomes until after welfare reform, interestingly. I didn't expect them, frankly. I thought it would take a lot longer to get the outcomes on the health care side. It wasn't until after we got the outcomes on the health care side and saw that the response after every crisis was not to cut that I began to really understand the magnitude of the importance of having those indicators, though nobody was really focused on it back then.

Q. What is the percentage of kids in unlicensed care?

A. Christine: A very small percentage, only about 10 percent.

Q. How do you make that so small?

A. Christine: In Rhode Island, 69 percent of the subsidized kids are in licensed centers, 20 percent are with family childcare providers, and 10 percent are with relatives. We have more reliance on licensed centers than other places do. A family childcare provider who takes one subsidized kid for at least six months out of the year receives 100 percent free health insurance. But they have to be licensed and they have to take subsidized kids. So that's a trade-off a lot of them make.

Q. What was the nature of the market response to the entitlement? Did you get a lot of small providers, for-profit providers, existing centers ramping up their slots? What was the response to the entitlement and to the availability of health care?

A. Christine: A little bit of everything. When I came home to Rhode Island, I had a four-year-old. I could not find childcare. It just wasn't there. Over the past six years, there's been a tremendous growth in capacity. I would say the pick-up rate has been slow because a lot of family care providers have had awful problems with the department. Not because they're bad people, but because if you look at the administrative line, at the same time we've had this tremendous growth, we've had fewer and fewer people doing it. And we've had to do everything manually. There are people who didn't get paid for six months. I have people who live near me who won't take subsidized kids because they wouldn't get paid for six months. In one case it was a year.

What we've done is, we've worked with the childcare community to change the way we do business. So in mid-July we're going to have Web enrollment. The childcare provider and the individual will be able to go right to the Web. So we won't be involved directly anymore.

That could cause another spike in growth. I'm a little concerned about that, frankly. We are so good at outreach. The inside/outside operation is just so awesome. We brought 25,000 people into our health insurance program in a year, which almost broke everybody's back. I'm afraid the Web enrollment could have the same impact. So we're going to monitor that pretty closely. More than anything else, it's that trade-off: is the health insurance option worth the hassle of waiting to get paid by the state? It depends on how sick you are. And that probably makes a difference in whether you are able to do the job or not.

Q. Are you using indicators to track the impact on youth? And what can the childcare subsidy be used for?

A. Christine: Here's my next big problem. I've been trying to separate out the licensing requirements so that the after-school programs can include a basketball program, theater program, and an art or computer program: three different programs in a week. That's the goal. We haven't gotten there yet. One of the biggest problems, ironically, is getting parents to apply for the subsidy. Even if you get the program licensed, the kids and the parents get a little uncertain about whether they want to apply for this. And there are things we need to do. We need to get attendance taken. We need to know if the person was there. So it's a question of culture change all the way around. The growth in that age group has been steep, but in the context of the number of all the kids that are out there, it's probably relatively low. So we have a lot of work left to do.

My goal was first, get the entitlement in place because once you have the spikes starting, you're not going to get more eligibility expansion. Get the eligibility expansion in place while programs really aren't that good. People won't want to take advantage of them. Then start to improve the programs. Try to do it so you don't have these huge peaks. But you can't always control that. We are doing indicators for older kids. We're getting there. By no means is it perfect. We have the foundation in place; now we're building the house.

Q. A question about the SALT survey: are you able to link child outcomes at kindergarten age to children's status pre-kindergarten? Are you assigning the identifier that lets you link kids who received these Starting RIght services?

A. Christine: No, it's going to have to be done on an aggregate basis, by income or some other factor. We don't have a way to connect a specific child with a specific kindergarten. We're working on it. I would say that in general Rhode Island is like a river. We go around the boulders. Nothing stops us. We couldn't get the single child identifier so we tried an approach that was a little bit different. The idea is, don't let the perfect be the enemy of the good. Just do the best you can. Ultimately, you may get to perfection. But you need to keep moving to get there.

So I think it's a testament to the Children's Cabinet and the interdepartmental team and the outside groups that every time we reach something that really ought to be a dam, they kind of take it apart piece by piece and eventually they get through it and on to the next thing. And that's what's really important in Rhode Island. There's no one person or group that's really trying to keep that dam up. They all can understand where we need to get. They may not agree with each bend, but they know that ultimately there's enough good will and intention and purpose and they know the risks and stakes are high so they're willing to make the kinds of compromises that they may not be willing to make elsewhere. And I think all of our agencies and Kids Count have been really good about that. Compared to a lot of other places, we're light years ahead.

We never say, "Oh no, this is perfect, there isn't a problem." We always say, "Oh yes, that's a good question, that's a big problem, we haven't solved that yet." But it's all out in the open in terms of what has to be done.

James T. Dimas,
Senior Associate,
Casey Strategic Consulting

On Establishing Credibility

First: Develop a Good Reputation.

If policy makers question your veracity or the validity of your analysis, and you develop that reputation, you might as well be driving a cab for all the influence you'll have on policy. That was the situation I saw first-hand in Illinois. I was recently credited as being from the Department of Public Aid. Not that they're bad people, but I was actually from the Department of Human Services. And the Department of Public Aid, in contrast, had a huge problem in the state house and in the governor's office with respect to the quality of the information they used to try to influence policy. It made it very, very hard for them to be effective. So I offer you three things to keep in mind on the subject of maintaining credibility on data.

Be Proactive. If you're in the role of data provider, especially if you're state agency staff, don't wait for the secretary or the director to call you after a story's in the paper to say, "What do we have on that?" If you want to have influence it's all about relationships, just like in business or any other walk of life. You've got to establish a relationship with the policy makers you're interested in influencing, and it has to be based on credibility and trust.

And a way to get there is by hustling. When you see a story in the paper, go back without anybody asking you to and figure out what you have that can shed some light on that, and how you can help with the response. I can tell you from personal experience that nine out of ten times what you work so hard to produce isn't going to get used. But maybe the tenth time it will. And when people start to use that you'll be recognized as someone who's an asset and someone who has value to add, and then it begins to snowball. So you have to just soldier through the lean times, the nights you stay up putting together a letter to the editor in response to a negative story. Even if it doesn't get used, you have to still be content that you're on the right track. And just keep on doing this, because eventually you'll break through and that begins a pattern of credibility with policy makers.

Don't overcook the numbers. Anyone with any data savvy can see the telltale signs of a statistic that's been tortured too long to tell the right story. It's not worth it. You're debasing your own credibility. A good rule of thumb: think about a neighbor or a family member who is just not that interested in the information. If it takes more than five minutes to explain to them what an index or table or graphic means, it's probably overcooked, and you're probably better off not having anything on that particular issue than rushing in with something that's been on the rack all night.

Check to ensure there is no credible, contradictory data. Do an internal check against overcooking. Before you send something up the chain of command, ask yourself, "Is there a chance a credible source of information might offer something that contradicts this?" If there is, you're probably better off not doing it. Because all it takes is another group or agency coming up with data on the same issue. If yours is more attenuated, because you've worked so hard to get something relevant to the issue, you end up being the loser.

You'll know you're on the right track when your data gets the benefit of the doubt. And that feels good. When there is an issue where somebody else has an opposing point of view that's supported by data, and your secretary or senate finance committee chairman trusts your data over theirs, you've established the credibility you need to be effective.

Second: Focus on the Insight Data Provides, Not on Causality

My premise is that, whether we like it or not, in making public policy what we're looking for is insight telling us whether a reasonable hypothesis is on target and whether it's worthy of being acted upon. We're not trying to prove causality. There's a role for that. That's important work, but it's not our work. We need to be careful that we don't get confused about that or refrain from acting on information that does provide insight.

Let me show you an example of how that can work. This was something from 1997. You remember that back in the early nineties the federal JOBS program was created. This was the precursor to the whole welfare-to-work movement. Illinois operationalized this new funding source by concluding that this was separate work. It established separate offices for people who wanted to go to work. So when a TANF or AFDC client expressed an interest in going to work, we would have to send them to wherever that office was. On its face that didn't make sense. The reasonable hypothesis was: if we really want people to go to work, wouldn't it make sense to have jobs and income maintenance in the same place? Another thing you need to know is that from the beginning of the program, the staff that ran the jobs program always set targets for local offices regarding the number of job slots they were expected to fill. No local office in the five years the program operated had ever met its monthly target.

So we decided to try something different based on that reasonable hypothesis and based on very little more than, gee, it makes more sense to have those things integrated. So we took twenty-four local offices, some of them in Cook County and some of them downstate, and divided them into two groups, which we called blitz offices and non-blitz offices. We physically moved the jobs staff in with the income maintenance staff in the blitz offices, within the space of one week. Within four weeks, we did a briefing for the secretary and showed him a graphic. Within one month most of the blitz offices had achieved their targets for the very first time, and conversely the others fell short as usual. We didn't have a p value, but that was enough for our secretary to say, okay, let's do this statewide, which we did.
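A sketch of the before-and-after contrast behind that briefing graphic follows, under an assumed data layout (one row per office per month, with a placement count, the monthly target, a blitz/non-blitz group label, and a before/after period label). None of these names come from the actual Illinois records.

```python
# A sketch of the before/after contrast behind the briefing graphic, under an
# assumed layout: one row per office per month, with a placement count, the
# monthly target, a blitz/non-blitz group label, and a before/after period
# label. None of these names come from the actual Illinois records.
import pandas as pd

placements = pd.read_csv("office_placements.csv")
placements["met_target"] = placements["placements"] >= placements["target"]

# Share of office-months meeting target, split by pilot group and period.
summary = (
    placements
    .groupby(["group", "period"])["met_target"]
    .mean()
    .unstack("period")
)
print(summary)  # no p value here; the visual contrast carried the briefing
```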

And within about one month, a lot of the non-blitz offices started hitting their targets or coming close, which validated that step. But an important point is, the secretary didn't wait, and we didn't wait to take the step until we had something more conclusive. I think that if you're working off a reasonable hypothesis, that's a responsible way to go in human services. There's too much at stake, and the window for acting is so short, that leadership will have changed if you wait for conclusive evidence.

I almost forgot the most important point of the story. I had the dubious pleasure two years later, in 1999, of attending a conference in Washington, D.C., sponsored in part by ASPE. A colleague from Illinois, in the division that ran the separate jobs program, presented the results that MDRC had gotten with two other states and Illinois. They concluded it didn't make sense to have the offices separate. And they demonstrated it to a .0001 level of significance. And that was great, but if we had waited for a study before we made this move, there would have been no way we would have won two awards totaling almost $40 million. This willingness to act sooner ended up being an important step in the success that we had.

Third: Always Work Against a Hypothesis

I've seen young folks especially start out with a spreadsheet and try to mine it to see what's there. That's not real productive without a hypothesis to work against. A hypothesis provides a context that helps turn the data into information and if you don't have that, you won't be as focused as you should be.

Another really important way to have an effect with your data is to engage policy makers in the formulation of those hypotheses. That's what we did in Illinois. It gives them some "skin in the game" and allows them to couple their accumulated experience, wisdom, and insight with your ability to provide data and information. And it has the ancillary benefit that, well, it's their hypothesis, they've got some ownership of it, and they're also the people who happen to be providing you with the authority and resources you need to pilot test interventions that are driven by that hypothesis. It's an easier proposition if you're trying to get someone who has some ownership of the hypothesis to give you the time and money and talent available to test interventions that relate to it.

Using indicators isn't a spectator sport. There's not much payoff in being an observer. You can get a lot farther if you can engage a policy maker in the formulation of reasonable hypotheses that are testable.

Once you have a reasonable, testable hypothesis, to continue to maintain your credibility and influence you need to be able to mobilize pretty quickly to test interventions driven by that hypothesis. What you're looking for there is either to substantiate the hypothesis or to disprove it. I usually go at it from the perspective of trying to disprove it, if that's feasible. Disproving is easier. If you can't come up with something that challenges the credibility of the hypothesis, then it's probably worth going ahead and doing a pilot test on an intervention that responds to it.

And then it becomes a recursive cycle. You can use your indicators to help with the formulation of a hypothesis. You can work with program and leadership people to design interventions that respond to that hypothesis, and then use your indicators to get in the back way to evaluate whether the intervention had the desired effect. It's a real good way of closing that loop and making your indicators real and vital to policy makers.

Techniques for Establishing Credibility

And that leads me to the second and last part of this presentation. I want to show you three techniques useful for developing influential information.

Pilot Testing

One of these is pilot testing; I showed you something on that earlier. But I wanted to show you another thing that's more on point with respect to family and child outcomes and the work ASPE is doing with respect to service integration. When we started DHS in Illinois in 1997, the whole concept was that we were going to do integrated services delivery. That was going to help us achieve our federal work requirements and improve outcomes for families and children. The reality is that in the first couple of years that we did this, we got so consumed with the burning platform that those work requirements represented that we didn't really attend as carefully as we said we would to service integration.

When we reached a point where we had a little breathing room on work requirements, we decided we needed to do a pilot test on a service integration model that would feature co-located staff: substance abuse, mental health, and domestic violence counselors. This involved co-locating them in our local offices and seeing how that improved the rate at which people were referred for treatment and follow-up services.

We didn't want caseworkers who had come out of an income maintenance and eligibility determination background trying to make those calls. We weren't trying to do treatment in our offices. We were just trying to do case finding and get referrals made, and we knew that wasn't happening very well at the sites. So we did a pilot test using eleven local offices, six of them in Chicago and five of them downstate. We tracked what happened in the offices before we did co-location and what happened afterwards, and made a chart. There's no statistical test here. But the important thing is, if you've taken the steps to establish credibility with policy makers, and if you've worked with them on the formulation of hypotheses, they'll trust their eyes, they'll trust their instincts and common sense, and they'll know when a picture is telling them something. In this case the picture said, yes, there were clearly more referrals happening after we did the co-location, especially for substance abuse in Chicago. The same kind of relationship held for mental health. There was no real debate that something seemed to have changed, and it seemed to be improving the rate at which referrals were made.

We learned something else interesting on domestic violence: the same relationship held in Chicago, but look what happened in downstate Illinois, where the rate remained really flat. And that gave us a pregnant opportunity to mine the data further and find out what was going on in downstate Illinois that was different from what was going on in Chicago.
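An illustrative version of that referral chart follows: mean referral rates before and after co-location, broken out by region and service type, which is exactly the cut that surfaces a flat downstate domestic-violence line. The file and column names are assumptions for the sake of the sketch.

```python
# An illustrative version of the referral chart: mean referral rates before
# and after co-location, broken out by service type and region. The file and
# column names are assumptions for the sake of the sketch.
import pandas as pd

referrals = pd.read_csv("referrals.csv")  # assumed: office, region, service, period, referral_rate

pivot = referrals.pivot_table(
    index=["service", "region"],  # substance abuse / mental health / domestic violence; Chicago vs. downstate
    columns="period",             # "before" vs. "after" co-location
    values="referral_rate",
    aggfunc="mean",
)
pivot["change"] = pivot["after"] - pivot["before"]

# A near-zero change is exactly how the flat downstate domestic-violence
# line would surface in this table.
print(pivot.sort_values("change"))
```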

Field Work

The second technique I want to recommend to you is fieldwork. In Georgia we are working with folks in their Department of Family and Child Services to try to help them with some poor child welfare outcomes they're getting there. We are looking at substantiated cases of abuse and neglect as a percentage of all cases reported. Georgia has 159 counties, which is actually a nice large number if you want to take this kind of quasi-experimental approach. There are counties with substantiation rates of 40 percent or greater among all cases reported. You've also got a number that are at 10 or 15 percent or below. That doesn't tell you anything specific, except where to look for some answers.

We don't know if this difference lies in people's propensities to report or in the quality of the investigations done. Both of these are reasonable and interesting hypotheses. And what it sets us up to do is to send in a team on the ground to poke around in local offices in the counties and develop a more robust hypothesis based on that kind of field work. What we see on the ground will help us formulate a hypothesis that we then can test with the right kind of pilot structure, and then we can test different interventions that we hope will produce a different result.
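As a rough sketch of that county scan, the fragment below computes each county's substantiation rate and pulls the extremes at both ends as candidate sites for the field team; the file and column names are hypothetical.

```python
# A rough sketch of the county scan: compute each county's substantiation
# rate and pull the extremes at both ends as candidate field sites. The file
# and column names are hypothetical.
import pandas as pd

counties = pd.read_csv("ga_counties.csv")  # assumed: one row per county, with reports and substantiated counts
counties["subst_rate"] = counties["substantiated"] / counties["reports"]

high = counties[counties["subst_rate"] >= 0.40]  # 40 percent or greater
low = counties[counties["subst_rate"] <= 0.15]   # roughly 10-15 percent or below

# The extremes don't explain anything by themselves; they just say where
# the field team should go look for answers.
field_sites = pd.concat([high, low]).sort_values("subst_rate")
print(field_sites[["county", "reports", "subst_rate"]])
```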

An Epidemiological Approach

I started out in public health and that's where a lot of my perspective on this comes from. I encourage you to think about a kind of epidemiological approach where you just look for clusters.

For example, looking at the rate of substantiated abuse or neglect per thousand population plotted against the percentage of children in poverty shows a couple of interesting clusters. Some counties have a very high percentage of poverty but relatively low substantiated abuse. That suggests those counties are doing something right, something we need to learn more about. Other counties have a lower rate of poverty but much higher substantiated abuse. So one of the things we're proposing to do is, again, send a field team in on the ground to look at those counties, to review case records and interview caseworkers using an assessment protocol. We may find something about the different practices or characteristics of those counties that might support the development of a testable hypothesis that could shed some light on this and suggest some promising interventions.

The second cluster that's worth looking at is the counties that have a comparable rate of poverty, but are really all over the board in their rates of substantiated abuse. And I'm dying to know what it is that people are doing differently in different counties. Again we would like to sharpen our hypotheses and identify promising interventions that we can test to see how they might impact the outcomes.
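One simple way to operationalize this cluster hunt, sketched below, is to split counties into quadrants around the medians of child poverty and substantiated abuse per thousand; the median cut-points and all column names are assumptions, not the actual analysis Georgia ran.

```python
# One simple way to operationalize the cluster hunt: split counties into
# quadrants around the medians of child poverty and substantiated abuse per
# thousand. The median cut-points and all column names are assumptions.
import pandas as pd

df = pd.read_csv("ga_counties.csv")  # assumed: county, child_poverty_pct, abuse_per_1000

pov_med = df["child_poverty_pct"].median()
abuse_med = df["abuse_per_1000"].median()

def quadrant(row):
    pov = "high-poverty" if row["child_poverty_pct"] >= pov_med else "low-poverty"
    abuse = "high-abuse" if row["abuse_per_1000"] >= abuse_med else "low-abuse"
    return f"{pov}/{abuse}"

df["cluster"] = df.apply(quadrant, axis=1)

# High-poverty/low-abuse counties may be doing something right; low-poverty/
# high-abuse counties are the other cluster worth a field visit.
targets = ["high-poverty/low-abuse", "low-poverty/high-abuse"]
print(df[df["cluster"].isin(targets)][["county", "cluster"]])
```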

This is a process that is pretty easy to engage policy makers in because they want to know, too. And if you lay it out this way, they get "skin in the game" and then getting the resources you need to do pilot testing of interventions becomes a much less daunting proposition.

So I'd encourage you to use your indicators to gain insight and also to engage policy makers in the formulation of reasonable hypotheses suggested by what you can tease out of the indicators. And then work with program folks and policy staff to identify pilot interventions that you can test to respond to those reasonable hypotheses. And you can then close the loop by using indicators to assess whether those pilot interventions had the desired impact on the outcomes you're interested in.

Questions and Answers

Q. Do you have experience using GIS mapping data?

A. We've done a little bit of that and I think it's really useful, especially for focusing on the maldistribution of resources. If, for instance, you can plot where community health centers are located and compare that to the incidence of preventable diseases or teenage pregnancy, a maldistribution of resources becomes glaringly apparent.
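A bare-bones version of that overlay, assuming standard geopandas shapefiles with hypothetical file and column names, might look like this.

```python
# A bare-bones version of the overlay described: clinic locations plotted on
# top of county teen-pregnancy rates, which makes resource gaps visible at a
# glance. The shapefile and column names are hypothetical.
import geopandas as gpd
import matplotlib.pyplot as plt

counties = gpd.read_file("counties.shp")       # assumed: polygons with a teen_preg_rate column
clinics = gpd.read_file("health_centers.shp")  # assumed: point locations of community health centers

ax = counties.plot(column="teen_preg_rate", cmap="OrRd", legend=True, figsize=(8, 8))
clinics.plot(ax=ax, color="blue", markersize=12)
ax.set_title("Community health centers vs. teen pregnancy rate")
plt.show()
```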

Q. I struggle with the language we use to talk about this. My understanding of social indicators is that the language is very broad. When you use "test," "prove," "hypothesis," it becomes confusing to folks without that background.

A. I concur. When I talk with policy makers, I don't ever use the word "hypothesis." Then they think they're in for a discussion about data. I keep the conversation focused on the hypothesis, but I don't call it that. I talk about "insight into what the right work is." That's the way I like to talk about it. That's something with greater resonance for people. I use "hypothesis" with people in this room; I'd be very careful using that term outside this room. To borrow a phrase, I encourage you to "think quantitatively and act qualitatively." By which I mean, the indicators are good for zeroing in on a possible hypothesis, but the real knowledge comes from going in on the ground, looking through case records, talking with caseworkers, and trying to figure out in the real world what seems to be contributing to the disparities.

Q. When you present data like this, do you discriminate between population-based data and service-driven data for policy makers? Second: when I saw that graph on referrals, two questions came to mind: 1) was there a difference in resource availability in terms of who would be referred and 2) when you pop up with early data, what's your response in terms of being prepared to answer those kinds of questions?

A. You kind of have to stay light on your feet. I use whatever works that doesn't run afoul of the credibility issue. Sometimes mixing administrative data and population-based data gets you there. Sometimes you can just go with population-based data to establish an insight. If I'm trying to get back to the real world that a policy maker's involved with, I sometimes use administrative data. I mix and match as I need to, but I try not to leave myself vulnerable to criticism or do it in a way that would cost me credibility.

On the referrals point: yeah, we concluded that part of what we saw was the result of fewer providers downstate and also of transportation barriers. But those were things we didn't really know until we looked at it that way. And then we raised the next questions.

Q. Do you share data across departments in your state, and if so, how?

A. I do, and I have the scars to prove it. This is a good opportunity to put in a plug for our hosts. The right thing is to say people should share information. I pounded my head against that wall a lot and only moved it imperceptibly. So I finally decided to, like the Rhode Island folks, flow a different way around that rock: I go to the governor's office and others and tell them to provide data to Chapin Hall. What we run up against is that people raise concerns about confidentiality and privacy that are legitimate, but when we looked at this hard in Illinois, the 80/20 rule applied: 80 percent of the reasons they don't share information relate to about 20 percent of what the law actually requires. The laws aren't nearly as restrictive as our culture and folklore make them out to be. Even in the most restrictive setting, substance abuse data, there is an exception: you can use that information without identifiers to support research. We used Chapin Hall as a repository that would receive data with identifiers and then give it back to us without identifiers. And we were then able to use the data in an integrative way that we wouldn't have been able to otherwise.
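A minimal sketch of that repository pattern is below: an intermediary receives records with identifiers and hands back the same records keyed by a salted one-way hash, so departments can link across programs without exchanging identifiers. This is only an illustration of the general idea, not Chapin Hall's actual process, and every name in it is hypothetical.

```python
# A minimal sketch of the repository pattern: an intermediary receives
# records with identifiers and returns them keyed by a salted one-way hash,
# so departments can link across programs without exchanging identifiers.
# This is an illustration of the general idea, not Chapin Hall's process.
import hashlib

def deidentify(records, secret):
    """Replace each record's identifier with a salted one-way hash."""
    out = []
    for rec in records:
        rec = dict(rec)  # don't mutate the caller's data
        raw = (secret + rec.pop("person_id")).encode("utf-8")
        rec["linkage_key"] = hashlib.sha256(raw).hexdigest()
        out.append(rec)
    return out

# Two departments submit the same person; the repository returns records
# that can be linked by key, with no identifiers passing between agencies.
dhs = deidentify([{"person_id": "123-45-6789", "program": "childcare"}], secret="shared-secret")
doe = deidentify([{"person_id": "123-45-6789", "program": "kindergarten"}], secret="shared-secret")
assert dhs[0]["linkage_key"] == doe[0]["linkage_key"]
```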

Q: I've found that some of the confidentiality requirements can be gotten around, but that some of the political people don't want to even be compared across departments.

A: You're absolutely right.

Q: In Hawaii we had the opposite result with respect to integrated offices. We found that if you tell staff you want more referrals, you'll tend to get them. It doesn't really matter if they're integrated or not. But the other factor that enters into it is that you have to do the work on the treatment side. You can get these higher referrals, but then the question is, what percentage of those cases is accepted for treatment? In our case we had to work out arrangements with the managed care providers who provided substance abuse counseling as well as medical treatment, and get the directors of treatment within those plans to agree that yes, they were going to accept those patients and allow them to be treated under the plan so there wasn't a separate cost. So sometimes I think you can get what appear to be the results you're looking for by hypothesizing that if you do this you'll get that. But if you look a little deeper, or try different approaches, you may discover you'll get the same thing without doing it.

A. It's absolutely true that the stuff that gets measured gets done. So you have to be careful about what it is that you're measuring, because people will do the wrong thing. It's really clear that you need to retain your focus on what the right work is. When I know that what we're about is doing the right work, and I know that we're going to do the casework behind it, I'm not above using something, even if I have to hold my nose a little, to get a policy maker to say, "Okay, yeah, we should do that." If you're not a person of good conscience, or you're not prepared to remain focused on what the right work is, you're absolutely right, it can be dangerous.