

Child Welfare Privatization Initiatives: Assessing Their Implications for the Child Welfare Field and for Federal Child Welfare Programs

Ensuring Quality in Contracted Child Welfare Services

Topical Paper #6

December 2008

U.S. Department of Health and Human Services (HHS)
Office of the Assistant Secretary for Planning and Evaluation (ASPE)

This paper was prepared by Planning and Learning Technologies, Inc. in partnership with The Urban Institute for the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, under contract HHSP233200600242U. The opinions expressed in this paper are those of the authors and do not necessarily represent positions of the U.S. Department of Health and Human Services.

This issue paper was written by Nancy Pindus and Erica Zielewski of The Urban Institute, Charlotte McCullough of McCullough and Associates and Elizabeth Lee of Planning and Learning Technologies, Inc.  Paper review and comments were provided by Crystal Collins-Camargo, University of Louisville, Kentucky.

This document is available online at: http://aspe.hhs.gov/07/CWPI/quality


Contents

  1. Introduction
  2. Ensuring Quality in Child Welfare
    1. Background
    2. Setting the Stage for Improved Oversight and Outcomes
    3. Recent Improvements in Quality Assurance and Contract Monitoring Systems
  3. Developing the Infrastructure to Support Monitoring
    1. Overview
    2. Collaboration
    3. The Organization and Roles of Monitoring Staff, Private Agencies, and other Oversight Bodies
  4. Monitoring Child Welfare Programs
    1. The Focus of Monitoring
    2. Monitoring Methods
    3. Information Needed for Contract Monitoring
    4. Staff Training
  5. Using the Information Collected
    1. Reports and Feedback
    2. Performance Issues and Remedies
  6. Conclusions and Lessons Learned

References

Endnotes

Acknowledgements

This project builds on the resources available at the Quality Improvement Center on the Privatization of Child Welfare Services (QIC PCW), funded by the Children's Bureau. We want to acknowledge all of the state and county child welfare administrators and private providers who shared their experiences with us and the QIC PCW. Additional information on child welfare privatization issues is available through the QIC PCW Website: http://www.uky.edu/SocialWork/qicpcw/.


Introduction

Privatizing a child welfare service does not relieve the public child welfare agency of its responsibilities to ensure that children and families are well served and that tax dollars are effectively spent.  In addition to developing and implementing policy, the public agency continues to be accountable for high-quality and effective services that comply with state and Federal rules, and achieve specified outcomes and results (Freundlich & Gerstenzang, 2003; McConnell, Burwick, Perez-Johnson, & Winston, 2003). 

This is no easy undertaking. States struggle to develop thorough quality assurance systems, partly because evidence about best practice in this area is in short supply. In 2007, the Children's Bureau's Quality Improvement Center on the Privatization of Child Welfare Services (QIC PCW) found that public agency administrators struggle to develop quality assurance systems that systematically review contract performance while enabling contractors to creatively manage the services they are enlisted to provide.[1]

The purpose of this paper is to help public agency child welfare administrators better monitor and assure the quality of contracted services within the context of the agency's overall quality assurance/improvement system. This paper explains the importance of planning contract monitoring and accountability systems and of training staff to be effective contract monitors. It describes the types of monitoring activities, as well as methods for collecting and using monitoring information. The paper provides examples of some of the decisions that must be made about what will be measured, and of how child welfare agencies have worked collaboratively with providers to develop realistic and constructive approaches to contract monitoring.

An overarching theme of this and other papers in the series is partnership.  When public agencies contract for services, they are seeking one or more partners to share the risks, rewards, and responsibilities of delivering services to children and families in the child welfare system.  To the extent allowed by procurement rules, a collaborative public-private planning process can ensure that consensus is reached on the broad goals and expectations of the quality assurance and monitoring systems.

This is the sixth and final paper in a technical assistance series. The project was funded in 2006 by the Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services (DHHS, ASPE).  The paper series is designed to provide information to state and local child welfare administrators who are considering or implementing privatization reforms.

For the purpose of this paper series, privatization is defined as the contracting out of the case management function with the result that contractors make the day-to-day decisions regarding the child and family's case. Typically, such decisions are subject to public agency and court review and approval, either at periodic intervals or at key points during the case. However, the following discussion about contract monitoring is applicable to any public/private partnership, regardless of the extent to which the service has been privatized.

This paper builds on information already presented in other papers in this series and makes reference to the other papers throughout. These are available online at http://aspe.hhs.gov/hsp/07/CWPI/.

  • Assessing Site Readiness: Considerations about Transitioning to a Privatized Child Welfare System
  • Program and Fiscal Design Elements of Child Welfare Privatization Initiatives
  • Evolving Roles of Public and Private Agencies in Privatized Child Welfare Systems
  • Evaluating Privatized Child Welfare Programs:  A Guide for Program Managers
  • Preparing Effective Contracts in Privatized Child Welfare Systems

This paper series incorporates research conducted under the Quality Improvement Center on the Privatization of Child Welfare Services (QIC PCW), funded in 2005 by the Children's Bureau, Administration for Children and Families, U.S. Department of Health and Human Services. It also draws from research on privatization in other, closely related social services fields. Additional information for this paper comes from field experience and telephone discussions with state and county child welfare administrators and private providers.


Ensuring Quality in Child Welfare

A.    Background

The role of monitoring in child welfare is a critical but complex one. A 1997 U.S. Government Accountability Office (GAO) study found that monitoring contractors' performance was "the weakest link in the privatization process" (U.S. GAO, 1997, 14). Despite the importance of monitoring, most studies conducted during the 1990s noted myriad problems with public agency approaches to monitoring, including: staff shortages in the public agency's monitoring units; a lack of in-house expertise in effective contract management; inconsistent approaches, resulting in a tendency for monitoring to be overdone or underdone from one contract to another; and a disconnect between an agency's contract monitoring work and its overarching quality assurance and improvement activities (Freundlich & Gerstenzang, 2003; McCullough & Freundlich, 2007).

The National Child Welfare Resource Center for Organizational Improvement (O'Brien and Watson, 2002) notes that quality assurance (QA) is the term most often used by child welfare administrators and senior managers to describe efforts to assess their agencies' success in working with children and families. The NRC notes that, in practice, QA has had no consistent meaning across child welfare agencies. Until recently, QA systems consisted largely of case record audits to monitor and report on the extent of compliance with state and Federal requirements. QA efforts have ranged from administrative case review systems, to periodic research studies, to reviews of regular statistical compliance reports, to comprehensive initiatives involving all these elements and more.

While all public agencies conduct some form of quality assurance to review the quality and impact of their directly delivered services, state systems differ in the breadth and depth of this work. It is noteworthy that in the first round of Child and Family Services Reviews (CFSRs), fully one-third of states were found to be out of substantial compliance with the systemic factor requiring a state-level QA system.[2]

Additionally, in many states, a child welfare agency's QA system focused primarily on the quality of services delivered directly by the public agency. Results of those efforts were not connected to the findings from contract monitoring, which was done by small contract monitoring units operating on the margins of the agency. The monitoring function and resulting reports often had minimal impact on the services delivered by the agency or on future procurement decisions (McCullough & Freundlich, 2007). Several early studies on privatization found a general lack of accountability and performance criteria in privatized contracts (Nightingale and Pindus, 1997; Petr and Johnson, 1999); and without performance targets, it is difficult to hold providers accountable (Freundlich & Gerstenzang, 2003).

The isolation of contract monitoring was only one part of the problem. Perhaps an even larger issue was the compliance-driven nature of traditional monitoring efforts, which focused on ensuring that contractors did not do anything wrong, rather than on any expectation that they might do things better. Monitors looked at whether providers served the expected number of clients and delivered the expected number of service units, not whether children and families benefited from the services they received or whether the system operated more effectively.

B.    Setting the Stage for Improved Oversight and Outcomes

In recent years, states have begun to invest more resources in contract monitoring and quality assurance systems, and to build more robust systems. Arguably, the most powerful motivating factors for states to improve and integrate contract monitoring and QA activities have been the passage of the Adoption and Safe Families Act (ASFA) and the implementation of the CFSRs.  With these two events, there is now a common set of outcomes and systemic factors on which all states are assessed. These outcomes and measures typically provide the foundation for the development of outcomes and performance measures for inclusion in provider contracts and serve as the focus for monitoring and quality improvement efforts (McCullough and Freundlich, 2007).

The CFSRs, initiated in 2000, are a three-stage process consisting of a Statewide Assessment, an on-site review of child and family services outcomes and program systems, and a program improvement plan. The reviews are structured to help states identify strengths and areas needing improvement within their agencies and programs. They address three outcome areas (safety; permanency; and child and family well-being) and seven systemic factors (statewide information system; case review system; quality assurance system; staff and provider training; service array and resource development; agency responsiveness to the community; and foster and adoptive parent licensing, recruitment, and retention). 

Once the state has completed the first two stages, it prepares a program improvement plan to address the areas that have been found to be deficient. The Children's Bureau monitors progress on the plan on an ongoing basis and works with the state to determine when the issues needing improvement have been addressed. In addition to providing states with a common set of expectations, the CFSRs also provided a roadmap for how they could monitor progress. For some states, the CFSR was the impetus for new types of collaborative relationships with private agencies, as described later.

The Federal Government has also encouraged improved tracking and oversight of cases by providing enhanced funding for State Automated Child Welfare Information Systems (SACWIS), developing requirements for the collection and reporting of adoption and foster care data through the Adoption and Foster Care Analysis and Reporting System (AFCARS),[3] and creating requirements for citizen review panels and peer reviews in the Child Abuse Prevention and Treatment Act (CAPTA).[4]

Another separate but related factor that has strengthened quality assurance for contracted services is the expanded use of performance-based contracts (PBC). States and jurisdictions use performance-based contracts to improve agency outcomes and, by doing so, focus more resources on the quality and impact of contracted services. There are several parallels between performance-based contracting and QA efforts. A well-developed and well-implemented PBC initiative inherently supports agency QA efforts through similar processes of identifying agency goals and measures, collecting data, and modifying systems (or contracts) to better align contract incentives with agency goals (Lee, Allen and Metz, 2006). Contracts are being monitored, and in many cases rewarded, based on child and family outcomes. These and other risk-based contracts require that special attention be given to contract monitoring because providers are often at financial risk if they do not meet performance expectations. For a more detailed discussion of this issue, see Topical Paper #2, Program and Fiscal Design Elements of Child Welfare Privatization Initiatives (http://aspe.hhs.gov/hsp/07/CWPI/models/index.shtml).

C.    Recent Improvements in Quality Assurance and Contract Monitoring Systems

As a result of all of these factors, many public and private child welfare agencies today collect a range of information on program quality, practice, client outcomes, cost-effectiveness, and satisfaction, and have more sophisticated tools and skills for doing so. In most states, quality assurance efforts involve both quantitative measures (client outcomes, worker caseloads, casework activities) and qualitative measures (e.g., how well stakeholders believe the system is working). Using these data, agencies identify problems and implement improvement strategies on an ongoing basis. As a way of differentiating these efforts from traditional compliance monitoring, the new approaches are often called continuous quality improvement (CQI) systems. The new approach improves upon traditional compliance monitoring in three ways (O'Brien and Watson, 2002):

  • Quality improvement programs are broader in scope, assessing practice and outcomes, as well as compliance.
  • Rather than simply determining if services were delivered as required or whether contractors were in compliance with federal, state, and agency requirements, quality improvement programs attempt to use data, information, and results continually to effect positive change.
  • Quality improvement programs engage a broad range of internal and external partners in the quality improvement process, including top managers, staff at all levels, children and families served, and other stakeholders.

Many states have enhanced monitoring and QA efforts by incorporating elements of the CFSR process into their quality improvement and contract monitoring systems. For example, New Mexico used the CFSR process as a rallying cry to bring all stakeholders to the table.[5] The process, which has been evolving since 2000, includes both internal and external stakeholders, and takes a systems perspective to quality assurance, quality engineering, and quality improvement. The indicators included in the CFSRs enabled all stakeholders to talk about their measurements in a common way, to understand what others are trying to accomplish, and to make decisions about priorities, including the allocation and reallocation of resources. The state has included CFSR outcomes in Requests for Proposals and providers' contracts. Working with providers to educate them about the CFSR goals has helped the state to redirect resources to families at greatest risk and to services that are most closely related to the CFSR goals. Because the state used a data-driven approach to identify its child welfare needs, legislators and providers have been more open to alternative approaches. The aim is not to shut providers down, but rather to have providers extend their mission in order to more directly address CFSR goals. In cases where a provider may have difficulty changing focus, the New Mexico Children, Youth, and Families Department has worked with them to identify other funding sources or to help them change their work.

Like New Mexico, many other states are using CFSR outcomes and indicators in contract requirements and requiring monthly or quarterly performance reports from contractors. These reports, not unlike CFSR data profiles, allow contract monitors and contractors to continually examine aggregate data to identify trends and possible problems. Desk reviews and problem-solving meetings may be supplemented by onsite visits/interviews. Case record reviews, often modeled after the onsite portion of the CFSR, allow the contractor and contract monitor to gather qualitative information that is not evident from reported data. Both sources of information help to drive continuous quality improvement efforts.
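These quarterly reviews are, at bottom, simple trend checks. As a minimal illustration, the Python sketch below flags a contract measure that has declined for several consecutive quarters; the measure, the figures, and the three-quarter rule are invented for demonstration, not drawn from any state's actual reports.

```python
# Minimal sketch of an aggregate trend check of the kind described above:
# flag a contract measure that has declined for several consecutive quarters.
# The measure, the data, and the three-quarter rule are hypothetical.

def declining_streak(values: list[float]) -> int:
    """Length of the run of consecutive declines ending at the latest value."""
    streak = 0
    for prev, curr in zip(values, values[1:]):
        streak = streak + 1 if curr < prev else 0
    return streak

# Quarterly rate of timely monthly caseworker visits (percent), oldest first.
timely_visits = [94.0, 92.5, 91.0, 88.5]

if declining_streak(timely_visits) >= 3:
    print("flag for the next problem-solving meeting: 3+ consecutive declines")
```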

Other initiatives to improve quality assurance of contracted services include the three projects funded under the QIC PCW. Three states (Florida, Illinois, and Missouri) have designed and implemented contracted services that integrate performance-based contracts with expanded quality assurance systems. The pilot programs aim to use data to identify quality practice techniques and to improve both practice and client outcomes. Each project has identified a range of outcomes and other indicators -- often practice standards, such as levels of visitation and/or contact between workers and clients -- that appear to be related to outcome achievement. These outcomes and indicators are incentivized in the performance-based contracts, and data on performance are monitored through expanded quality assurance systems.


Developing the Infrastructure to Support Monitoring

A.      Overview

Ideally, public agencies design their specific contract monitoring/QA approach while they are designing the service model that is to be contracted.  Service goals and objectives, and reporting requirements, should be clarified at the outset and incorporated into contracts. Decisions about what is to be monitored, how monitoring is done, and how the information will be used, should be part of the initial contract discussions. These issues are addressed in Topical Paper #5, Preparing Effective Contracts in Privatized Child Welfare Systems (http://aspe.hhs.gov/hsp/07/CWPI/contracts/index.shtml).

This section outlines issues that agencies should address in building their contract monitoring infrastructure. It also examines how the public agency can design and implement its monitoring activities in partnership with service providers, and how the responsibilities for quality assurance and monitoring can be shared by public and private agencies and other oversight bodies.

Text Box: Risk Assessment by Florida's Department of Children and Families (DCF)

Consistent and uniform risk assessment permits the Contract Oversight branch of DCF to apply its contract monitoring resources systematically to the areas of greatest need.

What factors determine the level of risk to DCF?

Risk for DCF contracted service delivery is classified into four weighted categories, including:

  1. Annual Dollar Value of the Contract: the higher the annual dollar value, the higher the risk the Department assumes in contracting with the provider
  2. Nature of Service: weights are assigned to the type of service depending on the risk associated with each service category
  3. Prior Provider Performance and Corrective Actions: providers who have previously had serious financial, administrative, or program deficits, or have had difficulty being responsive to Department requirements, are considered to present a higher risk
  4. Last Contract Monitoring Visit: the period of time since the last visit is a heavily weighted factor in the risk assessment, with a longer time period presenting a higher risk

Source: http://www.dcf.state.fl.us/publications/policies/075-8.pdf [PDF - 27 pages]
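A weighted scheme like the one in the text box can be expressed as a simple composite score. The Python sketch below is purely illustrative: the category weights, the 0-to-1 rating scales, and the review-level cutoffs are invented for demonstration and are not Florida DCF's actual values.

```python
# Illustrative composite risk score modeled on the four weighted categories
# in the text box above. All weights, scales, and cutoffs are hypothetical.

# Hypothetical weights for the four risk categories (sum to 1.0).
WEIGHTS = {
    "annual_dollar_value": 0.35,
    "nature_of_service": 0.25,
    "prior_performance": 0.25,
    "time_since_last_visit": 0.15,
}

def composite_risk(ratings: dict[str, float]) -> float:
    """Combine per-category ratings (each 0.0-1.0, higher = riskier) into one score."""
    return sum(WEIGHTS[cat] * ratings[cat] for cat in WEIGHTS)

def review_level(score: float) -> str:
    """Map a composite score to a monitoring intensity (hypothetical cutoffs)."""
    if score >= 0.7:
        return "full on-site monitoring visit"
    if score >= 0.4:
        return "targeted on-site review"
    return "annual desk review"

# Example: a high-dollar contract with a clean performance history.
provider = {
    "annual_dollar_value": 0.9,    # large contract
    "nature_of_service": 0.6,      # moderately high-risk service category
    "prior_performance": 0.1,      # no serious prior deficits
    "time_since_last_visit": 0.5,  # visited within the last two years
}
score = composite_risk(provider)
print(f"risk score {score:.2f} -> {review_level(score)}")
```

A scheme like this makes the triage transparent: contractors scoring below the on-site threshold fall to desk review, consistent with the risk-based visit frequency described in the surrounding text.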

How public agencies monitor contractors is as varied as the types of contracts that public agencies have with private agencies.  For each contract, the public agency must have a monitoring plan, which lays out the steps for monitoring, as well as the methods and techniques to be used. Ideally, the plans also clearly define the roles of public agency staff and private contractors in ensuring accountability.

The public agency's monitoring plan "defines precisely what a government must do to guarantee that the contractor's performance is in accordance with contract performance standards" (Eggers, 1997, 22). Eggers (1997) lays out steps that are important to designing a monitoring plan. The monitoring plan should be quantifiable and specific, meaning that it includes information about the reporting requirements, the frequency and number of meetings to be held, complaint procedures, and a way to access the provider's records if needed. A monitoring plan should also include information about the number of individuals who are required to monitor the contract, who those individuals are, and what their responsibilities should be. Finally, the monitoring plan should tailor the monitoring tasks to the specific services being provided and/or the outcomes being measured. Different services and outcomes require different types and levels of monitoring, which must be taken into account in the plan. Similarly, different providers may need different monitoring structures. For example, Florida bases the frequency of its on-site visits on the risk assessment of the contractor. Those contractors that do not receive an on-site visit receive annual desk reviews (see preceding text box).

In many states, the key elements in monitoring plans are prescribed by statute or administrative rule. In Florida, for example, the Department of Children and Families (DCF) is required to adopt written policies and procedures for "monitoring the contract for the delivery of services by lead community-based providers [that] at a minimum, address the evaluation of fiscal accountability and program operations, including provider achievement of performance standards, provider monitoring of subcontractors, and timely follow-up of corrective actions for significant monitoring findings related to providers and subcontractors" (Florida Statute 409.1671[2][a]).

B.    Collaboration

As Eggers (1997) points out, monitoring should be viewed as a preventive rather than an adversarial function. The contractor should be considered a strategic partner and be given incentives to innovate, improve, and deliver better service. For this to happen, a relationship of trust must be built between the public agency and the contractor, and performance terms must be mutually understood. Ideally, this begins in the planning stage with developing a monitoring system that is clearly understood and accepted by both public and private agencies. The process should include designating individuals from the public agency and from the contractor staff who will communicate on a regular basis, such as through monthly meetings or conference calls. 

In practice, state procurement regulations and practices vary with respect to the timing and extent of communication between agency officials and contractors prior to the award of a contract. If not prohibited, some agencies involve contracted providers and other community stakeholders in the process of determining which outcomes to measure and in defining a collaborative approach to quality assurance and contract monitoring.  There are several examples of states that have used a collaborative decision making process to develop performance measures, penalty and reward mechanisms, and feedback loops. 

One example is Missouri. Prior to the initiation of performance-based contracting in Missouri, the state undertook a two-year developmental process to involve community stakeholders in framing the content of service contracts. Key stakeholders included executives of private contracting agencies, judges and other juvenile court personnel, and representatives of advocacy groups. The resulting contracting model provides for strong partnership communication and routine feedback via interactions between contracting agencies and the administrators in the Missouri Children's Division (Watt et al., 2007).

Contractors can provide helpful advice in developing the performance indicators that they are meant to achieve.  An advantage to this approach is that it lessens the likelihood of misunderstandings over the nature of the performance measures during the contract period (Eggers, 1997).  Furthermore, successful collaborative planning often carries through to implementation.

C.    The Organization and Roles of Monitoring Staff, Private Agencies, and other Oversight Bodies

i.      The Organization and Roles of Public Agency Staff

Individuals responsible for monitoring have different titles from one state to another.  While several different people with similar titles might be responsible for different aspects of monitoring within a state, it is not uncommon for their roles to blur in actual practice.  In some states, all staff responsible for monitoring reside in the same division within the central office or in the district/region. In other states, staff might operate out of totally different divisions within the public agency, with contract compliance being part of a procurement unit, while program monitoring is operated out of a program/service or licensing division. Some jurisdictions rely upon a single individual to be the primary monitor; others have a team approach.

While there is no evidence that one public agency staffing approach is preferable to another, it is important for staff operating across divisions to communicate and collaborate in the timing and frequency of their quality assurance or monitoring activities, share findings, and strive to reduce the duplicative and overlapping auditing and program monitoring functions that have proven problematic in some privatization initiatives. In Florida, when the burden of overlapping QA/monitoring became clear, DCF established a workgroup to streamline monitoring/audit activities, including efforts to coordinate concurrent Title IV-E, mental health, Medicaid, licensing, and community-based care evaluation activities (Freundlich & Gerstenzang, 2003).

In addition to the need for strong communication and collaboration across public agency divisions, it is critical to have the support and direction of upper management in the design and implementation of monitoring efforts. Strong leadership promotes consistent messages throughout the public agency and to providers, and facilitates the allocation of sufficient resources for monitoring and support efforts. Discussions with several states indicate that contract monitoring and quality assurance models are still works in progress; states are working to establish the best structure for their programs. As described below, Florida provides a good example of a state working to improve its system based on lessons learned from its prior efforts.

ii.      The Private Agencys Responsibility

To this point, the discussion has focused primarily on the public agency as the entity monitoring its contract with the private provider. However, it is important to note that most recent contracts require private agencies to have the capacity to monitor their own performance and to use a robust quality assurance/improvement system to identify and remedy problems. Private agencies with performance-based contracts often rely upon methods similar to those used by their public agency counterparts: ongoing review of performance data, chart reviews, focus groups, problem-solving mechanisms at the practice and systems levels, and satisfaction surveys that tell them what is working and what needs improvement.

Prior to 2008, Florida's Department of Children and Families (DCF) operated an integrated, tiered approach to monitoring its local community-based care (CBC) agencies. Florida's monitoring system involved three tiers:

  1. Tier 1: Lead agencies developed and implemented a Quality Management Plan that met minimum requirements established by DCF. Lead agencies reviewed their in-house and subcontracted services and reported the findings back to DCF.
  2. Tier 2: DCF staff approved lead agency Quality Management Plans and validated findings through case reviews from lead agency Tier 1 monitoring. The approach involved several monitoring processes that were conducted on-site, simultaneously: contract oversight, case reviews, and licensing of lead agencies.
  3. Tier 3: DCF staff conducted statewide Child and Family Services Reviews to check for compliance with federal reviews, provided technical assistance to lead agencies in their quality assurance activities, and maintained Florida's Program Improvement Plan (OPPAGA, 2006).

This tiered approach to monitoring was designed to give the CBC lead agencies the flexibility to monitor their contracts, but also to provide a structure in which DCF could oversee how the system was working.

In practice, the tiered monitoring system was not as effective in tracking lead agencies' and subcontractors' performance as planned (OPPAGA, June 2008). For instance, lead agencies were not completing their Tier 1 quality assurance reviews in a timely manner (and often were not reviewing the required number of cases). This resulted in significant delays between Tier 1 and Tier 2 reviews, which made it difficult for state staff to validate earlier findings, that is, to match the quality assurance data collected by the lead agency with what was currently being reported in case records.

In consultation with Chapin Hall Center for Children, Florida restructured its oversight procedures to improve its ability to track contractual compliance and agency performance; some of the major changes include (OPPAGA, June 2008):

  • Developing uniform casework practice standards and ensuring quality assurance reviews assess critical standards that affect child safety, permanency, and well-being, rather than focusing on discrete compliance requirements;
  • Collecting fiscal and program information from lead agencies each quarter. Program indicators include those that most affect lead agency expenditures, including caseloads, case entry rates, and the proportion of cases entering foster care;
  • Developing new quality assurance implementation and oversight teams, made up of lead agency and state staff, that conduct quarterly reviews of the lead agencies. Using a new quality assurance instrument with a common set of quality assurance standards, regional and lead agency staff conduct side-by-side reviews of a subset of cases to help interpret information in case files;
  • Assessing child well-being through the new on-site quality assurance instrument, which contains a series of questions on educational and health services and whether these services are meeting children's needs;
  • Requiring case management supervisors to review 100% of cases on a quarterly basis using a qualitative discussion guide and then to provide timely feedback to caseworkers on the quality of services and corrective action if needed;
  • Targeting practice trends that had not shown improvement, specifically placement stability, recurrence of abuse and neglect, and reentry into out-of-home care; and
  • Offering additional training to public and private agency staff on data analysis and on identifying relationships between outcomes, service delivery, and service quality.

While state officials report it is too early to determine the impact of these changes on agency oversight and performance, the state's Office of Program Policy Analysis and Government Accountability conducts ongoing assessments of the state's child welfare system and will continue to produce reports on its findings. For more information about the new monitoring process, go to: http://centerforchildwelfare.fmhi.usf.edu/kb/dataper/QA%20Implementation%20Plan%202008%20-%2003-11-08.pdf [PDF - 10 pages]

iii.      Oversight Bodies

Some states supplement their staff-driven and private agency quality assurance and contract monitoring activities with oversight by independent community-based stakeholder bodies. These groups are charged with reviewing overall agency performance and helping to identify and remedy barriers to success. Some of these bodies are created by the public agency, while others are appointed by the Governor.  Many states have legislatively mandated bodies charged with helping to continually review performance of both the public agency and its contract providers.

For example, when Milwaukee County's child welfare system was taken over by the state, the state legislature created the Partnership Council by statute. The Partnership Council is an independent advisory body composed of state legislators, county board members, and gubernatorial appointees. Among those appointees are the Children's Court Presiding Judge, medical leaders, public school leaders, child advocates, public policy advocates, and guardian ad litem representatives. All meetings include public and private partners.[6] One member of the Partnership Council observed, "As you look at the three-legged stool holding up any system, community involvement and accountability are good things. Having an independent body assist in bringing public and private partners to the table to create improvements has been very effective in Milwaukee."[7]

Several states and jurisdictions have formed institutional forums for resolving problems and evaluating the public/private partnership. For example, one committee may be responsible for operational issues and another for technical issues, while a senior executive committee addresses strategic issues. Illinois uses such a strategy. The Child Welfare Advisory Committee (CWAC), created by the Illinois General Assembly, meets quarterly to discuss any and all issues related to child welfare in Illinois, including any issues related to contracts and contract monitoring. According to the current director of DCFS, the CWAC's meetings set the stage for collaboration between the private providers and DCFS. Moreover, these meetings "keep the vehicle open for [the] private agencies to raise any issues or concerns" (McEwen, 2006a). The CWAC's meetings provide an important avenue for private providers and the public agency to come together to discuss Illinois' child welfare system.

In other states, there are ongoing, less formal public-private communication mechanisms, such as monthly meetings between the public agency and its contract providers to share data, communicate new information on policies or procedures, and discuss strategies for improvement. For example, the Tennessee Department of Children's Services (DCS) holds monthly reviews of performance data with contractors. A spokesperson for Cornerstone, a child welfare service provider that has a performance-based contract with Tennessee DCS, indicates that the monthly reviews and the relationship with DCS are a critical part of the success of the contract because they have helped the agency meet its targets. Similarly, in Missouri, the state agency meets regularly with private partners, alternating between the program directors (who manage the contract daily) and the CEOs (who bring big-picture issues to the table). Communication occurs frequently between the Department's oversight and management staff and the respective contractors. A CQI process has been implemented locally with the contractors, in which the public/private partners solve problems on issues that arise in implementing the foster care contract.[8]


Monitoring Child Welfare Programs

A.      The Focus of Monitoring

Monitoring efforts can focus on different aspects of a contractors performance, including:

  • Compliance with contract terms and state and federal requirements
  • Fiscal performance
  • Case decision making and/or collaborative reviews
  • Performance 

i.            Compliance Monitoring

Public agencies monitor a private provider's compliance with various state and federal regulations, and with the terms of the contract. As noted, until a decade ago compliance monitoring was the primary focus of contract monitoring. Monitoring compliance is often tied to monitoring a provider's processes. For instance, the Texas child welfare agency requires a contractor to maintain sufficient records that adequately account for the use of awarded funds and to provide reasonable evidence that the service delivery complies with contract provisions (Texas Department of Family and Protective Services, 2008). Compliance is included as part of its programmatic monitoring, and involves the following activities:

  • Reviewing the service provisions of the contract to determine what the contractor is to provide and the desired quality
  • Reviewing the contractors reports and other materials to determine if services are being provided
  • Interviewing direct delivery staff and others to determine if the services are being performed according to the contract (Texas DFPS, 2008). 

ii.            Fiscal Monitoring

Public agencies are responsible for ensuring that contract dollars are spent appropriately. Agencies vary with regard to whether fiscal monitoring is conducted by a separate unit in state government, by the child welfare contracting agency itself, or through an independent audit (paid for by the private agency), and agencies differ in the level of detailed oversight required. At a minimum, fiscal monitoring focuses on whether program costs, including administrative costs, are reasonable and necessary to achieve program objectives. It involves the following activities (a simple automated check is sketched after this list):

  • Reviewing the contractors bills when they are received to determine if appropriate units of measure are reported and that costs (units x rate) are correct;
  • Comparing budgets and/or budget limits to actual costs to determine if the contractors expenditures are likely to be more or less than budgeted;
  • Obtaining reasonable documentation that services billed were actually delivered according to the contract; and
  • As appropriate, comparing bills with supporting documentation to determine that costs were allowable, necessary, and allocable.
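The first two checks above lend themselves to partial automation. The Python sketch below is illustrative only: the record layout, the figures, and the 10 percent variance threshold are assumptions, not any agency's actual rules.

```python
# Illustrative versions of the first two fiscal checks above: verifying that
# billed amounts equal units x rate, and comparing actual spending to budget.
# The record layout, figures, and variance threshold are all hypothetical.

def check_bill(line_items: list[dict]) -> list[str]:
    """Flag line items whose billed amount does not equal units x contracted rate."""
    problems = []
    for item in line_items:
        expected = item["units"] * item["rate"]
        if abs(expected - item["billed"]) > 0.005:  # tolerate rounding to cents
            problems.append(
                f"{item['service']}: billed {item['billed']:.2f}, "
                f"expected {expected:.2f}"
            )
    return problems

bill = [
    {"service": "family counseling", "units": 12, "rate": 85.00, "billed": 1020.00},
    {"service": "supervised visitation", "units": 8, "rate": 60.00, "billed": 500.00},
]
for issue in check_bill(bill):
    print("FLAG:", issue)  # flags the visitation line: 500.00 billed vs 480.00 expected

# Budget comparison: is the contractor on track to exceed its budget?
actual, budgeted = 45_000.00, 40_000.00
if actual / budgeted > 1.10:  # hypothetical 10% variance threshold
    print("expenditures exceed budget by more than 10%; review with the provider")
```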

iii.            Case Decision-Making Monitoring

Public agencies can also monitor the case decision-making process through collaborative reviews with providers. In some states, the public agency works very closely with private providers to make decisions about cases on an ongoing basis. This dual case management approach is used in places like Philadelphia, Pennsylvania. For a detailed discussion of how seven jurisdictions have divided and shared case management decision-making, see Topical Paper #3, Evolving Roles of Public and Private Agencies in Privatized Child Welfare Systems (http://aspe.hhs.gov/hsp/07/CWPI/roles/index.shtml).

iv.            Performance Monitoring

Increasingly, with the expansion of performance-based contracts, performance monitoring has become a central focus of most public agencies' monitoring efforts. The U.S. GAO defined performance monitoring as "the ongoing monitoring and reporting of program accomplishments, particularly towards pre-established goals.... Performance measures may address the type or level of activities conducted (process), the direct products and services delivered by a program (outputs), and/or the results of those products and services (outcomes)" (U.S. GAO, 1999, 6). Typically, performance targets in child welfare are stated as increases or decreases in a specified factor, such as a reduction in the average length of time a child stays in foster care, or other measures directly linked to CFSR measures.
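To make those mechanics concrete, the sketch below checks one hypothetical target of this kind, a 10 percent reduction in the average length of stay in foster care. The dates, baseline, and target are all invented for illustration.

```python
# Illustrative performance-target check: did the contractor reduce the average
# length of stay in foster care relative to a baseline? All figures invented.
from datetime import date

# Hypothetical (entry, exit) dates for children who exited care this period.
exits = [
    (date(2008, 1, 10), date(2008, 9, 2)),
    (date(2007, 11, 5), date(2008, 10, 20)),
    (date(2008, 3, 1), date(2008, 8, 15)),
]

lengths = [(out - entry).days for entry, out in exits]
avg_stay = sum(lengths) / len(lengths)

BASELINE_DAYS = 320.0     # prior-year average length of stay (hypothetical)
TARGET_REDUCTION = 0.10   # contract target: a 10% reduction (hypothetical)

met = avg_stay <= BASELINE_DAYS * (1 - TARGET_REDUCTION)
print(f"average stay {avg_stay:.0f} days; target met: {met}")
```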

B.        Monitoring Methods

As previously noted, most public and private agencies use a myriad of methods to assess performance, including desk reviews, case record reviews, site visits/interviews, fiscal audits, customer satisfaction surveys, and independent evaluations.  Which methods a public agency uses to monitor its contracts depends on the outcomes being measured, as well as other factors, such as the level of monitoring required to ensure accountability and the funds available to support monitoring activities. Examples from three jurisdictions are provided below:

  • Kansas conducts annual administrative reviews, in which reviewers from the public agency visit the contractor's premises to ensure adherence to general contract requirements like resource family licensing. Staff from the Central Office review various contractor-produced reports, as well as outcomes monitoring reports generated by the Children and Family Services Division. Then they analyze the data to identify trends in performance results.
  • New York City has developed an evaluation tool called EQUIP (Evaluation and Quality Improvement Protocol).  EQUIP pulls together information from several sources including administrative data, information from case record reviews, interviews with child welfare clients and agency workers, and field observations.  All of these data are entered into the system to produce an EQUIP score.  This score, which is given to each agency, is used to compare agency performance.[9]
  • In Franklin County, Ohio, public agency staff are co-housed in the private agencies where they can conduct case reviews and work collaboratively on strategies to improve performance. Public agency staff do not do home visits or other activities that might be seen as undermining the managed care staff with the families.  Their role is to offer support and to also monitor services and contract compliance.[10]

C.        Information Needed for Contract Monitoring

A critical part of contract monitoring is determining what information is needed to monitor services, costs, and outcomes. The information needed is based on answers to a few key questions:

  • What are contracts expected to achieve?
  • What needs to be measured to assess contractor performance in achieving goals?
  • Where will the data come from?

i.             Focus on What the Agency is Trying to Achieve through Contracting

Child welfare administrators need to examine the mission and goals of the child welfare agency and the role of the private agencies in light of the Federal outcomes of safety, permanency, and well-being. What is the problem the agency is trying to solve through contracted efforts? What results are needed? What program components and actions will lead to the desired results? Further, how can performance measures in contracts, and the monitoring of contracts, help the public agency achieve these results? These questions should be addressed first at the agency level, as part of the agency's continuous quality improvement process, and then incorporated into contracts. Some organizations have found it helpful to use flow charts or logic models to illustrate the relationship between activities and expected outcomes. These models can then be used to define measures and identify sources of information.

ii.            Define the Measures

Performance measures can include both outcome and process measures. Outcome measures focus on the results of the services that contractors provide, as well as intermediate indicators of success, such as rates of family engagement in team meetings to develop case plans, timeliness of case plans, and timeliness of reviews. Process measures focus on whether and how services are delivered. They include measures such as the number of children served each month, completion of assessments, accuracy of referrals, staff caseloads, staff vacancies and training, and data reporting. Client satisfaction can also be thought of as a process measure.

Selecting and operationalizing the performance measures that will be used to determine success of the initiative is neither straightforward nor without controversy. The challenge is to choose the right number of meaningful, measurable outcome and performance measures that are both reliable and valid.  Measures must accurately show how well the initiative is meeting its goals without overly burdening either the public agency or the contractor with costly data collection, analysis, and reporting requirements. While it is important not to overburden providers with too many reporting measures, by focusing attention on too few measures, a contract may inadvertently encourage providers to act in ways that contradict other program goals (McCullough & Freundlich, 2007). For example, examining only the timeliness of reunification or achievement of other permanency goals in the absence of measures related to re-abuse and re-entry could create potential unintended incentives in case management contracts:  contractors may focus on timely reunification without sufficient attention to ensuring lasting permanency. 

Another key question relates to how the outcomes are selected. Many states struggle to find the appropriate balance between using consistently defined statewide measures that allow for comparisons across the state, and community-specific measures that reflect local interests and needs (Freundlich & Gerstenzang, 2003).

At the time that Requests for Proposals are developed and/or private agency contracts executed, public agencies must be clear about the types of data that will be gathered and how the information will be collected.  The two main types of data that an agency could potentially collect are:

  • Quantitative administrative data to illustrate aggregate trends in service provision and client outcomes
  • Qualitative or descriptive data gathered from reviews of case notes, through interviews and focus groups with children, families, agency staff, and key external stakeholders, through stakeholder satisfaction surveys, or through field observations

Each of these types of data helps the public agency answer different types of questions. For instance, quantitative data answer questions such as how many children exited care in a six-month period. Quantitative data can provide consistent measures, across providers or over time, of service provision and client outcomes that are missing from many other methods of review. While important, these data do not provide any information about the process, for example, of how children exit care. Case record and qualitative case reviews provide more information about the "black box" of how a certain outcome is achieved. They can also help ensure that processes are operating correctly. For instance, one goal of a case record review might be to ensure that all licensed foster parents have gone through appropriate background checks. Qualitative interviews and focus groups provide an even greater level of detail about how well the system is working. For example, a site visit that includes interviews with families can provide information about the quality of services that may be missing from a review limited to quantitative data.
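For instance, a count like the one just described is a straightforward aggregation over case records. The sketch below is illustrative only; the records and field names are hypothetical.

```python
# Minimal sketch of the kind of question quantitative data can answer: how many
# children exited care, by exit type, in a six-month window. Data hypothetical.
from collections import Counter
from datetime import date

records = [
    {"child_id": "A101", "exit_date": date(2008, 2, 14), "exit_type": "reunification"},
    {"child_id": "A102", "exit_date": date(2008, 4, 3),  "exit_type": "adoption"},
    {"child_id": "A103", "exit_date": date(2008, 8, 30), "exit_type": "reunification"},
    {"child_id": "A104", "exit_date": date(2008, 5, 22), "exit_type": "guardianship"},
]

start, end = date(2008, 1, 1), date(2008, 6, 30)
exits = Counter(r["exit_type"] for r in records if start <= r["exit_date"] <= end)
print(sum(exits.values()), "exits:", dict(exits))
# -> 3 exits: {'reunification': 1, 'adoption': 1, 'guardianship': 1}
```

Note what such a count cannot tell a monitor: nothing here reveals how or why those exits occurred, which is exactly the gap the qualitative methods described above are meant to fill.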

New York City and Illinois provide examples of how different data are used to answer different performance-related questions. In New York City, the Administration for Children's Services (ACS) addresses three areas of contractor performance: agency processes, quality of service, and outcomes for children. ACS uses its own administrative data to measure agency processes and child outcomes, but uses other data sources (e.g., case record reviews, interviews with clients and workers, and field observations) to assess the quality of a contractor's services (Baron, 2003). Similarly, Illinois DCFS uses different data sources to measure outcomes in three key areas: permanency, stability, and family engagement. DCFS relies on data compiled and analyzed by the Chapin Hall Center for Children to measure outcomes related to permanency. To assess stability, the state relies on data collected as part of the AFCARS system. Finally, DCFS looks to the results of various case record reviews to monitor family engagement (McEwen, 2006a).

iii.            Address Data Collection, Communication, and Technology Issues

Researchers have noted that privatized initiatives have placed a premium on access to real-time information to guide case-level decisions, contract monitoring, and system planning (Freundlich & Gerstenzang, 2003; McCullough, 2005). However, there is abundant evidence that many initiatives launched in the 1990s lacked the technology or staff resources to collect or manage data as intended.

Good data systems are a critical part of any privatization effort. Both public agencies and providers need data for operational decisions and successful contract management. The MIS must be able to track performance from a variety of perspectives: client status, service utilization, and service/episode costs linked with case plan goals, treatment, and outcomes. The system must be need-driven, flexible, user-friendly, and capable of generating useful reports for all users (McCullough & Associates, 2005).

However, until quite recently, most public agencies and contractors lacked the infrastructure, data collection tools, and information systems needed to monitor contracts comprehensively. As one study of states' fiscal child welfare reform efforts notes, "Inadequate data on service needs, utilization, costs, performance, and outcomes plague states' attempts to implement child welfare fiscal reforms" (Westat and Chapin Hall Center for Children, 2002, 68). This study examined the management information systems of 23 initiatives in 22 states and found that few initiatives had the information systems necessary to provide timely and adequate data. Systems were unable to measure the impact of the reforms and did not track all features of a program (e.g., service utilization, costs, client status, and outcomes). The systems were rarely compatible across agencies and service systems. This study, along with several others, concluded that significant investments in hardware, software, and training were needed in order to manage and monitor new state reforms.

Investments in the information systems infrastructure needed for comprehensive contract monitoring are required in both the public agencies and the contracting agencies, and such efforts must be coordinated across organizations. The need for coordination in these activities is sometimes overlooked. In a recent QIC PCW listserv request for information about states' use of SACWIS in a privatized setting, several states reported ongoing challenges for private agencies with basic data entry and database access. Many private agencies continue to conduct dual data entry, into the state's SACWIS and into their own case management systems, to record all the information necessary for contracting purposes.[11]

Despite the limitations noted above, it appears that a privatization initiative can improve a state's ability to collect and analyze data over time. In Kansas, for instance, regional foster care providers have developed extensive case management systems to track clients and services, and are working to track costs.[12] One of the state's private providers developed a management information system that compiles data on a daily, weekly, and monthly basis. These data are used to measure performance for each division within the agency on a monthly basis. Each division has clearly established performance goals, and these data are used in monthly meetings to determine whether the agency has achieved those goals (Westat and Chapin Hall Center for Children, 2002). Similarly, another study of privatization efforts across six states found that in five of them, the private agencies over time created the capacity to collect, analyze, and report data at a level that surpassed the public agency's previous capacity (Freundlich & Gerstenzang, 2003).

Issues that must be resolved in planning a monitoring system include the degree to which data systems are shared between the public agency and contractors; the mechanisms used to translate and communicate data into useful reports; and an assessment of the information needed by contractors operating under various risk-sharing contracts.

Contractors in many child welfare privatization efforts have at least limited viewing privileges in the data systems used by their public agency counterparts. In some initiatives, contractors' access to data systems is notably more extensive. In Florida, for example, private agencies with case management responsibilities are required to use the state's data system to manage eligibility determinations and ongoing case management. Shared access to information systems facilitates coordination among private and public agency staff in a number of ways, not the least of which is ensuring that the state is able to meet federal reporting requirements. Theoretically, a shared data system also facilitates the resolution of communication problems and makes it possible for contractors and public agency staff to directly review information from, or identify discrepancies in, their counterparts' systems.

Use of a common data system is not without challenges, however. The state's automated system may or may not support the data collection that will enable the private agency to effectively manage its services and meet all of the requirements of the contract. For example, few state systems are equipped for utilization management, provider network management, or claims, billing, reconciliation, and payments, all core functions required in some private agency contracts. Some do not even contain all the data elements required for performance monitoring.

Florida is a good example of a state wrestling with the challenges that arise when public and private agencies share a data system for some data collection but maintain separate systems for other data. The community-based care agency caseworkers are required to enter data into Florida's SACWIS. Like all private agencies operating under risk-based contracts, each of these agencies also maintains its own data system to manage its business processes and track its own performance. This requires dual data entry, hardly an ideal or cost-effective solution. In 2002, the University of South Florida (USF), as part of its ongoing evaluation of community-based care, recommended a number of steps to strengthen the current system and develop an effective interface between the lead agencies' data systems and the Department's system. At a minimum, USF recommended that DCF and the lead agencies reach agreement regarding the data needed, the specified data format, and the procedures that would be allowed for electronic submission (USF, 2002).

Though data challenges remain, Florida has taken steps to ease the burden. The state, as part of its community-based care initiative, has created a document that provides explicit instructions about the data used for performance measurement. This Performance Measure Methodology Document includes the definition, calculations, data sources, and data processes for each measure. The definition describes what is meant by the measure, while the algorithm explains how it is calculated. The data source identifies who collects and enters the data into the information system. Finally, the data processes describe how the data are used and analyzed, as well as any contract enforcement tied to a particular measure.[13]
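One way to picture such a methodology document is as a structured record that keeps each measure's definition, algorithm, data source, and data process together. The Python sketch below is a loose illustration of that idea, not Florida's actual document or format; the sample measure and its enforcement note are invented.

```python
# Illustrative structure for a performance-measure methodology entry, keeping
# definition, algorithm, data source, and data process together. Hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class MeasureSpec:
    name: str
    definition: str    # what the measure means
    data_source: str   # who collects and enters the underlying data
    data_process: str  # how results are used, including contract enforcement
    algorithm: Callable[[dict], float]  # how the value is calculated

def reentry_rate(counts: dict) -> float:
    """Percent of reunified children re-entering care within 12 months."""
    return 100.0 * counts["reentries_12mo"] / counts["reunifications"]

spec = MeasureSpec(
    name="Re-entry within 12 months",
    definition="Of children reunified in the period, the percent who re-enter "
               "out-of-home care within 12 months of discharge.",
    data_source="Case managers entering placement episodes into the state system",
    data_process="Reviewed quarterly; sustained increases trigger corrective action.",
    algorithm=reentry_rate,
)
print(spec.name, "=", spec.algorithm({"reunifications": 200, "reentries_12mo": 18}), "%")
# -> Re-entry within 12 months = 9.0 %
```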

During focus groups conducted in 2005 to assess Arizona's readiness for privatization of case management, many of the providers and external stakeholders identified data technology as a potential problem area.  Planners of any privatized case management contract will need to assess the public agency's current information technology capacity and identify enhancements that may be required to monitor the performance of contractors.  They will also need to ensure that contract agencies have the technological and human resource capacity to meet specified data collection and reporting requirements. Among the basic questions that should be asked and answered are the following:

  • If we privatize the case management function, what are the implications for the state's SACWIS and the collection and use of data?
  • Will private agency case managers enter data directly into state systems? If not, how will the public agency ensure compliance with all federal and state data reporting requirements and maintain a single case record?
  • What MIS enhancements are required to obtain the real-time information needed to manage and monitor the system?
  • How will all parties verify the integrity of data used to monitor performance, award incentives, or impose sanctions? (McCullough & Associates, 2005)

D.        Staff Training

A final, and extremely important, component of contract monitoring is staff training. Not only are quality assurance efforts expanding and evolving, but staff originally trained as case managers are now assuming contract monitoring functions. Further, as contract expectations focus increasingly on service quality and outcome measures (rather than the delivery of service units), contract monitors need new skills to examine new features of performance. As noted previously and throughout the Topical Papers in this series, partnership and collaboration are a centerpiece of many recent contracts. The training contract monitors received in the past may not have prepared them for their new roles as partners with the contractors they monitor.  This transition may be especially difficult for monitors who came to their positions from previous jobs as case managers.

Consequently, training for contract monitors must go beyond standardization of processes and tools to something more basic: helping staff re-define and clarify their purpose in relation to the private agencies. Traditional compliance-driven monitoring was not concerned with relationship building or problem-solving; at times it was even adversarial and punitive. In contrast, states and private agencies today are striving to operate more like partners, and the desired collaboration is only possible in a climate of trust and openness. For many workers with monitoring experience, it is not always clear how to hold agencies accountable while also partnering with them to improve performance. As one administrator confided, "Our contract monitors struggle with their two hats: trusted, on-your-side helper versus enforcer of contract requirements." At some point, when the data say things aren't working, it is not always clear to contract monitors how far they can or should go to help an agency that is not able to get the results it is being paid to achieve.

Part of the challenge may be a lack of clarity about the nature of the public-private relationship. In looking at the Florida experience, USF sums up the key question that confronts community-based care agencies and the Department: "Are private agencies simply an extension of DCF, or are DCF and the lead agencies business partners?" (USF, 2002, 30). How states and private agencies answer that fundamental question may have far-reaching implications for how contracts are monitored.

While much of the literature addresses the need for training, there is little information about the kinds of training actually offered to contract monitors.  An agency in need of training may participate in training provided through national organizations, or it can look to peers in other agencies, counties, or states that have undergone privatization efforts to learn about their best practices and lessons learned with regard to contract monitoring (Yates, 1998). As with other areas in child welfare, there is a need for ongoing training to address the chronic turnover in child welfare staff and the resulting discontinuity in workers' knowledge and experience.  Florida recently noted that staff turnover is a significant problem that adversely affects the level of expertise in contract monitoring (Office of Program Policy Analysis & Government Accountability, an office of the Florida Legislature, 2008).

Florida has recently undertaken efforts to improve training for its contract monitoring staff.  In 2006, the Department of Children and Families central office surveyed contract monitoring staff to identify their training needs.  Responses were used to design statewide training that focused on essential components of the contract monitoring function, including report writing, changes in community-based care contract requirements, and a recently implemented monitoring tool for children in foster care who receive independent living services (Office of Program Policy Analysis & Government Accountability, an office of the Florida Legislature, 2008).

[ Go to Contents ]

Using the Information Collected

A.    Reports and Feedback

Using the information collected to ensure contract compliance, improve quality, and achieve the agreed-upon outcomes requires user-friendly reports and processes for sharing and learning. This section describes how several states are sharing information across providers and with the public, how often reports are generated, and the kinds of reports that states find to be useful for stakeholders.

i.      How States Share Information from Monitoring

The ability to collect raw data, while essential, is not sufficient to ensure that data are translated into the useful reports that private and public agencies need to fulfill their responsibilities under the contract. Child welfare privatization initiatives have varied in the reporting requirements imposed on private contractors, but research studies have documented a tendency toward over- or under-reporting and a lack of clarity about the purpose of various reports. There has also been a growing trend to share findings from performance reports broadly: public agencies have posted performance data on state websites, allowing comparisons among private agencies, and between public and private agencies, on key performance indicators or outcome measures.

Kansas, Florida, and the District of Columbia are among the jurisdictions that have worked to make child welfare performance transparent. In Kansas, performance data are available on the Internet, including case review information as well as annual performance reports for foster care services, adoption services, and family preservation services.[14]  In Florida, CBC agencies are able to compare their performance to all other CBCs and to the statewide average for each outcome area; the Scorecard is updated monthly and posted on the state website. Similarly, the D.C. Child and Family Services Agency (CFSA) has a Scorecard on its website that contains performance data on CFSR indicators and on various other benchmarks established under a lawsuit (LaShawn A. v. Williams) that had placed the city under a receivership. The Scorecard posts the performance of all agencies with foster care contracts side by side with the performance of CFSA staff who have similar responsibilities.[15]

Creating data reports for contractors that link state child welfare administrative data to data provided by contractors can also be a useful tool.  New Mexico, for example, collects data from private service providers on the children they have served and runs them against the state's own SACWIS data.  The Children, Youth and Families Department (CYFD) then produces reports for contractors that include more specific information on the clients they have served.  For instance, for a provider that offers an intensive family support program intended to prevent further CPS involvement, CYFD provides information about the families that come back into the system.  Interviewees in New Mexico report that this process is informative for contractors and also helps to strengthen existing relationships between contractors and CYFD.
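
Mechanically, producing such a report amounts to matching a contractor's client roster against the state's administrative records. The sketch below shows one way this could be done in Python with pandas; the source does not describe CYFD's actual implementation, and all column names, identifiers, and dates are invented for illustration.

```python
# Minimal sketch of linking a provider's client roster to state
# administrative (SACWIS-style) data to flag families that returned to
# the system after services ended. All names and dates are invented.
import pandas as pd

provider = pd.DataFrame({
    "client_id": [101, 102, 103],
    "service_end": pd.to_datetime(["2008-01-15", "2008-02-01", "2008-03-10"]),
})
sacwis = pd.DataFrame({
    "client_id": [101, 103],
    "new_referral": pd.to_datetime(["2008-04-02", "2008-02-20"]),
})

# Left-join the roster to later CPS referrals recorded in SACWIS.
merged = provider.merge(sacwis, on="client_id", how="left")

# Keep only clients with a new referral dated after services ended.
returned = merged[merged["new_referral"] > merged["service_end"]]
print(returned)
```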

ii.      How Often Reports and Feedback are Produced

How often do data need to be collected and reported?  There is no right or wrong answer to this question. Child welfare poses a challenge for assessing outcomes because outcomes can take a long time to occur; outcomes like time to adoption, for instance, must be observed over a period of several years.  Most contracts today therefore include both long-term outcomes and more immediate performance measures, tracked monthly, that are thought to be associated with long-term results. For example, a contract with timely reunification as a long-term outcome might also set monthly targets for child/family visitation and for contact between workers and parents, interim measures that have been found to be correlated with long-term success.

Alternatively, agencies can construct interim targets for long-term outcomes. Wulczyn (2007) provides an example of how this works in practice.  The total time period under examination is two years, but interim data are gathered every six months (though he notes that the interim periods can be longer or shorter).  Each interim period is given a target, which is scaled to the larger target.  If, for example, the agency expects there to be 831 exits from care in two years, it may be reasonable to assume that at least 25 percent of them would occur in the first six months (25 percent of the total time interval).
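
The arithmetic behind these interim targets is simple pro-rating, as the following sketch shows. It uses Wulczyn's figure of 831 expected exits over two years; the proportional scaling rule is one straightforward reading of his example, not a prescribed formula.

```python
# Pro-rate a two-year exit target into six-month interim targets,
# following Wulczyn's (2007) example of 831 expected exits in two years.
total_target = 831    # expected exits from care over the full period
total_months = 24
interim_months = 6    # interim reporting period

# Simplest scaling: each interim target is proportional to elapsed time.
for period in range(1, total_months // interim_months + 1):
    elapsed = period * interim_months
    target = round(total_target * elapsed / total_months)
    print(f"By month {elapsed}: at least {target} exits expected")
# By month 6 this yields about 208 exits (25 percent of 831).
```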

Contracts should explicitly define the data reporting requirements, since providers need to include these costs in their budget proposals. As an example, in a recent renewal of a statewide performance-based contract for foster care recruitment, placement matching, and support, the contract specifies how the public agency will monitor performance on an ongoing basis and stipulates the contractor's responsibility for submitting the following reports on a quarterly basis:

  • Number of resource families licensed as compared to goals established within each service area/community.
  • Number of families who leave each quarter per service area and reason.
  • Number of resource families who are interacting (phone or face-to-face) with birth parents of children in care and the nature and frequency of interaction.
  • Number of licensed resource families that have not been selected for a placement match within one (1) year of the issuance of the license and reasons for family not being selected for a match.
  • Progress toward, and barriers to, achieving each area's recruitment plan.
  • The number of foster, pre-adoptive, and adoptive (post-finalization) families who have received support, and a description of the general nature of the support provided.
  • Reports of findings from focus groups with resource families and with DHS staff.[16]

iii.      The Kinds of Reports that are Useful to other Stakeholders

In general, reports are primarily used as tools for the agency and contractors.  However, data can also be useful to other stakeholders, such as the courts, citizen review boards, and legislators.  Reports for these audiences are similar to other reports produced, but should be tailored to the particular audience.  Public agencies can also use meetings with stakeholders as opportunities to share information about how the state agency and its contractors are performing.

O'Brien and Watson suggest three different types of reports from automated data systems that are useful to states:

  • Outcomes reports, which focus on client outcomes, such as lengths of stay for children in care.
  • Practice reports, which focus on key practice issues that can be gleaned from automated or other reporting mechanisms, such as the proportion of cases in which a family team meeting was held.
  • Compliance reports, which provide information on the extent to which an agency complies with requirements, such as the percent of investigations completed within a given timeframe (O'Brien and Watson, 2002, 22).

They also suggest some report formats that can be helpful, including:

  • Reports that allow easy comparison across regions, local offices, and units.
  • Reports on exceptions, such as reports flagging cases where investigation dispositions are past due (a minimal sketch of such a report follows this list).
  • Early warning reports identifying cases that do not meet requirements prior to a review (O'Brien and Watson, 2002, 22).
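
The following sketch illustrates the exception-report format described above: a simple filter over case data that flags overdue investigation dispositions. The case records, field names, and the 30-day deadline are invented for illustration and do not reflect any particular state's rules.

```python
# Minimal sketch of an exception report flagging investigations whose
# dispositions are past due. Records, field names, and the 30-day
# deadline are hypothetical.
from datetime import date, timedelta

cases = [
    {"case_id": "A-101", "opened": date(2008, 5, 1), "disposition": None},
    {"case_id": "A-102", "opened": date(2008, 6, 20), "disposition": date(2008, 7, 10)},
    {"case_id": "A-103", "opened": date(2008, 4, 15), "disposition": None},
]

deadline = timedelta(days=30)
today = date(2008, 7, 15)

# A case is an exception if no disposition exists past the deadline.
overdue = [c for c in cases
           if c["disposition"] is None and today - c["opened"] > deadline]
for c in overdue:
    print(f"OVERDUE: case {c['case_id']} opened {c['opened']}")
```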

Reports should also incorporate data from sources beyond automated data systems, such as case record reviews and stakeholder input.  For program administrators, ideal reports would include information on both the outcomes and the casework practice of high- and low-performing agencies, to promote practice changes when warranted.  These data can be combined in reports to analyze a system's strengths and weaknesses, providing a more holistic view of the system's functioning.

B.    Performance Issues and Remedies

The contract should specify clear procedures for addressing performance issues and remedies for contract noncompliance. The public agency and the contractor should share a mutual understanding about the consequences of any deficiencies identified in the course of contract monitoring.

Because private agencies want the business and want to continue providing services, they are likely to meet or exceed performance expectations and provide all the information the public agency needs. In some cases, however, performance problems occur: the private agency may not provide the agreed-upon services, may not submit reports in a timely way, or cannot be reached for information. When these situations arise, it is critical to be able to rely on contract provisions that clearly state how the public and private agency will proceed if performance is not satisfactory (Freundlich, 2007).

Technical assistance, performance triggers, and fiscal penalties are methods that public agencies use to promote contractor compliance and address contractor deficiencies.  In fact, there is a continuum of steps that public agencies can take to respond to performance problems:

  • Preventive activities that may include referral conferences and contract review meetings;
  • Discussions and problem-solving with the private agency program staff regarding performance expectation issues as they arise;
  • Utilization of the chain of command in both the public and private agency to address performance issues;
  • Withholding of funds when performance problems arise (such as failure to submit required reports);
  • Corrective action plans with timeframes for remedying poor performance; and
  • Termination of the contract, with arrangements for another agency to step in and provide the services (Freundlich, 2007).

Performance-based contracts can be written with triggers in response to deficiencies found during the contract monitoring process. For example, when phasing in performance measures in Illinois, new contracts with foster care agencies stipulated that agencies must achieve permanency within one year for 24 percent of the existing caseload. Reviews occurred twice a year, and during that first year, intake at some agencies was suspended due to insufficient performance. This effectively sent the message that agencies would, in fact, be required to abide by the terms of their contracts. In subsequent years, the required permanency rate was increased. Agencies are now reviewed on an annual basis: the public agency ranks all agencies from lowest to highest permanency placement rates, and those with the highest rates are the most likely to receive the guaranteed intake, which is now the only way of sustaining their revenue (McEwen, 2006).
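
The trigger-and-ranking logic described above is straightforward to express in code. The sketch below uses the 24 percent first-year threshold reported for Illinois; the agency names and permanency rates are invented, and the code illustrates the general approach rather than Illinois' actual system.

```python
# Minimal sketch of a performance trigger and ranking like the Illinois
# scheme described above: agencies below the permanency threshold have
# intake suspended, and all agencies are ranked by permanency rate.
# Agency names and rates are invented for illustration.
THRESHOLD = 0.24  # permanency within one year, first-year contracts

permanency_rates = {"Agency A": 0.31, "Agency B": 0.19, "Agency C": 0.27}

for agency, rate in permanency_rates.items():
    if rate < THRESHOLD:
        print(f"{agency}: intake suspended (rate {rate:.0%})")

# Rank from highest to lowest; highest-ranked agencies get intake first.
ranked = sorted(permanency_rates.items(), key=lambda kv: kv[1], reverse=True)
print("Intake priority:", [agency for agency, _ in ranked])
```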

As a result of the CFSR process, some states are requiring providers to develop and then implement program (or performance) improvement plans when performance falls below a certain threshold. Iowa is a good example.  The statewide contractor responsible for recruitment, licensing, training, and placement matching and support is required by the Department of Human Services to develop a Performance Improvement Plan (PIP) any time performance falls more than ten (10) percentage points below any of the specified Performance Measure targets. If performance remains more than ten percentage points below target after a 6-month period of implementing the PIP, the contractor is required to develop and submit for approval another PIP, which continues for a minimum of six months or until the last day of the contract. If a second PIP is required, the contractor must dedicate one percent of its base pay for the second PIP period exclusively to activities and actions related to improvement in the area or areas of identified need.[17]
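
Read as a rule, the Iowa provision triggers a PIP whenever measured performance falls more than ten percentage points below a target, with escalation if the shortfall persists after six months on a PIP. The sketch below encodes that reading; the measure values are invented and the function is an illustration, not part of the actual contract.

```python
# Minimal sketch of the Iowa-style PIP trigger described above: a PIP is
# required when performance falls more than 10 percentage points below
# target, and a second PIP (with 1% of base pay dedicated to improvement)
# if the shortfall persists after six months. Values are invented.
TOLERANCE = 10.0  # percentage points below target before a PIP is required

def pip_status(target: float, actual: float, months_on_pip: int = 0) -> str:
    shortfall = target - actual
    if shortfall <= TOLERANCE:
        return "no PIP required"
    if months_on_pip >= 6:
        return "second PIP required; dedicate 1% of base pay to improvement"
    return "PIP required"

print(pip_status(target=85.0, actual=80.0))                   # no PIP required
print(pip_status(target=85.0, actual=70.0))                   # PIP required
print(pip_status(target=85.0, actual=72.0, months_on_pip=6))  # second PIP
```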

Corrective action and performance improvement plans are typically created by the provider with input from the public agency and serve as a roadmap to correcting any contract performance issues. 

In New York City, the Agency Program Assistance Unit within the public agency develops Corrective Action Plans based on an agency's EQUIP score, a compilation of performance data pulled from several sources, including administrative data, case record reviews, and field observations (see text box above).

In Kansas, these are referred to as Local Action Plans. When contract issues related to outcome performance arise, the Kansas Department of Social and Rehabilitation Services (SRS) first discusses the concerns with the regional contractor. The two work together to identify any barriers that may be causing the concern and note any resources available to address them; all discussions about the concern and efforts to address it are carefully documented.  Once consensus about the issue is reached, the SRS regional office may decide that the provider needs focused consultation and technical assistance, and may ask the provider to prepare a written Local Action Plan.  This plan is a tool for identifying the problem and the measures needed to correct it, and includes specific information about the staff responsible for carrying out the plan and the timeframe for completion. It serves as a written agreement between SRS and the provider.  The SRS region monitors the Local Action Plan and informs the provider once the plan has been successfully completed.  If the provider is unable to complete the plan, the SRS region may move to a more structured resolution process.[18]

One study of professional services contracting (Fisher et al., 2006) cautions against waiting until performance is in the "red zone" before taking action. The study found that it is important to monitor trends and act when performance starts to dip, even if it is still at an acceptable level. Early action offers the opportunity to provide technical assistance to improve contractor performance. It also matters because there will be situations in which a provider does what the contract requires (provides expected services at expected levels) but does not achieve performance targets.  Early examination of performance issues can serve as a reality check for both private and public agencies, because the public agency may have set unrealistic targets or provided insufficient supports in contracts to enable contractor success.

As an example, an initiative in Florida (one of the three state initiatives funded under the QIC PCW) has set up such an early warning system for its new performance-based contract and quality assurance initiative.  When potential issues in performance achievement by a case management agency are identified, the lead agency provides free technical assistance for a period of time.  If problems persist and further technical assistance is required, that service comes at a cost to the private case management agency.

According to state stakeholders, New Mexico's CYFD takes a supportive approach to contract monitoring. If CYFD staff see problems when they visit providers, they will offer technical assistance. They also offer training: CYFD has a collaborative effort with a university to offer classes, and if CYFD monitors think a provider could benefit, they will suggest that it attend. Consistent with this supportive approach, CYFD cannot sanction a provider and recoup funds. In egregious cases, it can cancel a contract, but the agency indicates that this doesn't happen very often. Contracts are negotiated annually, at which point CYFD can decide not to renew a contract.

From a legal standpoint, it is helpful to have an agreement for resolving disputes before they go to the courts. Lawyers can be very helpful in structuring a contract, but ideally, contract monitoring and contractor performance issues should proceed smoothly and not require further legal services to resolve disputes.  Clear, up-front expectations and a collaborative relationship based on the shared goals of providing quality services and the best possible outcomes for children and families are the best way to assure a constructive partnership between public agencies and contractors.

[ Go to Contents ]

Conclusions and Lessons Learned

The more that public agencies depend on private agencies to deliver services, especially case management services to children and families, the more sophisticated the quality assurance and contract monitoring systems should be. Planners need to carefully think through the monitoring process, drawing on the lessons learned from other communities that have struggled with finding the right balance between oversight and innovation. What is required is a balanced approach that allows the public purchaser to monitor for results while also granting the provider the flexibility to innovate.

There is no single path to strong quality assurance. Many states have significantly expanded their oversight of contracted services, collecting additional information and collecting it from more sources. While it is important to set expectations, it can be challenging to know what to do when expectations are not met, especially in this new atmosphere of enhanced collaboration in service provision between public and private agencies.

A review of the literature and state experiences to date highlights the following lessons about contract oversight and monitoring of child welfare services:

  • The support of upper management is critical. An effective contract monitoring system requires buy-in at many levels, but support must start at the top of the organization in order to obtain the resources needed, provide support to staff as they transition to an outcome-focused system, and send a consistent message to staff, contractors and potential contractors, and the families they serve. 
  • Understand the link between theory, program specification, and desired outcomes, and convey that understanding to providers. The focus on outcomes represents a new way of thinking for agency staff as well as contractors. What is the problem the agency is trying to solve? And what program components and actions will lead to the desired results? Public agencies need to meet regularly with contractors and genuinely engage them in planning and problem solving. Discussions should include selecting outcomes and goals and reviewing existing information and data on current performance (OMB Office of Federal Procurement Policy, 2008; O'Brien, 2005).
  • View contract monitoring as part of continuous quality improvement. If contract monitoring is going to be effective, it must be integrated under an agency's QA umbrella, and the focus must be broadened beyond compliance to include activities intended to stimulate and reinforce improvement. This may require integration of previously separate staff functions or enhanced communication across agency divisions. Key departments should be in constant communication with one another, including program, information technology, and accounting units (Meezan and McBeath, 2004).
  • Be open to re-thinking outcomes, expectations, and how contractors are judged.  Many public and private agencies have realized mid-way through a contract that outcomes and performance measures were set at unrealistically high levels. One effective way to prevent this is to examine outcomes at regularly scheduled performance review meetings between the agency and the contractor.  At a minimum, public agencies should use contract renewal negotiations to revise expectations based upon experience and research evidence.
  • Be prepared to make changes as the system matures. Initial successes may leave more challenging cases in the system or may reveal gaps in services. For example, Illinois initiated performance-based contracting for child welfare services in 1997 and was successful in moving thousands of children to permanency, but problems remained with regard to placement instability and the complexity of needs of harder-to-serve youth. Having achieved a reduction in cases, the state is changing performance-based contracts to emphasize best practices and to redirect funds in order to reduce targeted caseload ratios (Kearney and McEwen, 2007).
  • Collect data that are useful, and use the data.  Based on the identified linkages between program components and outcomes, public agencies are increasingly reaching out to contractors to jointly select meaningful and realistic outcome measures and to design data reporting requirements around those measures. While other data may be required for compliance with state and/or Federal reporting mandates, avoid collecting unnecessary data.  Working closely with contractors also helps to ensure that data definitions are consistent and that data are seen as valid and reliable by both agencies and providers. Finally, use the data to monitor progress and suggest improvements by comparing performance across contractors and jurisdictions as well as over time.
  • Invest sufficient resources, especially in monitoring staff and staff training. There is a growing realization that contract management and monitoring is complex work, requiring that agencies allocate sufficient resources in both the contracting and program offices to do the job well (OMB Office of Federal Procurement Policy, 2008).
  • Remember that contractors are partners and share the agency's goal of achieving the best outcomes for children and families.  Traditionally, contract monitors were expected to maintain an arm's-length distance from contractors, but that approach may not work for today's contracting situations, especially performance-based contracting. It is in the best interest of all parties that the contract be successful.  A team approach is essential and will require ongoing work to sustain (OMB Office of Federal Procurement Policy, 2008).

[ Go to Contents ]

Reference List

Armstrong, M., Jordan, N., Kershaw, M. A., Vargo, A. C., Wallace, F., and Yampolskaya, S.  (2004). Statewide Evaluation of Florida's Community-Based Care: 2004 Final Report.  Tampa, FL:  University of South Florida.

Auditor General. (2001). Monitoring of community-based care providers of child welfare services by the Department of Children and Family Services. Operational audit (Report No. 02-033). Tallahassee, FL: State of Florida Auditor General.

Baron, J.  (2004). Reform in Action.  The Future of Children, 14(1), 10-22.

Eggers, W.  (1997). Performance Based Contracting: Designing State of the Art Contract Administration and Monitoring Systems.  How to Guide #17.  Accessed at:  http://www.reason.org/htg17.pdf

Fisher, S.L., M.E. Wasserman, and P.P. Wolf (2006). Effectively Managing Professional Services Contracts: 12 Best Practices. Competition and Choice Series, IBM Center for the Business of Government.  Accessed at  http://www.businessofgovernment.org/pdfs/FisherReport.pdf.

Florida Department of Children and Families.  (2005). Community-Based Care Lead Agency Subcontracting Guidelines.  Accessed on July 9, 2008 at http://www.dcf.state.fl.us/cbc/docs/cbc_lead_agency_subcontracting_guidelines_09-12-2005.pdf.

Friedman, M.  (1997). A guide to developing and using performance measures in results-based budgeting. Retrieved July 18, 2006 from the Finance Project web site: http://www.financeproject.org/Publications/measures.html (Cited in Pal-Tech and University of Kentucky, 2006.)

Freundlich, M. and S. Gerstenzang. (2003). An Assessment of Privatization of Child Welfare Services: Challenges and Successes. Washington, DC: CWLA Press.

Freundlich, M. (2007). Dollars and Sense:  A Guide to Achieving Adoptions Through Public-Private Contracting.  Retrieved July 13, 2008 from  http://www.adoptuskids.org/images/resourcecenter/dollarsandsense.pdf.

Kearney, K. and E. McEwen. (2007). Striving for Excellence: Extending Child Welfare Performance-Based Contracting to Residential, Independent, and Transitional Living Programs in Illinois.  Professional Development: The International Journal of Continuing Social Work Education, 10(3), Winter.

Kansas Department of Children and Family Services. (2008). Policy and Procedure Manual.  http://www.srskansas.org//CFS/cfp_manuals/ppmepmanuals/ppm_manual/ppm_sections/SECTION%208000.htm

Lee, E., Allen, T., and Metz, A. (2006). Literature Review on Performance-Based Contracting and Quality Assurance.  University of Kentucky & Planning and Learning Technologies, Inc. Retrieved July 16, 2008 from University of Kentucky, Quality Improvement Center on the Privatization of Child Welfare Services Web site:  http://www.uky.edu/SocialWork/qicpcw/documents/QICPCWPBCLiteratureReview.pdf

Martin, L.  (2002). Making Performance-Based Contracting Perform:  What the Federal Government can Learn from State and Local Governments.  New Ways to Manage Series.  Accessed at  http://www.businessofgovernment.org/pdfs/Martin2Report.pdf.

McCullough, C. (2004). Financing and Contracting Practices in Child Welfare Initiatives and Medicaid Managed Care:  Similarities and Differences. Washington, DC: Child Welfare League of America, Inc. Retrieved July 7, 2008, from  http://www.cwla.org/programs/bhd/mhpubfinancing.htm

McCullough and Associates.  (2005). Child Welfare Privatization.  Unpublished report prepared for the Arizona Department of Economic Security, Division of Children, Youth, and Families.  Accessed on July 7, 2008 from  https://www.azdes.gov/dcyf/cmdps/cps/pdf/Final%20Report.pdf.

McCullough & Associates & Freundlich, M. (2007). The Impact and Risk of Child Welfare Privatization. Unpublished report prepared for the Arizona Department of Economic Security.

McEwen, E. (2006a).  Enhancing Performance in Contracts: Outcomes and Monitoring.  Teleconference hosted by the National Child Welfare Resource Center on Organizational Improvement. Accessed at http://tatis.muskie.usm.maine.edu/pubs/pubdetailWtemp.asp?PUB_ID=T113006.

McEwen, E. (2006b). Performance-Based Contracts as a Strategy for Improving Child Welfare: Lessons Learned in Illinois. Unpublished report. 

Meezan, W. & McBeath, B. (2004). Nonprofits Moving to Performance-Based, Managed Care Contracting in Foster Care:  Highlights of Research Findings.  Retrieved July 14, 2008 from the Michigan Nonprofit Association web site:  http://action.mnaonline.org/pdf/snapshot02.pdf

Nightingale, D.S. & Pindus, N. (1997).  Privatization of public social services: A background paper.  Washington, DC: Urban Institute.  Retrieved August 2, 2007 from  http://www.urban.org/url.cfm?ID=407023&renderforprint=1

O'Brien, M. and Watson, P.  (2002). A Framework for Quality Assurance in Child Welfare.  Portland, ME:  National Child Welfare Resource Center for Organizational Improvement.

O'Brien, M. (2005). Performance-Based Contracting in Child Welfare (draft). Portland, ME: National Child Welfare Resource Center for Organizational Improvement.

Office of Federal Procurement Policy, Office of Management and Budget, Executive Office of the President.  (1994).  A Guide to Best Practices for Contract Administration. Washington, DC:  Author.

Office of Federal Procurement Policy.  Seven Steps to Performance-Based Service Acquisition. Washington, DC. Retrieved March 2, 2008 from:  http://acquisition.gov/comp/seven_steps/library.html.

Office of Program Policy Analysis & Government Accountability, an office of the Florida Legislature. (2006). Additional Improvements are Needed as DCF Redesigns Its Lead Agency Oversight Systems, Report No. 06-05.  Tallahassee, FL:  Author.

Office of Program Policy Analysis & Government Accountability, an office of the Florida Legislature. (2008). DCF Improves Contract Oversight of Lead Agencies; Fiscal, Quality, and Performance Assessment Are Undergoing Change, Report No. 08-39.  Tallahassee, FL:  Author.

Petr, C. and Johnson, I. (1999). Privatization in Kansas: A cautionary tale. Social Work, 44, 263-267.

Texas Department of Family and Protective Services.  (2008). Contract Monitoring.  Accessed on July 7, 2008 at  http://www.dfps.state.tx.us/documents/prevention_and_early_intervention/pdf/contract-monitoring-fy04.doc.

U.S. General Accounting Office. (1997). Privatization: Lessons learned by state and local governments (Publication No. GAO/GGD-97-48). Washington, DC: Author. Retrieved July 14, 2008 from  http://www.gao.gov/archive/1997/gg97048.pdf.

U.S. General Accounting Office. (1998). Privatization: Questions state and local decision-makers used when considering privatization options (USGAO/GGD-97-98). Washington, DC: Government Printing Office.

U.S. General Accounting Office. (1999). Agency performance plans: Examples of practices that can improve usefulness to decision makers. Washington, DC: Government Printing Office.

U.S. Department of Health and Human Services, Children's Bureau. Child and Family Services Reviews Fact Sheet. Accessed on July 9, 2008 at  http://www.acf.hhs.gov/programs/cb/cwmonitoring/general_info/fact_sheets/index.htm.

Watt, W., R. Porter, L. Renner, and L. Parker. (2007). Maintaining Positive Public-Private Partnerships in Child Welfare: The Missouri Project on Performance-Based Contracting for Out-of-Home Care. Professional Development: The International Journal of Continuing Social Work Education, 10(3), 49-57.

Westat and Chapin Hall Center for Children, University of Chicago.  (2002). State Innovations in Child Welfare Financing. Retrieved from  http://aspe.hhs.gov/hsp/CW-financing03/report.pdf

Wulczyn, F. (2007). Monitoring Child Welfare Programs: Performance Improvement in a CQI Context. Chicago, IL: Chapin Hall Center for Children, University of Chicago.

Yates, J. (1998). Managing the Contracting Process for Results in Welfare Reform. Welfare Information Network, 2(13).  Accessed at http://76.12.61.196/publications/contractissue.htm.

[ Go to Contents ]


Endnotes

[1]  Personal communication with Crystal Collins-Camargo, Director, Quality Improvement Center on the Privatization of Child Welfare Services.

[2]  The CFSR includes an assessment of the state's quality assurance system, specifically Item 30 (standards to ensure quality services and ensure children's safety and health) and Item 31 (an identifiable QA system that evaluates the quality of services and improvements).  For more information about findings from the first round of CFSRs, go to: http://www.acf.hhs.gov/programs/cb/cwmonitoring/results/genfindings04/ch1.htm

[3]  Section 479A of the Social Security Act.

[4]  1974 (P.L. 93-247). This Act was amended several times and was most recently amended and reauthorized on June 25, 2003, by the Keeping Children and Families Safe Act of 2003 (P.L. 108-36).

[5]  Interview with Maryellen Bearzi, New Mexico Children, Youth, and Families Department. July 2, 2008.

[6]  See http://dhfs.wisconsin.gov/bmcw/partnership/INDEX.HTM.

[7]  Comments on the QIC PCW listserv by Linda Davis, member of the Milwaukee Partnership Council.

[8]  Interview with Gino De Salvatore, Cornerstone, Inc. June 19, 2008.

[9]  Starting in July 2008, New York City implemented the Improved Outcomes for Children (IOC) initiative.  IOC is a series of reforms for Foster Care and Preventive Services designed to strengthen the work of the Administration for Children's Services and its partner agencies.  One of IOC's reforms is a new performance monitoring system, including a new provider agency evaluation tool called Scorecard. Scorecard builds on the EQUIP system and will include a performance scorecard for each agency, detailing each agency's performance in the areas of safety, permanency, well-being, foster parent support, and community and cultural competency.  For more information see: http://www.nyc.gov/html/acs/html/about/ioc_initiative_faqs.shtml

[10]  Personal communication with Tina Rutherford, Franklin County, Ohio.

[11]  For more information about the listserv exchange, see http://www.uky.edu/SocialWork/qicpcw/documents/SACWISThemes0907.pdf

[12]  Sherry Love, VP/Chief Clinical Officer, KVC Clinical Health Center, Olathe, KS. Presentation at the 2008 Child Welfare League of America national meeting.

[13]  For more information go to: http://www.dcf.state.fl.us/cbc/docs/CBC_Performance_Measure_Methodology_Doc_11-27-07.pdf  

[14]  http://www.srskansas.org/CFS/QA/qamain.htm for case review information and http://www.srskansas.org/CFS/datareports08.html for program reports.

[15]  For more information go to: http://cfsa.dc.gov/cfsa/frames.asp?doc=/cfsa/lib/cfsa/may_2008_scorecard_-_contracted_agencies__07-31-08_final_.pdf to view reports.

[16]  Contract Number BDPS-07-018 between the Iowa Department of Human Services and Four Oaks Family and Children's Services.

[17]  Contract Number BDPS-07-018 between the Iowa Department of Human Services and Four Oaks Family and Children's Services.

[18]  http://www.srskansas.org//CFS/cfp_manuals/ppmepmanuals/ppm_manual/ppm_sections/SECTION%208000.htm
