Toward Understanding Homelessness: The 2007 National Symposium on Homelessness Research. Accountability, Cost-Effectiveness, and Program Performance: Progress Since 1998. Case Study: The Community Shelter Board, Columbus, Ohio

03/01/2007

Since 1997 the Community Shelter Board has conducted annual program evaluations for the Columbus and Franklin County Continuum of Care Steering Committee. The Steering Committee has used evaluations of renewing projects to make ranking decisions, adjust funding awards, and monitor program performance. The program evaluation considers client characteristics, program utilization and outcomes, program design and implementation, and program costs. The evaluation compares planned results, as described in the prior application, with the actual results obtained, and each program is also assessed for compatibility with local priorities and overall community impact. The data are obtained from HUD Annual Progress Reports (APRs), interviews with providers, and on-site program visits. Over time, the Steering Committee began tracking and comparing housing outcomes for all programs, as well as comparing program costs (per household served and per housing unit provided). Because the process is tied only to the HUD application and is not part of the HUD contracting process, it provides more control than a purely voluntary process but less than performance-based contracting. (See Exhibit 3.)

Exhibit 3. Summary of program evaluations conducted by the Columbus and Franklin County Continuum of Care Steering Committee, 1997-2006

                Programs Evaluated                  Performance Rating
Year   Permanent    Transitional   Services    High   Medium   Low   Not
       Supportive   Housing        Only                              Funded
       Housing
1997       4            9             1          7      6       1      1
1998                    5                        1      3       1
1999       2            3             3          1      3       4      1
2000       6            4             1          3      6       2
2001       0            7             0          1      4       2      2
2002       1            3             0          3      0       1
2003       4            4             2          5      3       2      2
2004       1            2             0          0      1       2      2
2005       5            2             0          6      0       1
2006      10            3             0         12      0       1

Over this 10-year period, the evaluation process has been modified to better address community needs, respond to best practices, and comport with HUD funding requirements. The impact of using data to inform community funding decisions has been profound:

  1. Overall program performance has increased. Programs achieve better housing outcomes, maintain higher occupancy, and serve more challenging clientele.
  2. The inventory of programs has shifted toward permanent supportive housing: 91 percent of beds in 2006, versus 69 percent in 1997.
  3. Community confidence in program accountability and results has increased.

As a result of poor program performance, the Steering Committee ended funding for eight transitional housing and supportive services-only programs. Additionally, three programs converted from transitional housing to permanent supportive housing. The conversions occurred after the Steering Committee determined that HUD continuum-of-care resources could be allocated on a priority basis to programs that focus on both (1) high-need clients (i.e., those with long histories of homelessness, severe disabling conditions, and limited income) and (2) improved housing outcomes for those clients. Clients with low needs (i.e., those with fewer barriers to housing placement, less severe disabling conditions, and/or more stable income) were diverted to housing placement services and community-based services that were both more effective at meeting their needs and less expensive to the community.

The Steering Committee established a priority for effective and innovative housing service delivery, expressed as providing housing and services for those with the greatest needs and the greatest difficulty accessing the current homelessness service system. Monitoring of program admission and client selection practices has been particularly important during evaluation to determine how programs serve persons with special needs, demonstrate proactive inclusion and non-restrictive housing admission requirements, and practice expedited admission processes. Programs that operate in a more selective manner, such as by requiring multiple interviews, mandating pre-admission drug testing, and/or restricting admission of persons with criminal histories, disadvantage those with histories of chronic homelessness and multiple barriers; such programs are rated lower in performance. Based on these provider ratings, HUD resources can be prioritized for the most difficult-to-house homeless persons.

The Steering Committee has defined program occupancy as one measure of cost-effectiveness: average monthly occupancy over the 12-month review period should be at least 95 percent. Low occupancy can indicate many program problems, including offering a program that is not desired or needed by homeless persons, selective admission practices, and/or poor property management resulting in slow unit turnover. By evaluating occupancy, the Steering Committee pushed providers to adjust their practices to ensure that the precious resource of housing was available to homeless persons on a timely basis.
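To make the occupancy standard concrete, the following sketch shows how the calculation might be performed. It is a minimal, hypothetical Python example: the 95 percent threshold and the 12-month averaging window come from the text above, but the function names, data layout, and example figures are illustrative assumptions, not CSB's actual tooling.

```python
# Minimal sketch of the occupancy standard described above.
# The record layout is hypothetical; only the 95 percent threshold
# and the 12-month averaging window come from the text.

OCCUPANCY_TARGET = 0.95  # average monthly occupancy must be at least 95 percent

def average_occupancy(occupied_by_month, total_units):
    """Average monthly occupancy rate over the review period.

    occupied_by_month -- monthly counts of occupied units (12 entries
                         for an annual review)
    total_units       -- number of units the program operates
    """
    rates = [occupied / total_units for occupied in occupied_by_month]
    return sum(rates) / len(rates)

def meets_occupancy_standard(occupied_by_month, total_units):
    return average_occupancy(occupied_by_month, total_units) >= OCCUPANCY_TARGET

# Example: a 20-unit program with a slow spring lease-up (invented data).
occupied = [20, 20, 19, 18, 17, 19, 20, 20, 20, 20, 19, 20]
print(round(average_occupancy(occupied, 20), 3))  # 0.967
print(meets_occupancy_standard(occupied, 20))     # True
```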

Because HUD only recently defined housing stability measures (as opposed to allowing programs to self-define outcomes), the local Steering Committee had to define the measure and assign a performance target. The Steering Committee established that, because all HUD-funded programs were aimed at addressing the needs of homeless persons, housing stability must be a primary outcome for each program. This shift is evident when comparing residential stability goals in the late 1990s to the most recent period.

For example, a Shelter Plus Care provider was operating under these agency-designed residential stability goals during 1998-99:

  1. 50 percent of initial participants will maintain continuous sobriety and active participation in all program components for at least their first 12 months.
  2. 50 percent of the single women clients who had children placed in foster care prior to entry into the program will regain custody within 12 months of program entry.
  3. 100 percent of clients will develop quarterly goals for independent living skills.

In 2006, this same program was required by the Continuum of Care Steering Committee to meet the following residential stability goals:

  1. There is evidence in the APR that at least 80 percent of persons served during the evaluation period remain in the permanent supportive housing project or exit and move into permanent housing, where the client has control of the housing.
  2. The average length of stay for persons living in permanent supportive housing is at least 12 months.
  3. The project has met its housing stability goals for the APR period being evaluated.
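The first two of these goals can be checked mechanically from APR-style data. The sketch below is a hypothetical illustration in Python: the 80 percent retention threshold and the 12-month average stay come from the goals above, while the record fields, destination coding, and example values are assumed for the example.

```python
# Hypothetical sketch of the 2006 residential stability checks.
# The 80% retention threshold and 12-month average stay come from
# the goals above; the record layout is assumed for illustration.

RETENTION_TARGET = 0.80      # goal 1: remain in PSH or exit to permanent housing
AVG_STAY_TARGET_DAYS = 365   # goal 2: average stay of at least 12 months

def housing_stability_rate(clients):
    """Share of persons served who remained in the project or exited
    to permanent housing that the client controls."""
    stable = sum(1 for c in clients
                 if c["still_enrolled"] or c["exit_destination"] == "permanent")
    return stable / len(clients)

def average_stay_days(clients):
    """Average length of stay, in days, across all persons served."""
    return sum(c["days_in_program"] for c in clients) / len(clients)

# Example APR-style records for a small program (invented data).
clients = [
    {"still_enrolled": True,  "exit_destination": None,        "days_in_program": 500},
    {"still_enrolled": True,  "exit_destination": None,        "days_in_program": 600},
    {"still_enrolled": False, "exit_destination": "permanent", "days_in_program": 420},
    {"still_enrolled": False, "exit_destination": "permanent", "days_in_program": 400},
    {"still_enrolled": False, "exit_destination": "shelter",   "days_in_program": 90},
]
print(housing_stability_rate(clients) >= RETENTION_TARGET)  # True (4/5 = 0.80)
print(average_stay_days(clients) >= AVG_STAY_TARGET_DAYS)   # True (402 days)
```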

The contrast between these two sets of goals illustrates the shift from addressing homelessness as a personal condition in need of rehabilitation to addressing homelessness as a condition resolved by achieving housing stability. In 1998-99, this program would have considered clients “successful” if they were sober but still homeless. In 2006, clients are “successful” only if they remain housed and are no longer homeless.

The full evaluation report includes all programs that were evaluated during the period and is provided to each agency for distribution to program and management staff. It is hoped that agency leadership not only shares the report but also uses the measures to communicate their vision for program and client outcomes. The ability to benchmark programs against other programs operating within the community is also helpful.

Program and financial data were readily available to the CoC Steering Committee due to the HUD requirements for submission of annual reports. Upon closer review, however, programs that experienced program and agency administrative problems were often unable to produce reliable, accurate client and financial data. This lack of administrative capacity was also usually correlated with poor program performance.

Providers have resisted the use of standardized measures, citing concerns about differences in admission criteria, program design, and resources. Initially, some providers were more focused on service and treatment delivery than on housing stability, and were thus resistant to having their programs’ performance evaluated on the basis of attainment of stable housing.

The conduct of annual program evaluations is also not without cost. The Steering Committee’s process requires the services of an outside evaluator and two or three Steering Committee members who participate in the site visits. The evaluator is responsible for reviewing program documentation and reports, communicating with the provider, coordinating and participating in site visits, and summarizing findings. Providers also absorb staff costs related to preparing for the evaluation, participating in site visits, and responding to the reviewers’ report.

As renewal grants are now required to be limited to one-year terms, rather than three- to five-year terms, the number of programs reviewed is increasing each year. The need for annual program evaluation is being questioned, as overall program performance has improved over time and nearly all programs consistently perform at high levels. The Steering Committee is considering the efficacy of conducting biennial reviews for high-performing programs and reserving annual program evaluations for programs with sub-par performance.

Another challenge relates to the timing of the design of program evaluations. All too often programs are designed for implementation, with evaluation measures as an afterthought or treated only as a grantor-imposed requirement. Thus, program evaluation measures may be perceived as irrelevant to the program, not measurable based on data collection instruments, and/or too costly for implementation.

Another challenge is that programs change over time while their evaluation methods may not. The Steering Committee observed this when a program shifted from an abstinence-based sobriety housing model to low-demand safe haven programming. Attainment and maintenance of sobriety were no longer relevant as a measure of self-sufficiency, but measuring reductions in substance use, while more relevant, was also more difficult. This particular provider was also reluctant to concurrently reduce admission barriers (i.e., be less selective in admission) and raise housing outcome expectations, as it believed that serving a more “difficult” population would mean lower housing outcomes. Based on local experience and the national literature, however, the Steering Committee required that housing outcome goals be higher than under the prior program design.

Recently, the Community Shelter Board has begun publication of quarterly program indicator reports from the HMIS. Most HUD SHP–supported programs submit data into the HMIS, and Shelter Plus Care programs will be added over the next year. The following measures are reported for each program:

  1. Number served
  2. Program occupancy (average number of units occupied)
  3. Housing stability (average length of stay)
  4. Housing outcomes (number remaining in supportive housing or moved to other permanent housing destination)
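As an illustration of how these four indicators might be derived from HMIS enrollment records, consider the following sketch. The measures mirror the list above, but the enrollment-record fields, helper names, and example data are hypothetical; they are not the actual HMIS schema or CSB's reporting code.

```python
# Hypothetical sketch of the four quarterly indicators listed above,
# computed from simplified HMIS-style enrollment records. Field names
# are assumed for illustration; they are not the actual HMIS schema.

def quarterly_indicators(enrollments, total_units, months_in_period=3):
    """Return the four program indicators for one reporting period."""
    served = len(enrollments)  # 1. number served

    # 2. Program occupancy: average number of units occupied per month.
    unit_months = sum(e["months_occupied_in_period"] for e in enrollments)
    avg_units_occupied = unit_months / months_in_period

    # 3. Housing stability: average length of stay (days).
    avg_stay = sum(e["days_in_program"] for e in enrollments) / served

    # 4. Housing outcomes: remaining in supportive housing or moved
    #    to another permanent housing destination.
    positive = sum(1 for e in enrollments
                   if e["still_enrolled"] or e["exit_destination"] == "permanent")

    return {
        "number_served": served,
        "avg_units_occupied": avg_units_occupied,
        "avg_length_of_stay_days": avg_stay,
        "positive_housing_outcomes": positive,
    }

# Invented records for a 10-unit program over one quarter.
enrollments = [
    {"months_occupied_in_period": 3, "days_in_program": 700,
     "still_enrolled": True,  "exit_destination": None},
    {"months_occupied_in_period": 3, "days_in_program": 365,
     "still_enrolled": True,  "exit_destination": None},
    {"months_occupied_in_period": 2, "days_in_program": 200,
     "still_enrolled": False, "exit_destination": "permanent"},
]
print(quarterly_indicators(enrollments, total_units=10))
```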

Results are compared for compliance against community standards, or against program standards where those are higher than the community standard. CSB also aggregates data across programs to report on results for each system as a whole (i.e., family shelter, adult shelter, and supportive housing). In the future, CSB intends to include clients’ demographic and other key characteristics (gender, age, race, household type, disability, education, homelessness history, etc.) to better understand program results. As the shelter and housing systems refine their assessment processes, it will become possible to define risk-adjusted outcome targets and improve matching between programs and clients.
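A simple way to express the comparison and roll-up just described: each program is judged against the higher of the community standard and its own program standard, and program-level results aggregate into system-level totals. The sketch below is hypothetical (the standards, field names, and figures are invented) and assumes per-program results are already computed, for example by the indicator sketch above.

```python
# Hypothetical sketch of standards comparison and system-level roll-up.
# The standards and program results below are invented for illustration.

COMMUNITY_STANDARD = {"occupancy_rate": 0.95, "stability_rate": 0.80}

def applicable_standard(measure, program_standard):
    """Use the program's own standard when it exceeds the community's."""
    return max(COMMUNITY_STANDARD[measure], program_standard.get(measure, 0))

def in_compliance(results, program_standard):
    return {m: results[m] >= applicable_standard(m, program_standard)
            for m in COMMUNITY_STANDARD}

def system_report(programs):
    """Aggregate program results into one report per system (e.g.,
    family shelter, adult shelter, supportive housing)."""
    report = {}
    for p in programs:
        system = report.setdefault(p["system"], {"served": 0, "positive": 0})
        system["served"] += p["number_served"]
        system["positive"] += p["positive_housing_outcomes"]
    return report

# Example: one program holding itself to a stricter stability standard.
results = {"occupancy_rate": 0.97, "stability_rate": 0.82}
print(in_compliance(results, {"stability_rate": 0.85}))
# {'occupancy_rate': True, 'stability_rate': False}

programs = [
    {"system": "supportive housing", "number_served": 40, "positive_housing_outcomes": 35},
    {"system": "supportive housing", "number_served": 25, "positive_housing_outcomes": 21},
]
print(system_report(programs))
# {'supportive housing': {'served': 65, 'positive': 56}}
```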

To provide accountability to the community and promote transparency, CSB posts all program evaluations and indicator reports to www.csb.org. This transparency has been very powerful in achieving greater program and system accountability for client results. While some providers have expressed concern about this practice, it is overwhelmingly supported by funders, providers, and others. Although there was concern about the potential for political fallout (e.g., loss of local government funding) if programs did not achieve planned results, this has not occurred. Consistently low-performing programs have improved their performance, changed the program model, or ended the program. The elimination of programs has been both voluntary and the result of funding withdrawal. The overall result is better-performing programs that address higher-priority community needs.

By focusing on a limited number of indicators that are directly related to the overall community goal of ending homelessness, it is feasible to use the HMIS to report on impact across programs and for the overall system of care. This approach is feasible for communities across the country to implement. While providers may want to track and report on other measures (e.g., completion of treatment or job placements), those measures would vary by program and thus be difficult to implement across all programs. By keeping the approach simple, communities will be more successful at implementation and more effective at communicating progress and challenges to the public and decision makers.

As the Columbus experience illustrates, creating accountability systems and performance measures is possible, but not without challenges. Including providers, funders, and other community leaders in the process can help to encourage change, and transparency can ensure that problems and issues are confronted in an open and forthright manner. Most importantly, the Columbus experience shows how deliberate goal setting, accompanied by consistent and clear performance measurement, can be used to move both providers and the service system overall in a desired policy direction and, ultimately, change the configuration of the service system consistent with the goals of the local planning authority. Finally, because most agencies have multiple funding streams, the measurement system should be constructed so that the basic measures (stable housing, employment and/or income, linkage to needed services such as mental health care, and improvements in education/skills) can be used to respond to a variety of grant reporting requirements.
