Perhaps the most valuable use of HMIS data beyond its reporting functions is the data capacity it creates for longitudinal, multisite, multisystem research. The Congressional directive authorizing the HMIS initiative refers to the need to use HMIS data to determine whether people served in homeless assistance programs are accessing mainstream social welfare programs and whether mainstream systems are shifting people and costs onto the homeless assistance system. Indeed, this may prove to be the most powerful use of HMIS data if it can encourage larger service systems to dedicate additional resources to this vulnerable population. Administrative data integration projects, or “data linkage” efforts, are not without challenges. Yet, as the surge in cost and cost offset studies mentioned previously reveals, communities are becoming increasingly savvy about how to access these data sources and have had some significant successes, even without full-scale HMIS data infrastructures. In this section, we briefly consider the potential opportunities for administrative data linkages and some of the challenges that must be overcome.
A potential research agenda for advancing our understanding of homelessness based on data integration efforts has been summarized elsewhere (Culhane & Metraux, 1997). Among the most fundamental issues to address is the degree to which the homeless system and other social welfare institutions share common populations. From the perspective of the mainstream systems, particularly those that invest heavily in institutional care (hospitals, foster care, corrections), the rate at which people leaving their care become homeless would presumably be of keen interest. From the perspective of homeless assistance systems, an important issue is the amount of shelter demand that is accounted for by people exiting mainstream systems. In both cases, researchers could use event history analysis to address these questions and to identify risk factors that distinguish these subpopulations from their respective reference populations. Administrators could conduct periodic database merges to assess whether efforts intended to reduce discharges to homelessness are working.
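The kind of periodic database merge described above can be sketched in miniature. The following Python example is purely illustrative: the record layouts, hashed identifiers, and the 90-day window are hypothetical assumptions, not drawn from any actual HMIS or corrections system. It joins institutional release records to shelter entry records on a shared identifier and flags releases followed by a shelter entry within the window — a simple "discharge to homelessness" indicator of the sort an administrator might track.

```python
from datetime import date, timedelta

# Hypothetical corrections release records: hashed client ID -> release date.
releases = {
    "a1f3": date(2005, 1, 10),
    "b7c2": date(2005, 3, 5),
    "d9e4": date(2005, 6, 20),
}

# Hypothetical HMIS shelter entry records: (hashed client ID, entry date).
shelter_entries = [
    ("a1f3", date(2005, 2, 1)),   # entered shelter 22 days after release
    ("d9e4", date(2006, 1, 15)),  # entered well outside the window
    ("f0a8", date(2005, 4, 2)),   # no matching release record
]

def discharges_to_homelessness(releases, entries, window_days=90):
    """Return IDs of clients who entered shelter within window_days of release."""
    window = timedelta(days=window_days)
    flagged = set()
    for client_id, entry_date in entries:
        release_date = releases.get(client_id)
        if release_date is not None and timedelta(0) <= entry_date - release_date <= window:
            flagged.add(client_id)
    return flagged

print(sorted(discharges_to_homelessness(releases, shelter_entries)))  # ['a1f3']
```

A production linkage would of course confront the harder problems noted in the text — consent, probabilistic matching of imperfect identifiers, and legal restrictions on use — but the core merge logic is no more than this.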
A second general class of questions relates to the impact of homelessness on other service systems. The cost studies reviewed earlier are an example of these efforts. The cost offset studies are a related use, serving evaluative purposes associated with a given intervention. Evidence of a particular type of system use (e.g., inpatient mental health treatment) is also an indicator that can be used in various research projects, as a control variable, or as a moderating variable in models seeking to examine utilization dynamics or program effectiveness.
A cross-system utilization analysis could also be used to determine program eligibility — for example, for programs that target high service users. However, in most cases, uses of integrated data are restricted by law to planning, auditing, and research functions; the data cannot be used for client contact, eligibility determination, or any other means of identifying individuals unless clients provide written consent.
The social welfare systems whose data could serve as valuable linkage sources include, but are not limited to: public assistance, various health service records, corrections, vital statistics, public and assisted housing, criminal justice, child welfare, public education, and earnings. Linkage with each of these data sources could form the basis of mainstream program targeting, program design, evaluation, and policy analysis across a wide variety of program areas.
Finally, address data can be used to study patterns of residential instability and moves among households that become homeless. Addresses provide a spatial distribution of the places people lived before they became homeless. This can be used as a means of studying underlying causal processes in neighborhoods or in the housing market more generally, and for geographic targeting of prevention programs. Through integration with other housing databases, researchers can also examine building- or unit-level risk factors or triggering events (e.g., utility terminations) that may present opportunities for intervention.
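As a minimal sketch of the geographic-targeting idea, the snippet below tabulates the last prior addresses reported at shelter intake by neighborhood and ranks the results. The neighborhood labels and the assumption that addresses have already been geocoded to an area label are illustrative only; a real analysis would work with census tracts and rates rather than raw counts.

```python
from collections import Counter

# Hypothetical last prior addresses reported at shelter intake,
# already geocoded to a neighborhood label.
prior_neighborhoods = [
    "North End", "North End", "Riverside", "North End",
    "Riverside", "Hilltop", "North End", "Riverside",
]

# Rank neighborhoods by how many shelter entrants last lived there;
# the top of the list suggests where prevention resources might be targeted.
ranking = Counter(prior_neighborhoods).most_common()
for neighborhood, count in ranking:
    print(f"{neighborhood}: {count}")
```

Normalizing these counts by each area's household population, as a rate per 1,000 households, would be the natural next step before directing prevention programs to particular neighborhoods.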
Research of this nature is not possible without the cooperation of the agencies that have responsibility for maintaining these data. Obtaining data access can be very complicated. However, the federal government could provide incentives or even requirements for routine data matches through its mainstream programs. For example, the federal mental health block grant program already requires states to report how many of the people with severe mental illness in their respective states are homeless and what mental health services are provided to them. It is possible that this could be answered more precisely and consistently through a database merge, perhaps on an annual basis. State Interagency Councils on Homelessness, formed in some states in response to the federally sponsored “Policy Academies,” could be the entities that use such data for their own planning and priority setting. The federal government could pilot data merge projects among willing state volunteers to demonstrate the feasibility and cost of requiring such reporting of all grantees. Similar approaches could be taken to improve state reporting regarding homeless children, prisoners re-entering society, and youth aging out of foster care.
Given their relatively low cost and temporal efficiency, administrative data linkage projects based on HMIS implementations could well be the basis for a rapid expansion of research on homelessness and on the accountability and effectiveness of homeless assistance programs. Indeed, based on the recent experiences reported here with 10-year plans, such an expansion appears to be already underway. However, as has been observed in the growing number of cost and cost offset studies, many of these efforts could benefit from the participation of academic partners and from federal support. Organized and sponsored programs of research are necessary to bring needed cohesion and value to these and other projects like them.
Until now, we have focused on the literature and reporting tools that inform system design, policy, and program planning. Another area in which there has been some progress since 1998 is program assessment and performance measurement. While most communities are still working to implement their HMIS, some communities have gone further by using HMIS and other program data to assess how programs are doing relative to one another in terms of client outcomes. A few others have used such data to award performance incentives to programs that meet stated objectives, such as improved housing placement rates or shortened lengths of stay (“performance-based contracting”). Such uses of HMIS and program performance data provide homeless assistance system administrators with systematic tools with which they can attempt to manage or shape provider behavior. Such tools can help to ensure that programs are working to serve designated client populations, delivering the intended services, and achieving the desired outcomes. While some promising practices have emerged in this area, fully operational models are still few and far between. Only a few of the larger and more sophisticated homelessness service systems are likely to include ongoing performance assessments, let alone performance-based contracting.
In a recent overview of outcome measurement in homeless assistance programs, Crook et al. (2005) characterize an outcome measurement system as “a comprehensive, systematic approach to identifying, tracking, and reporting data that reflect the extent to which program participants experience the intended benefits or changes as a result of service provision” (p. 379). However, the authors state that they were unable to locate a single comprehensive outcome measurement instrument that could be used for the homeless assistance system of care. Instead, at the client level, there are instruments that reflect the impact on a single domain, primarily mental illness or substance abuse. In this section, we review the efforts of a model program from Arizona, where an assessment and outcome system was created that is giving providers the ability to better measure whom they serve and how they perform in terms of client progress over time. A feedback system helps providers to benchmark their effectiveness relative to other providers, and to meet and discuss program strengths and weaknesses. Following that case study, we will also examine the experience of Columbus, Ohio, where regularly collected and analyzed program data have enabled that city to shape its service system to meet stated policy objectives.