Beyond the broad sector-specific trends in decision-making discussed above, individual foundations and USG agencies often engage in planning and development processes specific to their organizations. These are treated briefly in this subsection, as well as in Chapter V, where we report on a few examples of organizations and initiatives that could serve as case studies. Such issues will be addressed at length in the case studies themselves.
EMCF has been cited as an example of a foundation that has engaged in a very deliberate rethinking of its approach to decision-making. In the late 1990s, the Foundation chose to move away from several broad programmatic areas (poverty, child welfare, education) to a single area (youth development). This decision was innovative and potentially effective, at least insofar as it responded to the criticism, voiced regularly in the literature, that foundations tend to spread their resources across too many areas. Moreover, EMCF's movement away from program to operating grants appears to have gone against the grain in the private philanthropic sphere. Another interesting aspect of the new EMCF approach is that it applies strategies from the for-profit sector to the Foundation's grantmaking process, including due diligence, business planning, and organizational performance tracking. Again bucking a foundation trend, EMCF emphasizes multiyear grants to allow for the sometimes painstaking work of organizational development. Finally, the Foundation's close relationship with its grantees, including the provision of technical assistance, reflects the broader trend of venture philanthropy, where emphasis is placed on hands-on work with grantees to ensure their success. None of these strategies is innovative in and of itself, but the comprehensive shift coming from within the foundation world represents a new way of envisioning the donor-recipient relationship. This shift is responsive to some of the common criticisms of private philanthropy.
As mentioned above, a few foundations have made noteworthy inroads on tracking their own broad organizational performance. The metrics developed by the Robert Wood Johnson, William and Flora Hewlett, and Annie E. Casey foundations are innovative in that they present a new way of examining success at the foundation, as opposed to the program, level. The Robert Wood Johnson Foundation's system of comprehensive performance measurement is multifaceted, and at least three aspects deserve specific mention. First, the Foundation developed a Scorecard; this is released annually and reports outputs at the foundation level, outcomes from key grantees and foundation-wide, and changes at the population level in the broad health indicators its programs seek to address. Second, the Robert Wood Johnson Foundation developed and implemented internal assessments for each of its own programmatic teams, as well as employee surveys to gauge attitudes about the Foundation's work. Third, the Foundation set up a public archive of all the data from its sponsored research. A case study of the Foundation's focused and sustained attention to organizational assessment, coupled with its willingness to make data public, points to improved focus and strategy in grantmaking, increased innovation, and better alignment of board and staff goals (Guidice and Bolduc 2004).
The William and Flora Hewlett Foundation developed its expected return metric to support the systematic selection of grantees, specifically by considering the Foundation's comparative advantage in a given area, as well as the presence of other funders. Expected return is calculated by multiplying the benefit (of an intervention under optimal conditions; usually drawn from extant data), times the likelihood of success (calculated internally), times the Foundation's contribution (adjusted for varying roles in each situation), divided by the program's total cost. The metric is fairly straightforward, although data for each input may be of varying availability and quality. Because expected return considers other funders in the equation, the consistent use of such a metric by more foundations could support sector-wide improvements in effectiveness.
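The calculation described above can be sketched as a simple function. The figures and parameter names below are purely illustrative assumptions, not data or terminology from the Hewlett Foundation itself.

```python
def expected_return(benefit, likelihood_of_success, foundation_share, total_cost):
    """Sketch of the expected return calculation described above:
    benefit (of the intervention under optimal conditions), times the
    likelihood of success, times the foundation's share of the contribution,
    divided by the program's total cost. Parameter names are illustrative.
    """
    return (benefit * likelihood_of_success * foundation_share) / total_cost

# Hypothetical program: $10M benefit under optimal conditions, 60% estimated
# likelihood of success, the foundation credited with half the contribution,
# and a $2M total program cost.
print(expected_return(10_000_000, 0.6, 0.5, 2_000_000))  # prints 1.5
```

Because the result is a ratio of (risk- and role-adjusted) benefit to cost, a foundation could in principle use it to compare candidate grants on a common scale, which is what makes the metric's broader adoption attractive.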
The Annie E. Casey Foundation's results-based measurement approach was developed through an iterative process, not unlike the Robert Wood Johnson Foundation's development of comprehensive performance measures, with heavy involvement from foundation leaders and staff. The Annie E. Casey Foundation's process is also noteworthy because it involved grantees, a step taken intentionally to gain support for new reporting requirements. Identifying performance measures required the Foundation to articulate the strategy for each program area with great precision. As one leader at the Foundation put it, "In order to measure how we were doing, we needed to be as clear as we could possibly be about what we intended to do" (Kaufmann and Searle 2007, p. 7). As such, the process proceeded, to some extent, in reverse order, with "the concept of measurement driving [their] thinking about what results should be" (ibid.). The system considers results in three categories: impact (the direct effect of a grant on beneficiaries), influence (the effect on behaviors of people not directly touched by the grant), and leverage (additional support beyond the Annie E. Casey Foundation's contribution that the grant built or attracted). The Annie E. Casey Foundation's framework does not allow the Foundation to overcome some of the challenges (already cited) associated with measurement, for example, the availability and consistency of measures, but the process appears to have strengthened program strategy and enhanced thinking about different levels of performance. For example, in the Foundation's Education Program, the process resulted in a formal expression of the rationale behind the results that the K-12 Education Program sought.
This included a description of their vision for core results; identification of three critical barriers to achieving the vision; elaboration of the consequences of these barriers; and articulation of the specific role of the Education Program in overcoming them, which also spelled out the results for which the Program would be accountable (Kaufmann and Searle 2007, pp. 7-8).
The Bill & Melinda Gates Foundation has sought to address the challenge of measuring progress, not merely at the foundation level, but at the societal level as well. With an initial grant of $105 million in 2007, Gates helped to establish the Institute for Health Metrics and Evaluation (IHME). The Institute works to develop and compile data on five areas of public health: health outcomes, health services, resource inputs, metrics for decision-making, and evaluation. The purpose of IHME's work is "to put as much information as possible about health in the public domain in a way that is useful, understandable and credible to enable policy-makers and decision-makers to craft the best policies with the highest benefit for their own context" (IHME web site). The Institute has recently published statistics that challenge the reporting of the World Health Organization (WHO), which as a public agency could be prone to the interference of politics in its data gathering and reporting (The Seattle Times, April 9, 2008).