While the literature reviewed here supports the general contention that measurement remains a challenge for both federal agencies and foundations, both sectors appear to have embraced the challenge to some degree, and successes in this area are not uncommon. Interestingly, both sectors appear to have moved beyond the notion of measurement as primarily a means of demonstrating accountability or impact; they also seek to measure progress to inform their own broad decision-making processes (Kramer 2007; MCC 2008a). This tendency seems more pronounced in the private philanthropic sector, where one survey of foundation leaders revealed little evidence that evaluations were used to determine grant renewal or termination decisions (Kramer 2007, p. 15). Rather, grant programs often have critics or supporters within the organization who may influence decision-making more heavily than evaluators do. The survey revealed that the evaluations most useful to foundations are those that inform planning and implementation and track progress toward the organization's broader goals (Kramer 2007; cf. Guidice and Bolduc 2004; Levinger et al. 2007; William and Flora Hewlett Foundation 2008).

Toward these ends, the Robert Wood Johnson Foundation developed a system of comprehensive performance measurement (measuring progress against the foundation's theories of change and indicators of performance); the William and Flora Hewlett Foundation (WFHF) developed an expected return metric (a quantitative process for evaluating potential investments based on consistent metrics); and the Annie E. Casey Foundation (AECF) embraced results-based accountability. On a much larger scale, the Bill & Melinda Gates Foundation has dedicated significant funding to the Institute for Health Metrics and Evaluation at the University of Washington to develop data systems that support the monitoring of public health issues at a societal level.
In the public arena, MCC seeks to implement results-based management, which uses data to inform aid giving and management even as it focuses on results; MCC's core indicators have been used by other agencies, including USAID, to guide decision-making. Each of these approaches is discussed in greater detail below, but it is worth noting here that the literature suggests foundations may achieve their best successes when applying metrics at earlier points in the continuum, while the USG appears to apply metrics more consistently at all stages, with an emphasis on evaluation for accountability. This is illustrated by the example, presented at the beginning of this section, that the State Department's (2008) conception of an initiative's life cycle explicitly includes a phase for post-implementation evaluation, whereas MacArthur's (Benedict 2003a) change phase may imply an evaluative component that neither receives the prominence USG agencies give evaluation nor is necessarily linked to accountability.