There are differences of opinion between USG and foundations, as well as among foundations, as to what constitutes useful evidence. Our study found cases in which funders sought and accepted relatively soft evidence of their programs' impacts. Ashoka, for example, requires Fellows to submit reports that track their progress against benchmarks. Yet self-reports, benchmarks that vary across respondents, and biased response rates produce evidence that might not be acceptable to organizations with different evaluation standards. For example, it would be difficult to reconcile Ashoka's approach with that of MCC or the many federal programs that encourage or even require independent, often experimental evaluation before programs can be reauthorized. The point is not that one approach to evaluating effectiveness or success is superior to another in all instances, but rather that stakeholders, in interacting, need to be aware of their potentially conflicting approaches.
Nor is the Ashoka example meant to portray the foundation sector as taking a soft approach to evidence in general. Indeed, the Gates Foundation has invested hundreds of millions of dollars in the development and dissemination of indicators to gauge the impact of its own and other philanthropists' programs. Gates is also investing in rigorous evaluation methods. Moreover, several case study foundations have developed particularly innovative ways to measure progress at different levels. Hewlett and RWJF both consider outcomes and/or impacts at the program, foundation, and societal levels. Similarly, the Rockefeller Foundation has developed (though not yet implemented) a monitoring and evaluation program to track changes at the user, provider, and funder levels.