USG and foundations operate under very different constraints. In implementing health and social service agendas or initiatives, it is important that players from each sector understand the culture and constraints of the other. This is especially true of more intensively collaborative efforts, but such understanding supports all types of interaction around an issue.
The primary constraints that emerged in the USG case studies lie in the rules and regulations of public agencies, public funding mechanisms, and bureaucratic organizational structures. As respondents noted, the problems that federal agencies can address, the means they can use, and the resources available to them are circumscribed by legislation and federal policy priorities, as well as a host of rules and regulations, none of which is easily altered or bent. For example, a respondent from MCC asserted that USAID's approach to many problems is limited by earmarks on funding: the legislation authorizing aid often ties it to a specific geographic region and/or programmatic area. Also, USG reporting requirements, typically meant to promote accountability, sometimes can pose challenges to collaboration. For example, the need to link spending to demonstrated results (as is the case, to varying degrees, for MCC, PEPFAR, and PMI) can, according to one respondent, serve as a deterrent to collaboration, since impacts could not necessarily be linked to one or the other donor's efforts. Related to this, but not limited to the public sector, is the burden placed on aid recipients if they must comply with the many and varied reporting requirements of different funders (including various federal agencies, as well as foundations and others).
In several cases, USG funding mechanisms were cited as a source of difficulty for philanthropic efforts and as challenges to interaction with foundations. While the federal government has the resources to make long-term commitments to different initiatives, yearly appropriations processes introduce uncertainty. Moreover, one respondent lamented that USG funding decisions often are made at the last minute, making planning difficult. MCC's multiyear compacts, relying on untied funds, were cited as a positive development.
The nature of some federal agencies' organizational structures was cited by foundations and agencies alike as a constraint. Decision making and reaction to events on the ground were said to be slow because of bureaucracy. Moreover, the complexity of federal structures, coupled with the many variations in structure across agencies, can make it difficult for parties seeking to interact with an agency to know where the authority lies to make decisions and direct resources.
Foundations also operate under constraints that affect their ability to engage in some forms of interaction. The major constraints are their relatively short time horizons, and organizational cultures that view evaluation and accountability in terms quite distinct from those widely accepted in USG. The short time horizons probably are related to foundations' focus on cutting-edge, innovative programming, which case study respondents suggested may yield a self-image of planting seeds, rather than focusing on long-term cultivation of an initiative. This aspect of foundation culture may actually provide an opportunity for supplementary action or communication with USG, which typically lacks the flexibility to innovate and the tolerance for risk that are more characteristic of foundations. Communication could facilitate situations in which foundations support high-risk or experimental approaches and, if they prove feasible, USG then supports scaling them up.
Whereas USG has put great effort in recent years into developing accountability structures (albeit with results that may, as yet, remain unclear), foundations have less motivation for such procedures. Some even have argued that foundation culture is hostile to measuring outcomes, not to mention impacts. As Michael Porter and Mark Kramer point out in their seminal article, "Philanthropy's New Agenda," foundations' own internal processes provide the wrong incentives for adequate measurement: "Failure risks censure," they note, "but success adds no reward" (1999, p. 129). Cases like Ashoka and Hewlett illustrate a less rigorous approach to evaluation and a more qualitative understanding of impacts than at USG agencies, as highlighted below.