The Chronicle of Philanthropy

Opinion

More Grant Makers Want to Evaluate Their Work but Struggle With How

October 2, 2011

Almost a decade ago, I got a phone call I’ll never forget. I was six months into my job as the first employee of the Center for Effective Philanthropy, and we had just sent a survey to CEOs of large foundations to learn more about how they approach assessment.

My phone rang, which in itself was noteworthy because it didn’t happen very often back then, and the caller introduced herself. It was a name I instantly recognized as a senior person at a significant foundation, but because I couldn’t imagine why she’d be calling me, I figured it was a friend playing a joke on me.

“Uh-huh,” I said sardonically, after she introduced herself.

Then, as she started talking, I was horrified to realize that the caller was indeed who she said she was. She had received our survey—and she was angry.

She told me that it was “dangerous” and “inappropriate” to ask CEOs of foundations about overall effectiveness because of the diversity of foundation structures and practices—and the potential for people outside philanthropy to use the data we’d develop in ways that would harm foundations.


She said she had called key leaders at two foundations that provided the center’s early support to complain that they were paying for such a study. To their credit, they had told the disgruntled executive that if she had concerns, she should express them directly to me.

I listened carefully and tried to explain why we thought getting a snapshot of assessment practices across large foundations was important.

I said that I thought assessment was crucial to learning and improvement but that it was also much more difficult in the nonprofit world than in business, government, and elsewhere—and more difficult still for foundations. I emphasized that our goal was to be helpful to foundation leaders as they undertook the challenge of gauging their performance.

But the foundation executive was not persuaded and, it turned out, she was hardly alone in her skepticism.

A year later, another large foundation, one that made grants to organizations like the Center for Effective Philanthropy, declined a proposal it had asked us to submit with this explanation, delivered by a program officer via e-mail: “It appears that there is a fundamental disagreement, internally, about whether philanthropy—as a field or domain—can or should be evaluated, measured, and/or assessed in any form.”


Today I can’t imagine I’d get either the executive’s call or the program officer’s e-mail. Foundation leaders have a new attitude about performance assessment.

In a survey of foundation CEOs we conducted this year, nearly three-quarters said assessment of foundation effectiveness is among their highest priorities.

And most agree with the statement that foundations have made “great progress” in the past decade in assessing their work.

I am not suggesting that the Center for Effective Philanthropy is responsible for this shift (though I hope that we played some constructive role).

Nor do I believe that no foundations were grappling with these issues before 10 years ago.


As William Schambra of the Hudson Institute has noted in these pages and elsewhere (although, to him, it is a lament), the “mania to measure” dates back a century—to the earliest days of the mega-foundation in the United States.

Contrary to news-media reports that make it sound like no one cared about results in philanthropy before the Bill & Melinda Gates Foundation appeared on the scene, institutions like the Robert Wood Johnson Foundation have been focused on assessment and evaluation for decades.

Yet, still, something real has changed in the past decade, and it comes through in our survey results, described in a report we released this month entitled “The State of Foundation Performance Assessment.”
There is a strong desire, at least among the majority of the 173 foundation CEOs who chose to respond to us, for more and better data to understand effectiveness.

But the most important question is not whether attitudes about assessment have shifted but whether practices have changed. Here, the news is more mixed.

On the positive side:


  • Our latest survey shows that foundations are using a broader range of assessment data than they were a decade ago. More than 90 percent are conducting formal evaluations of their work (the median spending on evaluations is 2 percent of a foundation’s grant-making budget). Two-thirds use surveys of grantees to get feedback—and our analysis of the grantee surveys we conduct for foundations suggests that foundations that repeat this process are often making real improvements that their grantees experience.
  • Nearly half are combining assessment data into an overall assessment, following the lead of foundations like Robert Wood Johnson, which has for years shared a “scorecard” with its board.
  • Nearly half are either coordinating measurement with other foundations or considering doing so.

But there is also cause for concern.

  • Foundations continue to struggle with how to involve their boards in assessment: 70 percent of CEOs say they want their boards more involved in this work. Challenges cited include a sense that the board lacks a deep enough understanding of the foundation’s grant-making priorities, unrealistic board expectations of what can be captured, and a lack of support for allocating the resources to do this work.
  • A relatively small proportion of foundations get feedback from those who should matter most—the intended beneficiaries of their work—whether through surveys or focus groups or in-person gatherings. That is the case even though foundation CEOs who say they get that kind of feedback also tend to report higher levels of confidence in the effectiveness of their work.
  • Grantees frequently don’t find foundation reporting and evaluation processes to be particularly helpful to them.

Too often, foundations still appear to put their own needs, or those of their boards, ahead of the needs of those doing the work they support when setting assessment priorities.

Assessing whether foundations are effective is tough work, requiring a mix of indicators and a range of data sources—both quantitative and qualitative. There are no simple formulas or ratios, no easy analogs to business measures like return on investment or profitability. Assessing the work of a foundation is much more complex and intellectually challenging than gauging the performance of Google or General Electric.

As a result, I know of few, if any, foundations that think they’ve really nailed this issue.

But those that do the hard work of defining a set of inevitably imperfect indicators that show progress in carrying out their strategies—and reviewing them routinely with their staff and board members—reap the benefits in the form of better-informed decision making.


Foundations still have a long way to go in the difficult task of assessing their effectiveness in carrying out their missions.

But we can and should celebrate the fact that we’re no longer arguing about whether, as the program officer wrote to me, “philanthropy—as a field or domain—can or should be evaluated, measured, and/or assessed in any form” but instead discussing how best to do it.
