
Opinion

Keep Charity Evaluation Tests Relevant

January 15, 2012 | Read Time: 6 minutes

When I left graduate school in 1989 to go to work evaluating one of the first federally financed community-based efforts to prevent AIDS, I was armed with the basic measurement skills I needed.

But it turned out that the lesson that changed my career came not from my statistics courses but from Danny Keenan, who was running a program to teach teenagers how to avoid the disease.

Danny made me understand that evaluation is fundamentally about finding the best ways to help nonprofits do their jobs better. And to do that well, evaluators need to listen carefully to the people working on the ground, understand their ambitions and values, and provide data to answer their most pressing questions.

This lesson has gotten lost in today’s polarizing debate about whether we should evaluate nonprofits in a businesslike fashion or let nonprofits do their work guided by values, vision, and experience, unencumbered by often irrelevant and burdensome measurement.

It’s time for all of us to learn from Danny. When I met him, the country was facing a scary time fighting AIDS. Little was known about the disease, and it was spreading fast. Danny’s prevention program, a collaboration of two San Francisco nonprofits—Huckleberry Youth Programs and Larkin Street Youth Services—was trying to deliver safe-sex messages to young people who were homeless or faced other struggles.


Danny understood young people. Even better, he motivated everyone around him to care deeply. He had studied for the priesthood but dropped out when he realized he was gay. Danny still had a fire in his belly, and when he spoke, he filled the room with a religious fervor and belief that change was both necessary and possible.

He was a leader I wanted to help, but the first thing he ever said to me in serious conversation was, “Fay, I don’t give a —- about this evaluation. You do what you need to, but I am here to save lives, and I just want you to stay out of my way.” I was deflated.

But I was also inspired to meet the challenge of making the evaluation meaningful.

Over the weeks, months, and years ahead, we talked about his questions and concerns—about the effectiveness of his team’s outreach efforts, where they succeeded and where they fell short. He wanted answers to questions like: What type of teenagers was his program reaching? Did they increase their understanding of how HIV was transmitted? Did they have friends or any social support? Did they abstain from risky behavior? Did they use condoms? What kinds of prevention efforts were associated with the best outcomes?

Once we got going, it turned out that Danny had a lot of questions he wanted answered.


I designed a way to collect data that answered his questions in time to be most useful for his budget planning and decision making. He began to understand how data could be useful, not just as accountability for the federal agency that financed his work but also as a tool to improve outcomes for young people and to advocate for changes to make the program as effective as possible.

Several years later, Danny was a leading force in the development of a comprehensive clinic for teenagers in San Francisco—the Cole Street Youth Clinic—providing medical care, psychosocial counseling, and prevention education in one place. He was able to make the case for such a clinic because he had credible information on the limits of what a prevention program on its own could accomplish given the many other needs facing troubled young people in San Francisco.

In short: Danny used evaluation data to save more lives.

In 1992, Danny Keenan died from AIDS-related complications. Among the gifts he left to the people whose lives he touched was this crucial lesson for me: always strive for evaluation to add value to the work nonprofits are doing to help others.

Twenty years later, too many people are still waging the battle that pits measuring results against community wisdom and nonprofit know-how in the effort to save lives.


Those arguments took center stage in a special Wall Street Journal section on philanthropy in November, with articles by Charles R. Bronfman and Jeffrey R. Solomon, chairman and president, respectively, of the Andrea and Charles Bronfman Philanthropies, and Michael Edwards, a senior fellow at Demos, a social-issues think tank.

In the article, Mr. Bronfman and Mr. Solomon suggest that good intentions are not enough and nonprofits should operate efficiently and be accountable to their purpose and their donors, measuring return on investment with the same seriousness as a business does.

Mr. Edwards argues that social values should take precedence and that social change is not as easily measurable as business results. He goes so far as to argue that business discipline can be bad for the poor because it can lead philanthropists to support only what is easy to change and ignore the most intractable problems and most vulnerable people in our society.

Fundamentally, I agree with Mr. Bronfman and Mr. Solomon that more disciplined nonprofits will achieve greater results in all areas, including serving the poor. But the business language and references that they use are unnecessarily polarizing to those more rooted in a social-values perspective and make Mr. Edwards’s argument appear more credible than it is.

Specifically, it is problematic for Mr. Bronfman and Mr. Solomon to suggest that all nonprofits measure return on investment à la New York’s Robin Hood Foundation, which seeks to place a dollar value on every outcome.


Such an analysis can be appropriate and useful, especially in realms of social service or jobs, but generally it is difficult to measure economic return in the civic, social, health, and violence-prevention programs provided by nonprofits all over the globe, and often the calculations are too theoretical to be useful for decision making. I say this having led the social-outcomes measurement for one of the very first serious social-return-on-investment calculation efforts in the 1990s, with the Roberts Enterprise Development Fund. I saw firsthand both the value and the limits of calculating social returns.

But Mr. Edwards’s claim that measurement is somehow the technocratic tool of the devil is even more problematic. He stakes out a moral high ground by saying that the really important things—civil rights, poverty alleviation, community cohesion—cannot be measured and that nonprofit skills and visionary leadership will get us to our desired destinations in an organic, nonplanned, and nonmeasurable fashion. But this stance in many ways reflects even more hubris than is often associated with business-minded philanthropists.

It is now time, especially when all nonprofits have to do more with less, that we recognize that the values of measurement and community are not in opposition. Measurement is a tool to help achieve important social ambitions. Admittedly, it has not always been the most useful tool, and there is room for considerable growth on all sides—among those who do evaluation and those who commission it.

It is time to change the tenor and focus of these debates from whether we measure to how we measure and what we are learning. We can begin by listening to each other, to our respective concerns, questions, and perspectives. We can measure what matters and deliver data in constructive ways that leaders can use to make decisions. We can use language that seeks to establish common ground and interests. If we take these steps, we are more likely to evaluate in ways that are consistent with our values and ambitions and that, most important, improve our work.

Danny Keenan would have asked nothing less.
