The Chronicle of Philanthropy

Opinion

Foundations Can Learn a Lot From the People They Want to Help

November 13, 2011 | Read Time: 7 minutes

A few weeks ago, I arranged for a new cellphone plan for my family. Within a day, I received a call from the company asking about my experience. That is pretty typical these days. When I do business with a car dealership, stay in a hotel, or buy something online, within a day of my transaction, sometimes within hours, I receive a survey asking questions like, “How was the service? Was the information properly explained? Were your questions answered? Do you now understand how to operate x? What could be improved?”

Businesses know that when customers believe they have received a quality product or service, that is good for the bottom line, and satisfied customers will return and refer others.

Those of us in the nonprofit world, however, do not systematically solicit the perspectives of our intended beneficiaries.

Big institutions with paying customers, like colleges or museums, are exceptions to that rule. And while some social-service groups survey their clients, too often those surveys are poorly designed, poorly carried out, or both, and they tend to focus narrowly on whether clients were satisfied with the services they received. In those cases, the data typically skew to the positive and are difficult to interpret and act on. What does it mean that a service provider received an average rating from its clients of 5.6 on a 7-point satisfaction scale? Is that good or bad? Nonprofits have no “industry benchmark” for those measures.

Many grant makers rely on nonprofit organizations or research experts to learn what the charities’ beneficiaries need or how they view their experience.


Getting advice from grantees and experts is necessary, but by bypassing the ultimate beneficiaries as a primary source of information and experience, we deprive ourselves of insights into how we can do better.

When I first moved to the grant-making side of the table at the Bill & Melinda Gates Foundation about five years ago, after 10 years founding and running a strategic-consulting and evaluation company, my role involved exploring how the foundation gathered insights from what we called outside voices.

Grant makers have plenty of ways to solicit and consider the views of outsiders, and the Gates foundation has pursued many of them.

But it seemed to me an important and unique challenge to hear from the people we hoped our grants would help. They were all over the globe. How could we and our grantees hear their voices in ways that were rigorous and authentic and, most important, that enabled us to figure out how to do better?

One example can be found in India, where the Gates foundation’s Avahan program is working to reduce the spread of HIV.


Avahan has provided money and support to programs that seek to prevent the spread of HIV in the six Indian states with the highest rates of disease. The programs focus on communities along the nation’s biggest trucking routes and serve the people who are most vulnerable to HIV infection, including female sex workers, their clients and partners, men who have sex with men, and people who take drugs by injection.

When the Avahan team members began to design their projects, they sought advice from key constituents—service providers, clinic workers, public-health professionals, policy makers—and, crucially, from the people whose risky behavior they hoped to change. The advice from the beneficiaries proved especially valuable both in designing the program and in improving it.

Most of the people the project wanted to reach weren’t very public about the behavior that put them at risk of disease. Many female sex workers, for instance, did not look or seem like what Americans would think of as stereotypical prostitutes. Most were married women with children, participating members of their communities, sometimes wearing religious garb, who had gone into sex work to provide income for their families. The HIV-prevention effort needed to give the women ways to keep their privacy while still providing information about practices that would help them avoid infection.

A central component of the project involved peer education and outreach—hiring people to teach others like them how to practice safer behavior. Low literacy rates would pose a challenge for collecting data to monitor progress, but turning to the people the Avahan team wanted to help led to a solution. The peer educators themselves developed data-collection sheets with pictorial representations of people they contacted, topics discussed, information provided, and supplies shared. The female sex workers recorded their interactions with a small adhesive dot, called a bindi, that women in India wear as a forehead decoration.

The workers all had an ample supply of bindis, and when they pasted them on their tracking sheets, they had an instant graph of activity and gaps. The smartest researcher could not have come up with such a great solution.


The ultimate beneficiaries in this program were among the most enthusiastic consumers of data I have ever met. They pored over their own tracking sheets, aggregated the results, and looked at the trends. They collectively discussed ways to improve their numbers and results and took action, changing the places where they offered education and prevention messages to attract people most at risk and developing cooperative approaches with local police, who often came into contact with people at risk of disease.

To be sure, some grant makers and nonprofits that work in global economic development deride the participation of intended beneficiaries as “the new tyranny.” They argue that asking people to share their advice can be an onerous process and subject to manipulation, as some beneficiaries may overstate the needs of some communities and understate those of others to direct more resources their way. Of course, one has to be mindful of pitfalls, but let not the abuse of a thing be an argument against its proper use.

Another example of beneficiaries providing useful data and insight came from a Gates foundation-supported effort in the United States.

The aim was to understand—and help schools and districts understand—the high-school experience from the student perspective. The foundation developed several key partnerships: with the Center for Effective Philanthropy because of its track record developing comparative data sets to make the data constructive and easy to act on; with top education researchers to ensure validity of the measurement; with MTV to make the interface compelling to young people; and with school and nonprofit leaders to ground the work in real-world practices.

After four years, this YouthTruth effort has grown from 20 schools and 5,000 students to 215 schools and 120,000 students. The data are systematically collected in electronic format and benchmarked with comparable schools, and the results are reported in short order so schools and districts can make improvements as soon as possible. Those results are leading to change—in areas like instructional techniques and discipline policies.


In education, policy makers and others have largely relied on standardized test scores as the one and only measure of progress. Those tests are necessary but insufficient, and, as has been sadly demonstrated in several big-city districts, the use of only one measure has become an invitation to game the system. Multiple measures almost always make gaming and cheating harder to do.

What’s more, evidence is now emerging that student perceptions about their teachers and classrooms are linked to student achievement. Student opinions of school safety and quality of education are valuable in their own right, and because these data are collected and reported back in a timely fashion, teachers, district leaders, and grant makers can make immediate changes to help students improve academically.

Nonprofits and grant makers often wait for years to understand whether they have achieved their desired level of success.

Those data are important, but real-time data that are linked to long-term change let you make better decisions to help people in the present.

Signs of promise abound. Charity Navigator, one of the most prominent watchdog organizations, historically provided ratings for charities based solely on overhead and other financial data. While financial data are important, they are woefully inadequate to examine effectiveness or results. That’s why Charity Navigator is revamping its scoring system to include reviews from the people a nonprofit serves as a key part of its rating criteria.


Getting advice and thoughts from the people a nonprofit program is supposed to serve will not solve every social challenge. But we are beginning to see the power of serious and systematic feedback from those intended to benefit—not as advocacy or as academic research but as a unique source of timely information and insight.

At a time when nonprofits and government need to stretch fewer dollars further and make social programs more effective and efficient, we should not forget to look to the people we are all here to help.
