Evaluation Can Help Us Learn What Works, if We Fix It
March 7, 2018 | Read Time: 7 minutes
Talking with people about the future of evaluation in philanthropy these days can start to sound a lot like the opening of Charles Dickens’s A Tale of Two Cities.
It’s the best of times, as new data methods, tools, and analytics continue to flower and expand.
Yet it’s also the worst of times, as foundations and nonprofits struggle with the complex landscape of evaluation, and most find they have insufficient resources to productively apply the new advances.
Data is becoming more accessible than ever, but figuring out how to integrate information into decision making effectively remains a challenge for foundations and nonprofits. Despite a growing movement to ensure that evaluation incorporates the views of nonprofits and the people they serve, efforts to monitor progress and learn what works continue to serve foundations’ needs better than those of grantees and their communities.
In essence, the future that most people expect is marked, simultaneously, by the promise of greater understanding and impact and by frustration that individual bright spots don’t add up to clear and meaningful solutions. Many in philanthropy simply find themselves at a loss for how to move forward.
Yet there’s also real hope and possibility for the future of evaluation, including monitoring and learning functions that are key to any good review of a charity’s work. Experts can envision a much more positive future in which continuous learning becomes a core management tool and foundations, as commentator Van Jones once put it, “stop giving grants and start funding experiments.”
That future should include foundations, grantees, and others involved in a single cause or community sharing data, learning, and knowledge openly and widely. And it will make constituents’ feedback about what they need and what success looks like central to strategy development and review. It’s a hopeful view of how evaluation can produce the right information to help grant makers make better choices over time.
Unfortunately, if foundations and nonprofit organizations continue on their current course, the future we get will almost certainly be the equivocal one we expect rather than the brighter one we hope for.
Three Characteristics of a Better Future
Over the last two years, we at the Monitor Institute by Deloitte have been exploring how to help grant makers, both individually and collectively, begin to shape the future we hope for in regard to evaluation. As part of our “Re-imagining Measurement” project, we have spoken with more than 125 foundation executives and program staff members, evaluation experts, nonprofit leaders, data wonks, and others who depend on independent reviews.
Based on our research, we identified three characteristics that are essential to creating a better future.
The first characteristic, purpose, is about the “why” of monitoring, evaluation, and learning. The consensus in our conversations has been that organizations need to make sure they collect data and choose methods in a way that is driven by the decisions that foundations, nonprofits, and other key players need to make.
Too often, the starting point for measurement is understanding the reporting requirements and focusing on what metrics and methods to use rather than on deeper questions about what decision makers need to know to make better choices about how to achieve impact. As one expert told us, “Instead of evidence-based decision making, we need decision-based evidence-making.”
In other words, data collection, analysis, and interpretation should aim to put decision making — not methods and measures — at the center of evaluation efforts.
While this may seem obvious, many organizations have historically struggled to define and track the metrics that matter most for decision making. It is very common, for instance, when judging the effectiveness of a philanthropic effort, for grant makers to start by asking what key performance indicators are available. This is a form of the “streetlight effect”: the observational bias of searching where the light is best rather than where the answer is most likely to be found. The tendency is to rely on the data at hand rather than data that would be more useful but is harder to collect. Putting decision making at the center is the discipline of being clear first on purpose, then on approach, and only then on the right indicators.
Perspective, the second characteristic, requires that foundations and nonprofits seek feedback from the people their programs are meant to benefit and that they promote diversity, equity, and inclusion in their evaluation efforts.
Perspective is about the “who” of measurement and evaluation; it is about adjusting who gets to define what information is collected, who owns it, how it gets used, and what constitutes success.
The process itself — the collection, control, and use of data — can be infused with power dynamics and bias. While foundations have useful perspective on impact, it’s not the only — or necessarily the best — perspective.
Embracing multiple perspectives and bringing in the voices of those affected by social programs can be critical to correctly identifying social and environmental needs, understanding real impact, and engaging the beneficiaries of philanthropy in the shaping of solutions to challenges.
Aligning with other key players, the third characteristic, is about learning more productively at a scale that makes a difference. It’s about getting better at learning from and with other actors.
Nonprofits and foundations need to understand the good, bad, and inconclusive findings from other players that share their social-change missions, to better respond to the size and complexity of today’s problems.
No single organization can solve large, interconnected issues by itself. More than ever, we need systems that allow us to learn from one another’s experiences. A great opportunity exists to make a bigger difference more quickly if everyone involved can draw on the insights of individual organizations in a form that is simple to access and understand. New opportunities abound to develop collective knowledge and integrated data that promote learning at the scale today’s problems demand.
Moving Toward the Hoped-For Future
The good news is that the seeds of a better future are already being planted. The Open Society Foundations, for example, have recognized that it’s hard to focus on lessons to be learned from various projects when evaluation is considered only in relation to what grants to support and renew. So the foundations have separated conversations about funding allocations from those focused on learning from projects they supported. That way, nobody feels as though they are being graded or penalized, and what’s learned can be useful in future grant making.
Meanwhile, the Family Independence Initiative, an organization focused on economic mobility for low-income communities, provides a platform for families to connect with one another and identify what they need to improve their lives. The organization then deploys dollars in response to those preferences.
And HomeKeeper, a data-management system for affordable-housing organizations, now pools data and enables a view of what is working across different locations, types of home buyers, and approaches.
Individual efforts like these are pointing the way to a better future for evaluation and the use of data.
On their own, however, they may not be enough. Instead, grant makers and nonprofits must come together to imagine new possibilities and improve the system. This will require real experimentation and investment — not just within individual organizations but also across multiple nonprofits, and not just from evaluators and evaluation directors but from everyone involved, including program staff members and executive leaders.
It will be real work. Yet doing nothing will lead to a future we don’t want. Imagine, for example, what monitoring, evaluation, and learning systems might look like if we:
- Changed reporting requirements so that grantees collect, monitor, and share data that is meaningful and useful for grantees and constituents, first and foremost.
- Provided incentives to a group of grantees working on the same cause but using different approaches so they can compare evaluation findings to learn what really works to bring about change and what doesn’t.
- Built an effective data infrastructure to acquire feedback from people served by nonprofits so they could help shape decisions that directly affect them.
Through our Re-imagining Measurement project, we have identified more than 30 calls to action like these that grant makers can experiment with, individually or together, to set nonprofit evaluation on a path toward a better future. It is now in our hands to transform our organizations and the broader mission of social change by focusing on the best ways to measure, evaluate, and, most important, learn to do better. But we will need to do more than simply hope for a better future to come.
Rhonda Evans, Gabriel Kasper, and Tony Siesfeld wrote the “Re-imagining Measurement” report issued by the Monitor Institute by Deloitte.