Innovative Nonprofit Programs Need Rigorous, Conclusive Evaluation
October 1, 2009 | Read Time: 5 minutes
LETTERS TO THE EDITOR
To the Editor:
Although we endorse Lisbeth Schorr’s call for a broad array of evaluation techniques to identify promising social programs (“To Judge What Will Help Society’s Neediest, Let’s Use a Broad Array of Evaluation Techniques,” Opinion, August 20), we believe that her rejection of a central role for randomized controlled trials is unwarranted.
Our reasoning — to quote a recent National Academy of Sciences recommendation — is that evidence of effectiveness generally cannot be considered definitive without ultimate confirmation in well-conducted randomized trials, “even if based on the next strongest designs.”
In fact, there are many examples in social policy and medicine of interventions that appeared highly promising in preliminary studies but were subsequently found not to work in well-conducted trials (e.g., in social policy, many case management or home-visitation programs for low-income families; in medicine, hormone replacement therapy for postmenopausal women).
Ms. Schorr’s advocacy of innovative programs without conclusive evaluations is the approach U.S. social policy has largely followed, and to what end?
New programs, introduced with great fanfare as able to produce dramatic gains, have come and gone; no one knows for sure which were effective; minimal progress has been made. The respected National Assessment of Educational Progress long-term trend study, for example, shows very limited improvement in K-12 educational achievement since the 1970s and little reduction in the gap between minority and white students since the 1980s. Similarly, the official U.S. poverty rate, 12.5 percent, is slightly higher than in 1973. And government data show that adolescent use of drugs or alcohol, despite a recent decrease, now stands at approximately the same level as in 1990.
Randomized controlled trials offer a way to end this spinning of wheels. Contrary to Ms. Schorr’s statement that they cannot evaluate complex programs “with multiple interactive components” and can only be applied to “single, isolated remedies,” a number of trials have in fact evaluated complex programs and produced valid, actionable evidence.
Clearly, there is a critical role for both randomized and nonrandomized evaluations in evidence-based policy. Nonrandomized studies in social policy and medicine play an essential role in developing and identifying interventions that are well implemented, highly promising, and therefore ready to be evaluated in more definitive randomized studies. And in cases where a randomized controlled trial is not possible (which are far fewer than Ms. Schorr suggests), policy makers need to rely on the results of well-conducted nonrandomized evaluations.
This is not an “either-or” dilemma; many types of study designs, playing complementary roles, are needed to build a body of scientifically valid knowledge about what works in social policy.
Jon Baron
President
Coalition for Evidence-Based Policy
Washington
***
To the Editor:
Lisbeth Schorr raises common concerns about how “evidence-based” requirements will be interpreted by funders and donors.
At Innovations for Poverty Action, we share Ms. Schorr’s desire to avoid “squandering money … on efforts that do no good” but strongly disagree with her assessment that a newfound emphasis on rigorous evaluation methods will inhibit innovation and effectiveness in social policy.
The reason that we and so many others strongly support randomized trials is that they offer one of the best ways to understand what works and help successful programs scale up.
Ms. Schorr’s speculation both overestimates what is expected of randomized trials and underestimates what they can accomplish.
Contrary to Ms. Schorr’s assertion, few would argue that randomized trials should be applied in every evaluation of every program. Rather, advocates support the strategic use of the trials when appropriate to provide evidence on whether and how to scale up programs with potential to improve people’s lives.
The idea that the trials are the “gold standard” necessarily implies that there are other standards and approaches to inform program evaluations. Ms. Schorr even suggests that some funders would prefer to leave needy people in the lurch rather than invest money in a field with few trials. If this were true, many programs today would not be funded.
No one suggests that we immediately cut off programs that lack evidence. But it’s critical that we invest in understanding what truly works to make sustained progress on solving social problems. Policy makers and funders should move forward with the best evidence available today, while at the same time working to improve the evidence base itself.
Ms. Schorr’s second assertion, that trials are limited to measuring “single, isolated remedies,” is also incorrect. In fact, researchers working in the U.S. and abroad have been able to test complex packages of interventions and dynamic processes using creative study design.
Finally, far from inhibiting innovation, randomized trials can provide a way for nonprofit groups and policy makers to take a chance on new ideas and gather evidence through pilots before making massive spending decisions.
Randomized trials allow innovators to prove that their ideas get results without having to rely on fads or rhetoric. There is no better argument for a revolutionary idea than the ability to present strong, clear evidence that it works.
We, like Ms. Schorr and everyone else in this sector, are driven to help those in need. We believe that these people deserve not just good intentions but our best efforts to ensure that the help we are offering will actually do them good. That’s why the “evidence-based” movement is so important.
Delia Welsh
Managing Director
Meredith Startz
Project Associate
Innovations for Poverty Action
New Haven, Conn.
***
To the Editor:
Lisbeth Schorr describes a world where funders direct their resources to programs with evidence to support their effectiveness. I’m wondering what planet Ms. Schorr is living on. I’d like to move there.
On a more serious note, Ms. Schorr warns of reaching epistemological nihilism — paralysis induced by the inability to prove anything effective. Not only is this speculative; it is demonstrably false. Even under the highest standards of evaluation methodology, there are already many examples of “evidence-based” programs from around the world. The reason evidence exists for few programs in many areas is that the fear of epistemological nihilism has obscured the fact that we currently live in evaluation nihilism — paralysis induced by the fear and difficulty of evaluation.
I, for one, believe that those who need our help also deserve our best efforts to identify what really works. Avoiding evaluation because it is difficult (or worse, because it might show that our best theories don’t work) isn’t good enough.
Timothy N. Ogden
Editor-in-Chief
Philanthropy Action
West Chester, Pa.