College Programs Fail to Meet Their Goals
September 6, 2001
Each year, foundations pour millions of dollars into programs designed to help minority and low-income students graduate from high school and make it through college. But many of those programs produce mediocre results, at best, and few grant makers are aware that college-preparatory programs are far from universally successful.
To be sure, college-preparatory programs provide great human-interest stories and photo opportunities for grant makers. But data on the long-term effectiveness of such programs are often misleading or meaningless, when statistics exist at all. Some grant makers join their grantees in claiming, for instance, a “90 percent success rate” at getting young people into college. But a more accurate claim might be, “All of the students who were specially selected are going to college.” How the students were selected for college-preparation work, and what happened to those who weren’t chosen, may be an entirely different matter.
A grant maker might counter by saying that the participants were not top students when they were selected in the ninth grade. That leaves unanswered how many students from the foundation’s original sample stuck with the program at all; the claimed success rate may count only those who made it through to the senior year and got into college.
In a test of that question, an independent review of a program run by the Fulfillment Fund, a charity in Los Angeles that receives significant donations from Hollywood celebrities, found that as few as 29 percent of the original participants remained in the program at the time of graduation. Studies of other programs have produced similar results.
A recent U.S. Education Department report on Upward Bound, a 36-year-old federal program that helps prepare kids for college, says that the program can make a real difference to students who are struggling academically, but tends to let those students slip away before they get to college.
Patricia Gándara, a professor of education at the University of California at Davis, estimates that as many as two-thirds of low-income and minority students enrolled in college-preparation programs leave before completing the work. The most-effective programs work with borderline students and make deliberate efforts to keep them on the college track, such as closely monitoring students’ participation and ensuring that they are in the right courses, Ms. Gándara says.
Besides being skewed by high attrition rates, data on college-preparation programs are subject to what researchers call selection bias: in this case, the exclusion of students who could benefit the most from remedial work. Even in programs that claim to focus on students who otherwise would not go to college, program administrators are often tempted to select students who already are likely to succeed, according to William G. Tierney, a professor of education at the University of Southern California. This makes programs look more effective than they really are.
Despite those challenges, college-preparatory programs can play a key role in improving the chances that students who face academic and economic challenges will get into college and graduate. How, then, might foundations become more helpful to the people who run such programs — and, ultimately, to the students that the programs are intended to serve?
Above all, foundations must base their grant decisions on sound data. Indeed, rather than relying on intuition in shaping college-preparatory programs, grant makers must help to identify existing research, find ways to fill in gaps in the data, support grantees who genuinely seek to evaluate and improve their programs, and provide information to charity officials and others in ways they can readily understand and apply.
As part of their efforts to glean accurate data, grant makers must establish clear ways to measure students’ progress. Of the students who enter a given program, how many complete it successfully — however “success” is defined? How many actually get to a college campus — and not just send transcripts or enroll in classes? How many graduate? What types of colleges are students entering, and what courses are they taking, and passing? Are they entering as full-time or part-time students, and what are the historical trends? How are the overall rates of college attendance changing at high schools served by college-preparatory programs?
The GE Fund, in Fairfield, Conn., has established College Bound, a 15-year, $30-million program in schools where GE has operations. The program has recently become more rigorous in its data collection. It now incorporates information on how many students stay in college-preparatory programs alongside its basic measurement of how many move on to higher-education institutions.
In addition, the GE Fund is investing $1-million in a major evaluation system for College Bound over the next five years. The system will establish standard data-collection methods, support participating schools’ evaluation efforts, and intensively study a portion of the institutions that offer College Bound programs.
Likewise, the James Irvine Foundation, in San Francisco, is putting data collection and evaluation at the core of its efforts to make grants for college-preparation projects in California’s Central Valley, the state’s most educationally depressed region.
For example, in supporting College Summit, a summer college-preparatory program for students in the academic middle, the foundation will use an evaluation to assess whether the program contributes to a college-going culture for the whole school in addition to helping the selected students.
Nationally, grant makers and a coalition of education groups have joined in the Pathways to College Network, an effort to improve the process of gathering data on college-preparation programs and disseminate the information to educators, policy makers, community organizations, grant makers, businesses, and others.
Those are all promising trends, and they should not be taken lightly. Still, grant makers have the privileges of time, attention, independence, and cash. And with every grant made, foundations are not tinkering with abstract theory but affecting the well-being of the nation’s young people.
With these privileges comes the responsibility to apply the resources for greatest effect, investing in research and evaluation, determining what programs really work, and focusing on proven approaches.
Grant makers must remember that simplistic data don’t mean real results.
Roger Nozaki is a program manager at the GE Fund, and Bob Shireman is program director at the James Irvine Foundation.