
Low Graduation Rates, Cont.

Over the past week, my colleagues and I have been fortunate to receive quite a bit of feedback—from academics, educators, policy makers, journalists, and the like—about our recent AEI report “Diplomas and Dropouts: Which Colleges Actually Graduate Their Students (And Which Don’t).” Two issues have been raised time and again and have generated the most discussion: the treatment of transfer students and overall data quality. As a result, I thought it would be worth spelling out more explicitly how these factors played into our analysis and why better data are needed if researchers are to draw a complete and representative picture of college graduation rates in America.

First, it’s important to remember that the “Student Right to Know” (SRK) graduation rate used in the report does not count transfer students who leave their first school and graduate from another institution within six years. We know that upwards of 60 percent of undergraduates will attend more than one institution during their college career. This is a major shortcoming of the SRK data, as high transfer rates will invariably depress graduation rates. Without statistics that follow individual students, the best we can do is account for differences in transfer rates. Unfortunately, while the U.S. Department of Education provides schools with the opportunity to report this information and encourages schools with a “transfer mission” to do so, only some schools actually comply. In our sample of 1,385 schools, for example, only about 490 reported their transfer rates. Without data for all schools, we can’t begin to account for the confounding effect of transfers. Moreover, even when schools do report transfer rates, we’re unable to determine the ultimate success of these students—so it’s still impossible to compute an institutional graduation rate even if we know n students transferred.
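To make the arithmetic concrete, here is a minimal sketch, using entirely hypothetical numbers (nothing here is drawn from the report or from IPEDS), of how the SRK rate understates student-level success when transfers who graduate elsewhere are counted as non-completers:

```python
# Hypothetical cohort at a single institution; all figures are invented
# for illustration and are not from the AEI report or IPEDS.
cohort = 1000          # first-time, full-time entering students
graduated_here = 450   # completed at the first institution within six years
transferred_out = 300  # left for another institution; their outcomes are unknown

# The SRK rate counts only completions at the first institution,
# so every transfer counts against the school.
srk_rate = graduated_here / cohort
print(f"SRK graduation rate: {srk_rate:.0%}")  # 45%

# If (hypothetically) half of the transfers eventually earned a degree
# elsewhere, the student-level success rate would look quite different.
assumed_transfer_completion_share = 0.5
student_success_rate = (
    graduated_here + transferred_out * assumed_transfer_completion_share
) / cohort
print(f"Student-level success rate (assumed): {student_success_rate:.0%}")  # 60%
```

The gap between the two figures grows with the transfer rate, which is why schools with a heavy transfer mission look worse under SRK, and why missing transfer-rate data make cross-school comparisons so difficult.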

Second, we’ve underscored that the failure of schools to report transfer rates points to a broader problem of data quality. We’ve already received a number of calls from colleges and universities claiming that their data, as reported to the National Center for Education Statistics (NCES), are incorrect. All of the schools that have contacted us have done so because the rate reported to NCES is lower than their “actual” rate. One institution even thanked us for identifying the error in NCES’s Integrated Postsecondary Education Data System (IPEDS) database. But artificially high graduation rates pose an equally significant problem. Indeed, as a journalist at a major newspaper found after a few phone calls, officials at Arkansas Baptist College, which boasts a 100 percent graduation rate in the database, admitted that the school’s data were reported erroneously. Other schools in the sample have reported exceptionally large increases in graduation rates that raise questions about reliability. The number of schools with artificially high rates is unknown, since those institutions are naturally reluctant to call and request a correction. And even when “called out” for an inaccurate rate, at least one institution unapologetically admitted the error (to a national newspaper); with so few consequences for incorrect reporting, there is little reason to do otherwise.

It is worth noting again that these data are congressionally mandated and collected by a major federal agency. Would we tolerate such inaccuracies from the companies that report workplace injury rates to the Occupational Safety and Health Administration, or from the state agencies that administer the food stamp program? More relevant is the ongoing effort to produce cohort-level high school graduation rates. Increased scrutiny, concerted effort, and more funding from NCES to ensure accurate data collection have improved estimates of high school graduation rates and will produce even better ones in the near future.

As researchers, we are largely at the mercy of the data that are available, and clearly these data are imperfect. Our report spells out such caveats, and our results are subject to all of the limitations voiced above. The apparent inaccuracies in IPEDS, however, and the limitations of the SRK graduation rate more generally, suggest that when institutions are required to report outcomes that have little impact on their stature, accreditation, or enrollment, there is little incentive for accurate reporting or vigilant monitoring of the data. Unless these data are pulled from the shelf, dusted off, and displayed in a user-friendly way, inaccuracies will not be corrected, and policy makers will not be pressed to come up with more comprehensive indicators of performance. Getting this information, and other indicators of school success, into the hands of consumers is a first step in a larger effort to ensure college and university accountability.

Andrew Kelly is a research fellow in education policy studies at the American Enterprise Institute.
