This afternoon, AEI’s Center on Higher Education Reform, in partnership with the American Council of Trustees and Alumni, will host a distinguished panel to discuss a deeply important, but too often forgotten, subject: higher education accreditation.
Accreditation is the process by which a college or university is evaluated and its quality vetted. It is the proverbial seal of approval that shows potential students that the school is of sufficient quality, and protects students from nefarious “diploma mills” that take tuition dollars but offer worthless degrees in return. This seems straightforward enough. After all, most industries have some sort of regulatory apparatus (say, for new drugs or food), and America’s colleges and universities are among the best in the world, so something must be working.
And yet beneath this rosy surface, there are significant dysfunctions that suggest that accreditation is failing consumers and stifling innovation. As former University of Colorado president and US Senator Hank Brown says in a new essay released in conjunction with today’s event, “When it comes to federal funding of higher education, the government’s approach to quality assurance and consumer protection is a public policy and regulatory failure by almost any measure.”
There is much to say for Brown’s position. Accreditation started in the late 19th century, when a group of northeastern colleges got together to define what exactly a bachelor’s degree meant and to share insights for mutual self-improvement. That tradition has changed little over the years. Today’s accrediting agencies are staffed by members of the same institutions they regulate, meaning accreditation remains a collegial peer review process.
In addition to the question of who reviews these schools, the “what” they are reviewing must also be scrutinized. Accreditation is based largely on inputs (for example, the number of books in a school’s library or a university’s mission statement) rather than student outcomes. As a result, even severely underperforming schools rarely lose accreditation. Brown documents several institutions that graduate fewer than 20% of their students within six years, yet remain accredited. New America’s Kevin Carey offers a similarly harrowing account of just how difficult it was to remove accreditation from Southeastern University here in Washington, DC.
Perhaps most importantly, the federal government has tied federal funding to accreditation. Students can only use federal student loans at accredited institutions, and to be accredited, an institution must offer full degree programs, not just individual courses. In practice, this means accreditation risks discouraging innovation in higher education. Most new providers can’t survive without federal funding, so they do their best to mimic traditional colleges—emphasizing lecture-style instruction and offering full degree programs. Students are thereby left with fewer options.
Massive open online courses (MOOCs), the next big thing in higher ed, can’t be accredited—they are individual classes, not full degrees. Want to use student loans with a provider like StraighterLine, which offers introductory college courses for as little as $99 per month, and then transfer those credits to an established institution to speed up graduation? Tough luck. In an era where the value and escalating costs of a college degree are repeatedly called into question, failed public policies limit cost-effective innovations.
Defenders of accreditation argue that the peer review process is critical to retaining institutional autonomy and academic integrity. It’s a fair point—after all, academic freedom and self-governance are hallmarks of the American higher ed system—but it’s worth asking whether the system could better police underperforming institutions while allowing for new providers. To help make sense of the debate, join us at 4:30pm for what promises to be a lively discussion on a crucial topic.
RSVP and Livestream here.