Why Graduation Rates Matter—and Why They Don’t

 

Bryan Cook and Terry W. Hartle

 

Graduation rates have become a key component of discussions about accountability in higher education. Federal graduation rates have been calculated for more than a decade, but for most of that time nobody paid much attention when the data were released. That’s no longer the case.
 
The recent interest in graduation rates (a phrase sometimes used interchangeably—and incorrectly—with attainment rates and completion rates) began with the Commission on the Future of Higher Education, also known as the Spellings Commission, which called for “dramatic” changes in higher education to address the “persistent gap between the college attendance and graduation rates of low-income Americans and their more affluent peers.”
 
President Obama upped the ante in 2009 when he committed the nation to once again having the highest proportion of college graduates in the world and promised to provide the resources needed to get there.
 
The focus on graduation and completion rates goes beyond words in a report or bold goals stated by the president. Almost every new initiative, research report, or news story on students and higher education somehow relates to graduation rates.

In recent years, we’ve seen a steady stream of reports and other efforts that call attention to graduation and completion rates. For example:
 
  • A 2010 report by Fastweb and Maguire Associates found that among 23 criteria of institutional quality, high school seniors chose graduation rates as the fifth most important indicator of quality, ahead of graduate school placement, a rigorous core curriculum, the existence of an honors program, and college rankings in U.S. News & World Report and other college guides.
  • The Complete College America Alliance of States was created in 2009 and thus far, 22 states have promised to develop specific plans to improve their college completion rates.
  • The National Governors Association announced the Compete to Complete initiative, which focuses on increasing the number of students in the United States who earn college degrees and certificates.
  • During the 2010 National Collegiate Athletic Association (NCAA) March Madness basketball tournament, U.S. Secretary of Education Arne Duncan not only criticized the graduation rates of student athletes but also suggested that NCAA teams that fail to graduate 40 percent of their players should be ineligible for post-season competition. He raised the proposal again during the 2011 tournament.
In an environment in which accountability and transparency have become watchwords for virtually anything government does, it’s easy to understand the appeal of graduation rates. They are an obvious, commonsense indicator of how well an institution is serving its students. After all, what better evidence could we have than the percentage of those students seeking a degree who actually receive one?
 
Moreover, common sense suggests a graduation rate can be a simple, standard measure that is easy to calculate. The federal formula itself—the percentage of students who enter an institution in a given year and leave with a certificate or degree some number of years later (historically, six years for four-year schools and three years for two-year schools[1])—is straightforward enough.
 
Finally, a single graduation rate is very easy to understand and little interpretation is required: A high number is good and a low number is bad.
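
To make the arithmetic of the federal formula concrete, here is a minimal sketch in Python. The cohort records, field names, and the federal_graduation_rate helper are hypothetical, invented purely for illustration:

```python
# Minimal sketch of the federal formula: the share of an entering
# cohort that finishes within 150 percent of normal time to degree.
# All data and names below are hypothetical, for illustration only.

def federal_graduation_rate(cohort, normal_time_years, multiplier=1.5):
    """Percent of the cohort completing within the allowed window
    (six years at a four-year school, three at a two-year school)."""
    window = normal_time_years * multiplier
    completers = [s for s in cohort
                  if s["completed"] and s["years_to_degree"] <= window]
    return 100 * len(completers) / len(cohort)

# A hypothetical entering class at a four-year institution.
cohort = [
    {"completed": True,  "years_to_degree": 4},
    {"completed": True,  "years_to_degree": 6},
    {"completed": True,  "years_to_degree": 7},    # outside the window
    {"completed": False, "years_to_degree": None}, # never finished
]
print(federal_graduation_rate(cohort, normal_time_years=4))  # 50.0
```

The arithmetic is trivial; the hard part, as the next section shows, is deciding who counts.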
 
An Uncertain Business
Unfortunately, things that look too good to be true often are. It turns out that calculating and interpreting graduation rates is far more complex and analytically challenging than it should be. As a result, the numbers themselves may, despite their apparent simplicity, provide a seriously misleading picture of how well an institution is doing.
 
The challenges of establishing a standard, comparable graduation rate have been present ever since Congress first mandated that institutions calculate them. While the Student Right-to-Know Act[2] was passed in 1990, it took five full years—and several false starts—before the U.S. Department of Education was able to issue final regulations setting out how schools should calculate and report their graduation rates.

It turns out that determining who is and is not included in the calculation is a pretty uncertain business.
 
Both national and institutional graduation rates are calculated using data from the Integrated Postsecondary Education Data System (IPEDS).[3] But IPEDS-produced graduation rates should come with a big, clear warning label: “BECAUSE MANY STUDENTS ARE EXCLUDED FROM THIS CALCULATION, GRADUATION RATES MAY BE SIGNIFICANTLY INACCURATE.”

The IPEDS calculation excludes students who begin college part time, who enroll mid-year, and who transfer from one institution to another. Put another way, IPEDS counts only those students who enroll in an institution as full-time degree-seekers and finish a degree at the same institution within a prescribed period of time.
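
As a rough sketch of how these exclusions narrow the cohort, consider the following Python fragment; the field names and helper functions are hypothetical and are not drawn from actual IPEDS code:

```python
# Hedged sketch of the cohort restriction described above. Field
# names and helpers are hypothetical, not actual IPEDS definitions.

def ipeds_cohort(entering_class):
    """Keep only the students the federal calculation counts:
    first-time, full-time, fall-entering degree-seekers."""
    return [s for s in entering_class
            if s["full_time"]        # part-time starters excluded
            and s["first_time"]      # transfer-in students excluded
            and s["fall_entrant"]    # mid-year entrants excluded
            and s["degree_seeking"]]

def same_institution_rate(cohort):
    """A student who transfers out and graduates elsewhere still
    counts as a non-completer at the original institution."""
    if not cohort:
        return 0.0
    finishers = [s for s in cohort if s["graduated_here"]]
    return 100 * len(finishers) / len(cohort)
```

Every filter in ipeds_cohort removes real students from the denominator before the rate is ever computed, which is precisely the problem the following paragraphs describe.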
 
The definition may have been appropriate for higher education institutions in the mid-1980s, when traditional students were a much larger share of enrollments. But the rapid increase in nontraditional enrollments means that the current 25-year-old definition excludes a huge number of students. According to American Council on Education calculations, about 61 percent of students at four-year schools and 67 percent of those at two-year institutions are simply excluded from the calculation.
 
In addition, IPEDS graduation rates have historically been limited to students who complete a degree within six years at a four-year institution or within three years at a two-year school.[4] To put this in more straightforward terms, consider President Barack Obama, House Speaker John Boehner, and former Alaska governor Sarah Palin. One of the few things these three individuals have in common is that all are, according to the federal definition, college dropouts.
 
Of course, other sources for graduation rates include the students IPEDS does not and can also look beyond the timeframe currently mandated in the Higher Education Act. The best-known sources are the National Student Clearinghouse (NSC) and data systems maintained by state governments. However, all of these sources have serious limitations. Because participation in NSC is voluntary, it does not encompass student data for all postsecondary institutions—and it too excludes some students.
 
State data systems, for their part, are largely limited to public institutions and to students who attend school in state. So the data usually provide little information about graduation rates at private colleges and universities and do not track any student who transfers to a school outside the state system.

While calculating graduation rates is more complex than widely perceived, interpreting them is an even bigger challenge. As previously stated, one of the reasons graduation rates are frequently used as a measure of institutional effectiveness is the perception that they are comparable. If University A has a graduation rate of 89 percent and University B has a graduation rate of 51 percent, then it seems sensible to assume University A is more effective at graduating its students. But what if University A is highly selective with less than 30 percent of students qualifying for a Pell Grant, while University B is not selective and nearly 80 percent of its students receive federal need-based aid?
 
We pride ourselves on being a nation of diverse postsecondary institutions with open access to a college education, but we also want to compare the graduation rates of schools that serve low-income, disadvantaged students, who are often less academically prepared, to the rates at highly selective institutions. Until graduation rates can be normalized to account for the diversity of students’ academic and economic backgrounds, it is difficult to know how a graduation rate at one school compares to the rate of any other institution. Perhaps the best we can do is hope that, in comparing graduation rates, policy makers and the media will focus on institutions that have similar student characteristics.
 
Doing the Math
There are no simple solutions to the challenges presented by calculating graduation rates. One step considered during the Bush administration was the creation of a national student database that would have made it possible to track student behavior over an extended period of time, regardless of where they enrolled. Aside from the technical challenges of creating such a system, the idea raised serious privacy concerns, and Congress enacted legislation forbidding the implementation of such a system.
 
The National Student Clearinghouse system might provide a basis for a non-governmental solution, but at present it does not collect all the information needed to make accurate calculations. Moreover, not all colleges and universities participate in the Clearinghouse, and those that do retain control over the use of the data. In short, even if the Clearinghouse had the requisite data, it could not release them without permission from the institutions that participate.
 
The Obama administration has proposed that state governments create education databases that can track students from kindergarten through college, collecting identical information so that it would be at least theoretically possible to follow students who move across state lines. Some foundations are supporting this effort, but the enormous complexity involved in the creation and then merging of multiple state databases suggests this system will, at best, require a decade of development.
 
So given their complexity, do college graduation rates really matter? In fact, they do because in the eyes of the public, policy makers, and the media, they provide a clear, simple, and logical—if often misleading—number. Indeed, a recent study from the American Enterprise Institute for Public Policy Research compared investing in a college education without knowing an institution’s graduation rate to buying a car without knowing its mileage. This is not exactly a fair comparison, as under federal standards, car mileage is precisely calculated by putting all cars through an identical set of tests in a carefully controlled setting.

When graduation rates are determined, little is controlled and much is excluded or ignored. At best, graduation rates are—for the vast majority of schools—an estimate based on a relatively small number of students. And therefore, as they always say in the car ads, “Your mileage may vary.”
 
 
Bryan Cook is director, Center for Policy Analysis, American Council on Education, and Terry W. Hartle is senior vice president, Division of Government & Public Affairs, ACE.
 
 
Notes:
1. The 2008 Higher Education Opportunity Act expanded the window for graduation rates to 200 percent of normal time to degree (eight years for four-year institutions and four years for two-year institutions).
2. The Student Right-to-Know and Campus Security Act requires colleges and universities to collect and report data on various subjects, including graduation and retention rates.
3. IPEDS is a system of interrelated surveys that collects information on all colleges and universities that receive Title IV funding. The surveys are conducted annually by the U.S. Department of Education’s National Center for Education Statistics (NCES).
4. Although the 2008 Higher Education Opportunity Act expanded this window to eight years, an increasing number of students work to afford college (some even stopping out for significant periods of time), so it is conceivable that more and more students may be taking longer than eight years to complete a degree.