Andy notes that college administrators and faculty are not big fans of the magazine rankings, unless, of course, they're at or near the top of the list. Educators, as Andy acknowledges, have complaints about the "purpose, methodology or integrity" of the rankings. Even legislators and parents have joined in the chorus of concern that "rankings have become a kind of tune that schools have to dance to, whether they want to or not." There's no question that tuition dollars, as Andy mentions, are being channeled into publicity stunts of one kind or another as schools jockey for the spotlight. Too many such dollars, in my opinion.
Then Andy gets to the good part:
I can understand the concern. But speaking as a prospective purchaser of higher-education services (knock wood), I think the criticism misses the point.
College is what economists like to call an "experience good." The term simply means you have to consume it - experience it - to really know what you're getting.
And by then you've already bought and paid for it.
***
You can't easily compare prices - how do you know what's a bargain, or what might be worth paying up for? So consumers seek help.
***
That means consumers' need for help - and the demand for magazine rankings, individual admissions counselors, and who knows what else - is probably here to stay.
I don't think this diminishes or "commodifies" higher education. It simply makes more information available to all sides, creating more transparency and efficiency.

To the extent that Andy is arguing that information dissemination is important, I totally agree. To the extent that he argues that what's currently available in terms of rankings gets the job done, I totally disagree.
College students and their parents need information. Rankings provide little of what matters, distort some of it, and then mix it into a brew according to arbitrary weights to generate some sort of number that has no genuine meaning. The attempt to rank college football teams through a complex formula is another example of a justifiably criticized, and yet far more refined, attempt to sort data.
Let's turn to where the rankings go wrong, and why the product that the magazines and other reviewers are making available is of such dubious quality. I'm not going to rank the rankings, because I think all of them are inadequate. Consider this a consumer review of consumer reviews.
First, the rankings omit important information. What information matters? It depends on what the applicant wants. Someone interested in a chance to showcase his or her talents for a professional sports league probably doesn't care about the dollars of research grants obtained each year by the physics department. Someone headed for a career in astronomy probably has little interest in the number of prizes won by the English literature faculty. I'm not certain either of those statistics, important, of course, to youngsters interested in physics or English literature, finds its way into the rankings data. One of the rankings that gets right to the point is the "party school ranking," which is described and analyzed in articles such as this one from the Chronicle of Higher Education. I'm sure both students and parents can find value in that ranking, but perhaps for different reasons.
Other useful information might include the range of salaries and the average salary for graduates who major in each discipline at the school, measured at 1, 5, and 15 years after graduation. I've seen such statistics for particular schools and programs, though I'm not sure how public they are. True, students going to college for some sort of holistic experience might not be interested in these figures, but parents and students who are contemplating an investment of $150,000 or more in four years of education just might want to see what sort of economic return awaits. How about information revealing the acceptance rates experienced by a college's graduates when they apply to specific graduate schools?
Perhaps information on campus security, which the government requires colleges to publish but which some schools allegedly doctor, would be important. What about ease of transportation to and from home or jobs? Would parents and prospective applicants be interested in statistics on the downtime of the university's information systems?
Second, some of the rankings are based on information acquired in ways that generate misinformation. Surveys have drawbacks because they are so subjective, but even if a subjective opinion of reputation is useful, the surveys need to be refined. Having been on the "surveyee" side several times, I quickly concluded that the questions were poorly designed, as I explained several years ago in Ranking Tax Programs. It's as though a car magazine ranking 2006 model automobiles included places on the survey to rank Ramblers, Packards, and Studebakers. Garbage in, garbage out.
Third, rankings acquire bad information. Stories abound about schools providing doctored information; it's an open secret that the numbers get manipulated. Some law schools, for example, restrict first-year admissions so that their "numbers" look better, and then admit scores of transfer students in the second year to make up for the lost tuition. What the rankings tell us about such schools isn't a representation of reality. It's spin. Spin is useless. No consumer should pay for it.
Fourth, and this is perhaps the worst aspect of the rankings, someone decides that some statistic is worth 3.484484 percent of the total weighted score used to rank. Some other factor is worth 9.939203 percent. How do we know that? We don't. It's a number grabbed out of the air. The problem is that for some applicants, the crime rate is important, for others, the number of Nobel Prizes per decade won by the faculty matters, and for still others, the number of bars and taverns within walking distance of the campus has value. Relying on some editor's weights is about as good as relying on some neighbor's opinion. As Andy points out, "In the old days (before U.S. News), they got it from friends, or family, or high-school guidance counselors. Or they looked for signals, such as ivy-covered buildings or winning football teams, that seemed to connote quality." However inadequate the old days were, the new days are no better. At least in the old days we knew our friends and family. How many of us know the new days' rankings editors, those anonymous folks hidden away who can't be asked the follow-up questions we'd put to friends and family?
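To see just how much the editor's choice of weights drives the outcome, consider a toy example. Everything in it is invented: two hypothetical schools, two unnamed factors scored on a 0-to-100 scale, and two weightings that are equally arbitrary. The identical data produces opposite "winners" depending solely on which weights get picked.

```python
# Two hypothetical schools scored on two invented factors (0-100 scale).
factor_scores = {"School A": (90, 70), "School B": (70, 95)}

# Two equally arbitrary weightings applied to the same underlying data.
for w1, w2 in [(0.6, 0.4), (0.4, 0.6)]:
    composite = {name: w1 * f1 + w2 * f2 for name, (f1, f2) in factor_scores.items()}
    winner = max(composite, key=composite.get)
    print(f"weights ({w1}, {w2}): {winner} wins  {composite}")

# weights (0.6, 0.4): School A wins (82.0 vs 80.0)
# weights (0.4, 0.6): School B wins (85.0 vs 78.0)
```

Nothing about either school changed between the two lines of output. Only the editor's taste did.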
What would be a great service for prospective students and their parents is the sort of thing we get with respect to other goods and services. Surveys of a college's recent graduates, with the sort of scoring and commentary we find when looking to purchase cars or computers, would be far more interesting and valuable. Why can't the ranking folks simply provide alphabetized lists of educational institutions, along with the underlying data, rather than burying the data behind formulas and weights? Perhaps adding a variant in which the schools are listed geographically or by tuition would be helpful. Hah, why not make the information available in a manner that lets parents and students enter their own weights for each factor? Why not, therefore, separate the reporting of information (news) from the weighing of the factors (editorializing), and permit (and encourage) applicants and their parents to do some thinking for themselves?
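Here is a minimal sketch of that do-it-yourself weighting idea. The school names, factor names, and values are all hypothetical, and each factor is assumed to be pre-scaled to a 0-to-1 range with higher meaning better (so "net_price" below really means affordability). The publisher supplies the data table; the reader supplies the weights.

```python
# Hypothetical data: each factor pre-scaled to 0-1, higher = better.
schools = {
    "Alpha College":   {"graduation_rate": 0.92, "net_price": 0.40, "campus_safety": 0.90},
    "Beta University": {"graduation_rate": 0.75, "net_price": 0.85, "campus_safety": 0.60},
}

def personal_ranking(schools, weights):
    """Score each school with the reader's own weights, not an editor's."""
    scores = {
        name: sum(weights[factor] * value for factor, value in factors.items())
        for name, factors in schools.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# A cost-conscious family and a safety-conscious family rank the same data differently.
print(personal_ranking(schools, {"graduation_rate": 0.2, "net_price": 0.6, "campus_safety": 0.2}))
print(personal_ranking(schools, {"graduation_rate": 0.3, "net_price": 0.1, "campus_safety": 0.6}))
```

The first family sees Beta University on top; the second sees Alpha College. That's the point: the news (the data) stays fixed, and the editorializing (the weights) belongs to the applicant.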
I suppose that what we have is better than nothing. That argument is the canard every rankings outfit hauls out as the ultimate defense of its product. But having something that is better than nothing is no reason to abandon the effort to offer something better. In this instance, the something better can be generated not by adding features, but by stripping away the data manipulation and presenting unadulterated information. Then we truly would have something that provides what the current, flawed rankings fail to deliver.