Ted Seto, who extracts the tax faculty rankings from the SSRN data, shared some perspectives and asked some questions. He noted that Tax Management portfolios are not held in the same high regard in some portions of the academy as are articles. He's right. That's one of my pet peeves about the rankings game and the evaluation of law faculty. Treating law review articles as superior to all other forms of publication is an anachronistic remnant of a dying elitism. Though there are good, but easily countered, arguments that some forms of tax publication, such as blogs or listserv postings, aren't as carefully reviewed as are law review articles, it makes no sense to consider treatises, portfolios, and articles in practitioner journals such as Tax Notes and Journal of Taxation to be inferior to something published in a traditional academic law journal.
Ted also pointed out that court citation counts don't interest most academics. Again, he is quite right. The reason probably is that many, perhaps most, law review articles are not cited by the courts, which is in and of itself telling. Why be interested in something that sends an unwelcome message?
Ted also pointed out that reputational surveys, which lie at the heart of some rankings, such as the ones done by US News, are biased by extraneous factors. If a school's athletic teams do well, somehow that translates into academic prominence. Academics on the West Coast claim that East Coast schools do better in the rankings because of their location, press coverage, and similar factors.
Ted asked me: "What objective proxy measure would you use instead? Or would you really prefer to stick with reputational surveys?" Here's my reply:
Ted,

Mike McIntyre made several observations. Though he suggested that they might be "perhaps useless," I find them helpful, perhaps because I agree with them to some extent. One, for students selecting law schools, the US News rankings have credibility, a sine qua non of any rating system. Two, a rating system should reflect its audience, but it might not make sense to invest resources into a system that helps faculty making lateral moves pick their destination school. Three, it's difficult for schools to make significant moves in the U.S. News rankings but it probably is a bit easier than it was before U.S. News rankings appeared, when national reputations were "nearly immutable." Four, there may be some East Coast bias in the rankings, but West Coast schools are beginning to attract "top students and top scholars" and have "caught up a lot quicker" because of U.S. News rankings. Five, bar passage and job hunting success might be sensible factors in theory, but in reality few schools' "actual worth fluctuates anywhere near as much" as those factors do. Six, counting books and articles says little about quality, but because no one can agree on how to measure quality, we end up using things that can be counted, though counting is sufficiently flawed that reputational surveys may be better.
I think you and I share similar concerns. The focus on academic journal articles to the detriment of books and other publications, to say nothing of digital course materials, etc., skews the rankings. I confess that to placate my dean and colleagues I periodically (no pun intended) put something into an academic journal. The impact of university name on program rankings is very real, and another source of skewing. Reputation surveys canvass opinions, which are worth, well, sometimes a lot and often not much.
Why have rankings? Supposedly to give various groups some sort of baseline against which to make decisions. Prospective applicants need information, hiring partners need information, law schools seeking faculty need information. So perhaps there should be different rankings based on the group seeking information. Not that I'd go so far as to
rate "party law schools" (as is done for undergraduate schools) to assist prospective applicants, but things such as percentage of students receiving scholarship financial aid, work study opportunities, and the like would be factors useless to hiring partners. On the other hand, bar pass rate should be a factor with meaning for most rankings
constituencies, and that's a statistic that is more objective than many of the others.
For me, in measuring faculty (individually or collectively, for different purposes), I want information on what faculty should be doing: teaching effectiveness, publication, and service. The first and third I'll leave for now. When it comes to publication, what is the purpose? To enrich teaching? Then its measure is within the measure of teaching
effectiveness. Is it to get attention for the school? Where? Among whom? Is it to contribute to an academic environment? I think the point of publication is to demonstrate to the various constituencies that a school's faculty can think, express itself, and be persuasive. What's the best test of its effectiveness in doing so? The extent to which
their publications (of any sort) are favorably quoted, reprinted, republished or cited by courts, journals, mainstream media, blogs, etc. and the extent to which their publications are so quoted, etc., as reference sources. In contrast, I'd subtract for cites and quotes that demonstrate serious flaws in a publication, such as a court's dismissal
of an article because it omits consideration of relevant precedent.
Thus, mere cite counts are insufficient, for the same reason SSRN downloads don't tell us why there was a download or what someone's reaction to the downloaded article was. To do what I propose would require resources, of time and/or money, to sift through the cites so that they could be evaluated as positive, neutral, or negative. It's not
a matter of looking for the A publications (the universally accepted treatise, for example) or F publications, for they, like student grades, announce themselves. It's in the middle that it matters, and so the measure of quote/cite/etc. would need to be carefully done.
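To make the idea concrete, here is a minimal sketch of how an "analyzed" cite score might diverge from a raw cite count. Everything in it, including the categories and weights, is my own invented illustration, not an established formula:

```python
# Hypothetical sketch: an "analyzed" cite score versus a raw cite count.
# Each cite is classified (by a human reviewer) as positive, neutral,
# or negative; the weights below are illustrative assumptions only.

WEIGHTS = {"positive": 1.0, "neutral": 0.25, "negative": -1.0}

def raw_cite_count(cites):
    """The naive measure: every cite counts the same."""
    return len(cites)

def analyzed_cite_score(cites):
    """Weight each cite by how it treats the publication."""
    return sum(WEIGHTS[kind] for _, kind in cites)

# A publication quoted favorably twice, mentioned in passing once,
# and dismissed once for omitting relevant precedent:
cites = [
    ("court opinion", "positive"),
    ("treatise", "positive"),
    ("footnote mention", "neutral"),
    ("court dismissal", "negative"),
]

print(raw_cite_count(cites))       # 4
print(analyzed_cite_score(cites))  # 1.25
```

The two publications have the same raw count, but the dismissal drags the analyzed score down, which is exactly the distinction a bare count cannot make.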
As you can tell, I would consider all publications. So there would need to be some interesting research. I've yet to figure out how to identify all the people who have cited/quoted me in the digital world. Every once in a while I come across something very positive that I didn't know was "out there."
One idea that occurs to me, that probably will never fly, reflects the fact that at many schools faculty are subject to a merit compensation system. So perhaps some sort of "citation bank" where faculty provide their publications and discovered cites, perhaps supplemented with cites found by others (student research assistants independent of the school?). Then this pool of information would be available not just to those doing rankings but also to Deans and administrators seeking a better measure of faculty value beyond "I published an article in X" or "I have z downloads" or "it was cited y times." So we could factor out the times I cite myself, or the 20 downloads of someone's article by the entire faculty of his or her institution!
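A citation bank could apply those filters mechanically. A minimal sketch, with made-up record fields and names used purely for illustration, of factoring out self-cites and same-institution downloads:

```python
# Hypothetical sketch of the filtering a "citation bank" might do.
# The record fields (citing_author, school) and the names are invented
# for illustration only.

def filter_self_cites(author, cites):
    """Drop citations the author made to his or her own work."""
    return [c for c in cites if c["citing_author"] != author]

def filter_home_downloads(school, downloads):
    """Drop downloads coming from the author's own institution."""
    return [d for d in downloads if d["school"] != school]

cites = [
    {"citing_author": "A. Author", "source": "own later article"},  # self-cite
    {"citing_author": "B. Other", "source": "court opinion"},
]
downloads = [{"school": "Home School"}] * 20 + [{"school": "Elsewhere"}] * 3

print(len(filter_self_cites("A. Author", cites)))          # 1
print(len(filter_home_downloads("Home School", downloads)))  # 3
```

Twenty of the twenty-three downloads came from the author's own faculty, so the filtered figure of three is a very different signal than the headline count.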
The fight over what to count means something with compensation committees. I've been there and I've skirmished, though because of what I publish it never has been an issue because there's enough traditional stuff to let administrators defer the question of how to value the blog, for example. But for someone doing rankings, there's no fight ... the rankings are created and then people can argue about their value, and those who make good points contribute to refinement of the rankings system (as I think has happened with U.S. News to a small extent).
If the goal of faculty publication is simply to get the school's name "out there," then a variant statistic would be relevant though perhaps of dubious quality. As I've mentioned to my faculty, a portfolio sent to 10,000 subscribers gets something in front of far more people than an article published in a journal with 300 subscribers. Of course, we don't
know if 1 person or 20 people share a portfolio or article, and we don't know if a subscriber to her law school's journal reads a particular article in it. So mere numbers are as unhelpful as mere cite counts or download statistics. At the moment, I remain fond of my "analyzed quote/cite/etc" approach.
I must hasten to add that I appreciate your SSRN download analysis. It is information, and it has its uses. But it also has its limits, and my concern is that unknowing folks (e.g., non-tax faculty) would consider it to be much more than what it is. My Dean wanted to know why and how we were behind Chapman, and I responded that somehow we were ahead of Florida. Tax folks would know that those three are in inverse order, but non-tax people might not.
I addressed several of Mike's points:
The serious flaw in U.S. News or any subjective evaluation-based ranking is that it polls people who are not necessarily aware of what changes have been taking place in legal education. How many of the polled judges and practitioners who graduated years ago will shift their perception from what the relative positions were 20 years ago? In other words, much of the pre-U.S. News lock-in that you describe continues to exist, to a great extent, in the subjective polling. When reputation numbers are inconsistent with other information, which should be considered suspect? Depends on the numbers, I suppose. There are some schools, as you point out, that have changed significantly, but are those changes showing up in US News as they should? Some are. Some aren't.

In turn, Mike agreed that there is a flaw in the reputational aspect of the rankings, but it probably was not too important because in making decisions, law school applicants react to reputation and not to the outcome of objective measurement. That applicants behave in this manner is evident from the extent to which they will pass by lower-tuition high quality state schools for more expensive prestigious schools. Mike also noted that the ignorance of survey responders with respect to many schools is muted if there are sufficient survey responses. I'm not so sure of this. Ten times as many ignorant responses doesn't filter out the nonsense. Mike agrees, though, that there is a bias, as demonstrated by the high ratings that a Princeton Law School gets in some tests, simply because Princeton has a fine reputation. Mike wonders why the academy would do rankings for employers, and I agree.
I'm not proposing that law schools do the rankings; I'm just suggesting that if employers, or someone in the private sector on behalf of employers, did a ranking that reflected the needs of employers, translated, the needs of clients, we might see something very different, and surely not paying much attention to SSRN downloads though perhaps paying a bit of attention to Tax Management portfolios. Mike sees a risk in employer-focused rankings, because law schools might succumb to student pressure to "teach to [those] ratings," but I'm not convinced that's in and of itself a bad thing. Employers have as much incentive to tell law schools, in effect, "this is what we need you to provide to us" as law schools have to tell employers, "this is how we educate lawyers-to-be so figure out how to adapt your practice to their arrival in your offices." Mike closes by noting that "[an] 'objective' rating system that truly measured quality would be a disaster for a large number of schools." Do tell. That's my point. Using the wrong ranking masks problems. As Elliott Manning put it: "In short the ratings are like the drunk looking for the car keys under the street light, instead of the middle of the block where he dropped them--because the light is better there. This [is] also part of the national trend of judging schools by scores on standardized tests, because they are easier to measure--never mind whether they actually teach students to think."
As for rankings audiences, the other group, perhaps, of substantial size with interest would be employers. But perhaps their minds are already made up, and no ranking will convince them otherwise.
Of course, law faculty are interested, not only for issues of lateral movement and even (perhaps) law review submission selection, but for purposes of convincing central administration that their efforts in upping the school's ranking warrant more money staying at the school and not going to main campus. Now to persuade some Deans that this proposition has some merit ....
Finally, Paul Caron weighed in with a defense of rankings based on SSRN downloads. Because the SSRN rankings tend to correlate with "the right top schools," Paul concludes that "most people would find SSRN's tax faculty ranking more persuasive" even though I claim that ranking by Tax Management portfolio author is "no more or less meaningful than any other tax faculty ranking." Paul agrees that SSRN is incomplete, but defends it on the basis that 3,500 law faculty have posted 11,500 papers on SSRN. Paul is correct that the response is "not to go in the other direction and focus on the offerings of a single publisher." I agree. As I noted at the beginning of this post, there was an intended facetiousness to my rankings. Of course ALL publishers should be considered. Think not only Tax Management, but the various editions of Tax Notes, the practitioner journals, and all the other forums in which good, sound legal reasoning is displayed. There may be 2.4 million SSRN downloads, whatever that measures, but how many millions of times has a publication that is not an SSRN article been opened and used by someone trying to solve or prevent a tax or legal problem?
I think Paul agrees, because he quotes his previous proposals to use "all faculty publications" and to weigh them in some manner that reflects utility and value. He suggests that an extensively quoted publication should carry more weight than a mere cite in a footnote. Of course. Here's the challenge. Even if the resources are acquired to do such a ranking, it will be resisted, chiefly by those who don't do as well under it, on the ground that "we've been using SSRN-download rankings and why change something that isn't broken?" That's why I prefer to abandon SSRN-based rankings so that the advantage it obtains by showing up early in the game doesn't overshadow the fact that it shows up early because it is quick and easy. The defense of using SSRN-download rankings because anything is better than nothing ought not apply because there are all sorts of anythings and somethings that are not preferable to nothing.
I'll close with my two responses to Paul's commentary. The first deals with a systemic SSRN flaw:
SSRN is biased in favor of recent articles, having almost nothing, as best I could tell, that was published before the mid to late 90s. Some of the most influential pieces of tax publishing, whether in article form or otherwise, were generated long before the mid 90s.

The second summed up the point I tried to make on Friday:
Don't take my facetious jab at SSRN rankings too seriously. The point of my post wasn't to advocate using one publisher, but to
(a) count everything, not just the self-glorified world of articles, beyond which the world has moved,
and
(b) count something that has meaning in terms of influence, such as citations by courts and other authors, filtered for approval and disapproval, rather than download numbers that can easily be bloated, don't tell us if the downloaded item was read, and, most importantly, don't tell us anything about the quality of the downloaded item.
In a world where "money talks," perhaps a fun way to figure this out would be a web site that offered free subscriptions to lawyers, gave them $1,000,000 in non-redeemable credit, provided a list of all law faculty publications, and asked the lawyers to spend the $1,000,000 as though they were purchasing what they needed for whatever it was they were doing as lawyers. Assuming the Justice Department didn't brand such a technique some sort of on-line gambling national security threat, it might trigger a high response rate by combining the concepts of money, lawyers, and games. So what would they buy? Yes, I know that some would argue that some truly valuable legal scholarship would fare badly in such an experiment, and my response would be, "Why?"
The point of my post was an exaggerated display of the "easy to count, therefore gets attention" approach that is going on in the rankings game, whether by US News, Leiter, or any of the others, except for a few attempts here and there to introduce something of greater value.