Wednesday, October 25, 2017

The Deceptive Linearity Of Law Review Rankings

We're at that time of year - the fall law review placement season, the FRC and hiring, tenure applications and review - when the question of the value of placing an article in a particular journal rears its questionably meaningful head.

The point I am raising here is related to, but somewhat different from, the one Paul made in his 2006 essay, The Long Tail of Legal Scholarship. That essay was about the top-end loading of citations. Along the lines of the 20-80 principle, not only do 20% of the articles account (in concept) for 80% of the citations, but the tail of seldom or never cited work stretches out a long, long way. I add to it something I observed about US News "peer assessment" rankings around the same time: the rankings masked an underlying distribution curve under which the ordinal rankings were significantly less meaningful once you got beyond the very top-ranked schools (and that is apart from all the other issues with the meaningfulness of those rankings).

Somebody in my hearing raised the question of the relative value of a law review article placement based on the Washington & Lee law school library's ranking system.  So I took a closer look at it.

I would suggest that, before getting all hot and bothered by the difference between, say, the 60th and the 95th ranked journal, or between the 100th and the 220th, one ought to look at how the data appear if you put aside the ordinal ranking and group the reviews in bands. I arbitrarily took bands of 10 points. For the top fifty-two, where a review ranks ordinally makes a hell of a lot more difference than it does for reviews 52 through 98, and once you get below 98, Paul's "long tail" thesis kicks in. Is there really anything meaningful about the ordinal ranking at all?
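For anyone who wants to see the shape of the exercise, here is a minimal sketch in Python of grouping journals into 10-point bands of combined score. The scores below are invented placeholders following a generic decaying curve, not the actual W&L combined scores, so only the pattern is meaningful: a handful of journals per band at the top, then enormous bands out in the tail.

```python
# A rough sketch of the banding exercise, NOT the actual W&L data.
# The combined scores follow a made-up decaying curve purely to show
# the shape: a few journals per band at the top, huge bands in the tail.

from itertools import groupby

BAND_WIDTH = 10  # group journals into 10-point bands of combined score

# (ordinal rank, combined score) -- hypothetical scores for illustration
journals = [(rank, 100 * rank ** -0.4) for rank in range(1, 221)]

def band(score: float) -> int:
    """Map a combined score to the bottom edge of its 10-point band."""
    return int(score // BAND_WIDTH) * BAND_WIDTH

# Scores decline monotonically with rank, so groupby sees each band once.
for band_floor, group in groupby(journals, key=lambda j: band(j[1])):
    members = list(group)
    first_rank, last_rank = members[0][0], members[-1][0]
    print(f"{band_floor:>3}-{band_floor + BAND_WIDTH:<3}: "
          f"ranks {first_rank}-{last_rank} ({len(members)} journals)")
```

With the real combined scores substituted in, the size of each band tells you how much (or how little) an ordinal jump of ten or thirty places actually reflects.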

W&L

https://taxprof.typepad.com/taxprof_blog/2017/10/the-deceptive-linearity-of-law-review-rankings.html

Legal Education, Scholarship | Permalink

Comments

Another question ought to affect how we interpret this data: Which direction (if any) does causality run? That is, are articles cited because they are in Stanford? Or are they in Stanford because they are likely to be cited? The latter hypothesis would suggest that the editors are doing something valid. The former is clearly true to a degree. The true answer is probably somewhere in the middle, but which effect dominates remains an open question. All of this is beside the point, of course, if the author's primary objective is to pad his or her resume as much as possible.

Posted by: Theodore Seto | Oct 25, 2017 2:24:23 PM