Paul L. Caron

Thursday, March 15, 2012

More on the 2013 U.S. News Peer Reputation and Overall Rankings

Eric Talley (UC-Berkeley) took the data in my post on the new 2013 U.S. News Law School Rankings and generated the two graphs below, which he agreed to share on TaxProf Blog:

The first graph plots academic peer rank against overall rank and estimates a linear regression. The second uses peer score on the horizontal axis and estimates a non-linear exponential fit. In neither case did I try to control for special types of error structures (and there's clearly some heteroskedastic "fanning" of the errors), nor did I account for the integer nature of rankings. But neither of those corrections would change much. Based on this quick pass, knowing academic peer rank / score appears to explain around 90% of the variation in overall USNWR score. (The scatterplots omit all unranked schools, since they do not lend themselves to this sort of analysis.)
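For readers who want to see what this kind of quick pass looks like in practice, here is a minimal sketch: an ordinary least squares fit of overall rank on peer rank, with R-squared computed as the share of explained variance. The data below are fabricated purely for illustration; they are not the actual USNWR figures.

```python
# Sketch of a simple linear fit and R^2 calculation, of the kind
# described above. Data are simulated, not real USNWR rankings.
import numpy as np

rng = np.random.default_rng(0)
peer_rank = np.arange(1.0, 101.0)                    # hypothetical peer ranks 1..100
overall_rank = peer_rank + rng.normal(0, 5, 100)     # noisy hypothetical overall ranks

# Ordinary least squares: overall_rank = a + b * peer_rank
b, a = np.polyfit(peer_rank, overall_rank, 1)
fitted = a + b * peer_rank

# R^2 = 1 - (residual sum of squares / total sum of squares),
# i.e., the fraction of variance in overall rank "explained" by peer rank
ss_res = np.sum((overall_rank - fitted) ** 2)
ss_tot = np.sum((overall_rank - overall_rank.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 3))
```

As the comments below emphasize, a high R-squared in a cross-sectional fit like this says nothing about which variable is driving the other.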

Both Ted and BDG are spot on, and are correct to point out how social-science speak may not mean the same thing to lay audiences. More specifically, if you think of the rankings as a dynamic process, there is a strong reason to believe that reputation ranking at time "t" is in part a function of various lagged USNews outcomes (e.g., rankings or even reputation scores at t-1, t-2, t-3, ...). Thus, to do this right you really want to look at a cross section over time -- which is what a panel data set is.

A few years ago, I tried to engage in something along these lines, specifically to determine whether certain "engineerable" metrics, such as LSAT, LGPA, and bar passage rate (yes, that is also somewhat engineerable in a more macabre fashion, through academic disqualification policies) had a durable effect on reputational capital. My recollection from that informal analysis was that while it seemed plausible that one could engineer some very small changes in judges'/lawyers' assessments, it was very hard to do much to the academic rating, which was quite durable over time.

Go Princeton Law!

Posted by: Eric T | Mar 15, 2012 1:30:23 PM

An alternative interpretation of the data is that USNWR rank explains 90% of academic peer scores.

Posted by: Lea | Mar 15, 2012 10:45:09 AM

It's worth emphasizing, as Eric surely would given the space to do so, that this is a cross-sectional analysis, which largely prevents us from identifying the direction of causation. To see whether in fact USNWR rankings influence subsequent years' peer scores, as most people assume they do, we'd need panel data.

Posted by: BDG | Mar 15, 2012 9:10:41 AM

In reading Eric's graphs, keep in mind that correlation does not establish causation. In particular, be careful of the sentence "knowing academic peer rank / score appears to explain around 90% of the variation in overall USNWR score." As a matter of statistics-speak, the sentence correctly uses the term "explain." Lay readers, however, commonly interpret "explain" to have its more common English-language meaning -- which, in the sentence above, it does not. Correlation does not actually "explain" anything. The fact that wearing tattoos is strongly correlated with youth does not mean that doing so causes one to be young, even if statisticians might correctly state that it "explains" a large percentage of observed variations in age at a high level of statistical significance.

Years ago, the ABA published a study concluding that LSATs "explained" almost all of the variation in US News' "overall scores." Many law schools incorrectly read this to mean that the only way to increase their US News ranking was to increase their incoming LSATs.

Posted by: Theodore Seto | Mar 15, 2012 7:26:23 AM