TaxProf Blog

Editor: Paul L. Caron
Pepperdine University School of Law

A Member of the Law Professor Blogs Network

Saturday, January 18, 2014

Muller: Ranking the Law School Rankings, 2014

Derek Muller (Pepperdine), Ranking the Law School Rankings, 2014:

Last year, I introduced the first-ever ranking of law school rankings at PrawfsBlawg. I thought I would reprise the task this year.

As Elie Mystal at Above the Law noted at a recent conference, law school rankings tend to encourage more law school rankings. So it may be useful to put them in a single place and analyze them.

The rankings tend to measure one of, or some combination of, three things: law school inputs (e.g., applicant quality, LSAT scores); law school outputs (e.g., employment outcomes, bar passage rates); and law school quality (e.g., faculty scholarly impact, teaching quality). Some rankings prefer short-term measures; others prefer long-term measures.

Last year, I ranked 15 rankings. I'm adding four more: Enduring Hierarchies, Witnesseth Boardroom Rankings, Above the Law Rankings, and Tipping the Scales Rankings. ...

1. Sisk-Leiter Scholarly Impact Study (2012): Drawing upon the methodology from Professor Brian Leiter, it evaluates the scholarly impact of tenured faculty in the last five years. It's a measure of the law school's inherent quality based on faculty output. In part because peer assessment is one of the most significant categories for the U.S. News & World Report rankings, it provides an objective quantification of academic quality. Admittedly, it is not perfect, particularly as it is not related to law student outcomes (of high importance to prospective law students), but, nevertheless, I think it's the best ranking we have. ...

12. U.S. News & World Report (2013): It isn't that this ranking is so bad that it deserves 12th place on my list. It's not ideal. It has its problems. I've noted that it distorts student quality. But, mostly, the point is that there are quite a few rankings that, I think, are much better.



Prof. Muller is forthright about how he ranks rankings: "The methodology is simple: it’s wholly idiosyncratic based upon what I value, which is, of course, what I expect others to value." (2013)

My study, Where Do Partners Come From?, 62 J. LEGAL EDUC. 242 (2012), which he characterizes as "thoroughly debunked," was intended primarily to help employers decide how to deploy scarce recruiting resources cost-effectively. The study was summarized in two articles in Bloomberg Law Reports, Cost-Effective Recruiting: New Data, 2 BLOOMBERG LAW REPORTS – LAW FIRM MANAGEMENT 9 (October 17, 2011); Becoming a National Law Firm Partner: New Data, 2 BLOOMBERG LAW REPORTS – STUDENT EDITION 3 (September 6, 2011).

The latter made clear how I believed my study to be relevant to prospective students: "In sum, the study strongly suggests that, in general, students should attend law school where they ultimately hope to practice."

I listed what I characterized as the top NLJ 100 feeder schools. I then added the following strong caveats: "These are aggregate numbers, not adjusted for class size. They measure the extent to which schools have established feeder relationships with the NLJ 100. That’s all.

"They do not tell us whether a student is more likely to become an NLJ 100 partner if she attends one school rather than another. They do not imply, for example, that she should attend BU rather than Stanford. They do tell us, however, that BU does appear to have more established feeder relationships with the NLJ 100 than Stanford does.

"All else being equal, feeder school status may be relevant – even if only as one factor among many."

The "debunking" to which Prof. Muller refers consists of the argument that my numbers should have been adjusted for size to produce per-capita results instead. Doing so would have made the numbers useless for their primary intended audience: employers.

I addressed the relevance of per capita measures to student decision-making in the following terms:

"Whether and when per capita data should be relevant to applicants is a more complex question. The single most important determinant of how schools perform on most outcome measures (bar passage, hiring, big-firm partnership, etc.) is the quality of the students they attract. In significant part, therefore, per capita outcome measures are merely proxies for student quality. In other words, they just track other measures of student quality, like median LSATs, median UGPAs, or rejection rates. A good per capita bar passage rate (or placement rate or big-firm partnership rate) almost always means that a school has highly credentialed students; it does not necessarily mean that the school actually does anything for those students.

"Unfortunately, applicants commonly misread such measures as reflecting value added. (“I am more likely to pass the bar if I go here rather than there, because the bar passage rate here is higher.”) Unless a measure controls for student quality, however, it says nothing about the benefits of attending a particular school. The fact that students at highly ranked schools almost always pass the bar is largely a function of the academic ability of the students themselves; it does not necessarily mean that such schools do anything to prepare students for the bar – indeed, the fact that students at more selective schools are likely to pass the bar in any event may even reduce pressure on such schools to pay attention to bar preparation.

"Analysis of the value added by particular schools with respect to particular output measures is an extraordinarily difficult task, worthwhile but well beyond the scope of this article. I have not attempted any such analysis here. What I do offer are the raw numbers – which I believe are less likely to mislead."

I stand by my work and my discussion of the utility of per capita measures. In the future, I would ask that Prof. Muller bring the same kind of care and thoughtfulness to his discussion of rankings that I'm sure he does to his other scholarly work.

Posted by: Theodore Seto | Jan 18, 2014 9:11:08 PM

Next will be the rankings of rankings of rankings.

Posted by: michael livingston | Jan 19, 2014 6:45:30 AM