TaxProf Blog

Editor: Paul L. Caron
Pepperdine University School of Law

A Member of the Law Professor Blogs Network

Wednesday, March 27, 2013

ATL Survey: A Better Law School Rankings System

Above the Law: What Would a More Relevant Law School Ranking Look Like? You Told Us:

Last week, we asked for your thoughts on what an improved, more relevant approach to law school rankings would look like. This request was of course prompted by U.S. News’s revisions to its rankings methodology, which now applies different weights to different employment outcomes, giving full credit only to full-time jobs where “bar passage is required or a J.D. gives them an advantage.” ... [A]bout 500 of you weighed in with your opinions on which criteria should matter and which should not when it comes to ranking law schools:
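
To make the weighting idea concrete, here is a minimal sketch of how a tiered employment score might be computed under the revised approach. Only the full-credit treatment of jobs where "bar passage is required or a J.D. gives them an advantage" comes from the methodology described above; the remaining category names, partial-credit weights, and graduate counts are hypothetical illustrations, not the published U.S. News formula:

```python
# Hypothetical sketch of a tiered employment score. The partial-credit
# weights below are illustrative assumptions, not published values.
EMPLOYMENT_WEIGHTS = {
    "ft_bar_passage_required": 1.00,  # full credit per the revised methodology
    "ft_jd_advantage": 1.00,          # full credit per the revised methodology
    "ft_other_professional": 0.50,    # assumed partial credit
    "pt_or_short_term": 0.25,         # assumed partial credit
    "unemployed": 0.00,
}

def employment_score(graduate_counts: dict) -> float:
    """Weighted share of a graduating class across outcome categories."""
    total = sum(graduate_counts.values())
    if total == 0:
        return 0.0
    weighted = sum(
        EMPLOYMENT_WEIGHTS[outcome] * count
        for outcome, count in graduate_counts.items()
    )
    return weighted / total

# Example: a hypothetical class of 200 graduates.
print(employment_score({
    "ft_bar_passage_required": 120,
    "ft_jd_advantage": 30,
    "ft_other_professional": 20,
    "pt_or_short_term": 20,
    "unemployed": 10,
}))  # -> 0.825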

[Chart: ATL survey results ranking the relevance of law school ranking criteria]


Law School Rankings, Legal Education | Permalink


Comments

Biases of Law School Rankings

The Above the Law (ATL) survey concerning the perceived quality of law schools is in many ways proof of the anti-intellectual and anti-qualitative nature of the legal academy and of the various systems by which law schools are ranked. I don't even want to go into the idiocy of the US News rankings and the serious harm they have done to US law schools. It is enough to examine the responses to the ATL survey to understand just how confused, incoherent, and superficial the perspectives of many legal educators are. In negotiation strategy it is always dangerous to allow others to establish the agenda or define the categories concerning what is "on the table." Yet law faculty members have allowed US News to do just that. One result is that it is almost impossible for them to escape the "dead hand" of the US News categories in determining whether their law schools are worth anything as a matter of real and perceived quality.

Let me offer a sense of what I am saying based on the ATL responses.

Amazingly, library resources come in last place among the relevant characteristics. I say "amazing" because one might expect the scope and depth of an institution's library resources for faculty and student research to be vital elements of that institution's "actual" quality from an intellectual, research-driven perspective.

Student survey results are intriguing in the sense that they appear at all. It is also a broad category encompassing numerous surveys (happiness, satisfaction, "nice" faculty, on-site food service, best coffee, and so forth) that offer profound insights into matters obviously at the heart of a law school's quality. Of course, I question even the purpose of anonymous student surveys of a teacher's quality and effectiveness, at least on the criteria of relevance and potential utility, because students lack the experience requisite to understanding what is relevant to being a lawyer; there is legitimate reason to think that in many instances law faculty submit themselves to this system only because they must. I have, however, long found older students and evening students with life and job experience to have a great deal to offer, and have discovered that honest discussions can be had with such students that benefit teacher and student alike.

On balance, however, my core questions relate to what seem to me the most critical issues in gauging an institution's quality, along with how that quality is measured. If we refer to the ATL list above, we should apply the "rigorous analytical" powers possessed by American law professors and actually go behind the categories: ask what they have to do with measuring actual quality; ask to what extent lumping these generally unrelated categories together into a purported "method" for qualitative evaluation has any rational legitimacy (it does not); and, if it does not and we still want to invent measures for qualitative rankings, ask what honest categories would be and how we would identify, define, collect, and interpret the data to produce real rankings.

Let me put a few thoughts out for consideration. Seven of the final eight categories ranked as having the greatest relevance (reputation, selectivity, GPA, LSAT score, federal clerkships, large law firm placements, and employment data) are combinations of bias, historical reputation, and the effects of market factors relating to the particular contexts within which specific law schools operate. A "national" law school will therefore automatically do better than a "local" law school because its graduates are spread across a wider set of employment markets in the US. When judges submit reputational assessments they will (as we all do) go with "what they know," or think they know, rather than having any sense of the qualitative reality of what occurs at a specific law school that is not "on their radar." The same goes for law professors' assessments, in part because they have a motivation to rank the law schools they attended more highly, and they typically attended a national law school. The same phenomenon applies to partners in large law firms and to the bias toward federal clerkships as a qualitative indicator.

It is interesting that some categories offered as separate indicators are in fact intimately connected. I would include selectivity, GPA, and LSAT scores in that group, because they are parts of a whole in which selectivity is largely defined by GPA and LSAT. Similarly, there are real market differences between national, regional, and local law schools that distort fair assessment of actual quality: local and regional law schools are forced to compete in a context in which they do not generally operate and, as a result, are automatically downgraded in relative terms in ways that have little or nothing to do with the quality of the school within the specific area in which it competes.
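
The point about selectivity, GPA, and LSAT being parts of a single whole can be illustrated with a small sketch. The weights and standardized scores below are invented for illustration; the idea is simply that three nominally separate inputs that move together act as one underlying factor carrying their combined weight:

```python
# Illustrative only: invented weights and invented standardized scores,
# showing how correlated inputs collapse into one underlying factor.
weights = {"selectivity": 0.10, "median_gpa": 0.10, "median_lsat": 0.125}

# Because selectivity is largely driven by GPA and LSAT, a school's
# standardized scores on the three inputs tend to be nearly identical.
school = {"selectivity": 1.2, "median_gpa": 1.1, "median_lsat": 1.2}

composite = sum(weights[k] * school[k] for k in weights)

# Nominally three criteria; effectively one factor carrying their
# combined weight (0.325 here) in the overall ranking.
effective_weight = sum(weights.values())
print(round(composite, 3), effective_weight)  # -> 0.38 0.325
```

A composite built this way roughly triples the influence of admissions credentials relative to what any one of the three weights suggests on its own.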

The same can be said of the large law firm and federal clerkship categories, and even of an important part of the overall employment category. This applies in several ways. One is that it shows a clear bias among law professors toward federal clerkships and large firms as indicators of quality in law practice. This can be read as deference to federal judges and to the hiring practices of large law firms as indicia of quality (of some sort), as a statement of "what matters" in a legal career, and especially as a "reaffirmation of self" by law professors who continue to live out Duncan Kennedy's description of the world of law schools as the "reproduction of hierarchy." My point is that while it may represent a type of quality, it is only a small part of what lawyers do, an incredibly small aspect of law practice, and a world far outside the one in which most people come into contact with law and legal institutions, or in which the vast proportion of lawyers and legally trained law graduates function. Basing "quality" on such a narrow segment of law practice, employment, and background inevitably distorts assessments.

It is quite intriguing that faculty scholarly productivity is ranked relatively low in relevance. In fact, this may be one of the most honest of the responses, in the sense that various analyses have called the value of that productivity into question. The reality seems to be that very few law review articles are actually read. To the extent that some are, the audience is often like-minded "scholars" who are pleased to be cited and to discover that others think as they do. None of this suggests that "productivity" has much of anything to do with substance or impact.

If, in fact, the critiques of legal scholarship as technical reportage, or as endless "regurgitation" of the same material without adding much of anything new to our store of knowledge, are often fair assessments (including the oft-repeated claim that the average article is read by three people), then it is fair to list scholarly productivity low on the relevance scale. I would prefer that the evaluation systems begin to develop something like a "Faculty Impact Assessment," in which law faculty are evaluated on the totality of their productivity and outreach activities as those activities contribute to the intellectual, legal, and societal dimensions of the profession. Many in law schools create and participate in programs and projects, work in institutions aimed at reform and justice, and offer substantive presentations and the like that never make it into a law review but in fact contribute more, and reach more people with greater impact, than an article read by few and understood by none.

In the same vein, how should criteria be developed that measure the educational quality of an institution? One might think that, for students, true educational superiority would be an important criterion. But neither the US News nor the ATL "method" contains any category that attempts to measure educational quality. Mostly, all the so-called methods do is regurgitate pre-existing criteria that are of questionable merit and utility in relation to real educational and institutional quality. It would be nice if law faculty had the intellectual interest, political will, and commitment needed to design a fair and useful system of qualitative evaluation.


Posted by: David Barnhizer | Mar 28, 2013 9:28:02 AM