May 29, 2012
Steinbuch: A New Method of Ranking Law Faculty and Law Schools
Robert Steinbuch (Arkansas-Little Rock), On the Leiter Side: Developing a Universal Assessment Tool for Measuring Scholarly Output by Law Professors and Ranking Law Schools, 45 Loy. L.A. L. Rev. 87 (2011):
With varying results, many scholars and commentators have focused their attention on judging the quality of law professors, as measured by their scholarly output. First, this Article explains the methods respectively developed by Brian Leiter and Roger Williams University School of Law for top-tier and second-tier law schools, and it considers other works of scholarship that measure academic publication. Then, this Article explicates a protocol (the “Protocol”) for measuring all of the scholarly output of any law school faculty member. Building on the Leiter and Roger Williams methods, the expanded Protocol accounts for a wider breadth of faculty publications and includes weighting factors based on law-journal rankings. Finally, this Article concludes by applying the Protocol to its Author and his colleagues. In sum, the Protocol that this Article develops and applies will provide a significantly more objective set of data with which to evaluate the scholarly performance of legal academics.
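The abstract's core idea — credit every publication, weighted by the rank of the journal it appeared in, rather than counting only top-tier placements — can be sketched in a few lines. The tiers and weights below are invented for illustration; the Protocol's actual weighting factors are set out in the article itself.

```python
# Hypothetical sketch of a weighted scholarly-output score in the spirit of
# Steinbuch's Protocol. The tier names and weights are placeholders, not the
# article's actual factors.

TIER_WEIGHTS = {"top": 3.0, "mid": 2.0, "other": 1.0}

def protocol_score(publications):
    """Sum weighted credit for one faculty member's publications.

    `publications` is a list of (journal_tier, count) pairs.
    """
    return sum(TIER_WEIGHTS[tier] * count for tier, count in publications)

# Example: 2 top-tier articles plus 3 in other journals
score = protocol_score([("top", 2), ("other", 3)])
print(score)  # 9.0
```

A binary top-tier-only metric would give this hypothetical faculty member credit for 2 articles; the weighted version also counts the other 3, at a discount — which is the inclusiveness the article claims for the Protocol.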
It must be a bit awkward in the Arkansas faculty lounge this summer:
I hope Mr. Steinbuch is not too far under water on his house. One way or another, he's going to be putting it on the market soon.
Posted by: jmike | May 29, 2012 2:19:29 PM
Considering the amount of unadulterated B$ in the average law review article these days, I am not sure that the less productive professors are not the greater benefactors of society.
Posted by: Walter Sobchak | May 29, 2012 3:45:19 PM
(insert obligatory rant against the sheer idiocy of "publish or perish" here)
Posted by: Vader | May 29, 2012 6:25:10 PM
Perhaps Prof. Steinbuch's methodology is fine, but (as was the case with Leiter's and Cooley's rankings) it is disconcerting when the author ranks higher under his or her own system than he or she would under better-known measures.
Posted by: Anon | May 29, 2012 6:36:30 PM
Why don't we rank law professors on, say, their ability to teach law? Does anyone care to posit any kind of correlation between productivity/citations and performance in front of paying students?
Posted by: Francis B. | May 29, 2012 7:30:44 PM
I think Paul Caron is the best because he has this great blog.
The study is flawed: a successful teacher is measured by the success of his or her students. Success is a subjective factor that could be quantified by a survey of students who have taken the course, but I do not see this in the study. The student is the consumer, or client, and the ultimate judge of the ranking.
Posted by: Nick Paleveda MBA J.D LL.M | May 30, 2012 10:19:27 AM
No doubt the default title of the pdf is solely an artifact of truncation based on character limits:
"On the Leiter Side_ Developing a Universal Assessment Tool for Me"
Still, one cannot help but notice that his metric -- er, Protocol -- reveals him to be not only likely more highly ranked than he would be under traditional metrics (as noted by Anon above), but in fact his law faculty's #1 bargain.
(In terms of scholarship, at least. Re: Francis B.'s comment, he notes at the beginning of the article: "I leave for another day, and perhaps another person, the task of developing the measurement tools to evaluate teaching and service. With that said, however, I note at the outset that I reject the claim that scholarship, teaching, and service are mutually exclusive categories.")
Posted by: heh | May 30, 2012 4:22:51 PM
As someone on Steinbuch's list (and hoping that my favorable comment won't be taken as a function of seeming under-compensation at my former school), I can only say that I appreciate the effort to quantify scholarly productivity. I found (as Nick Paleveda's post indicates) the article's claims to be quite modest: Steinbuch states that he is describing only a slice of the assessment pie, and only proposing a system for discussion at that, for the greater purpose of showing that it can be done. I like some of his metrics (the ones that make me look good) and dislike others (the ones that don't), but overall I laud any effort to assess faculty performance by objective and consistent standards, rather than by who happens to be a favorite child of academic politics for one reason or another.
Posted by: Richard J Peltz-Steele | May 31, 2012 1:09:01 PM
Sorry, my parenthetical mention of Nick P.'s post was supposed to be of heh's.
Posted by: Richard J Peltz-Steele | May 31, 2012 1:42:07 PM
I too am on Prof. Steinbuch's list. A couple of thoughts. First, in response to Anon, Steinbuch is primarily relying upon a methodology developed by others. Thus, I don't think the fact that Steinbuch performed well undercuts the credibility of his conclusions. In addition, the underlying substantive reason Steinbuch did well is that he is one of the most productive scholars on our faculty. If you have alternative metrics you'd like to propose, I would be interested in seeing them. Second, in response to Nick Paleveda, the most important measure is column 1. Steinbuch finished second to our colleague Michael Flannery. Third, more generally, I did not perform well under Steinbuch's measure, but I'm fine with the system he developed. Based on my review of the article and discussions with the author, I think it is an excellent attempt at quantifying scholarly output. Fourth, and last, I want to second Rick Peltz-Steele's comments.
Posted by: Josh Silverstein | May 31, 2012 2:40:02 PM
I need to make a correction too -- my response to "Nick" was really a response to "Heh." Sorry.
Posted by: Josh Silverstein | Jun 1, 2012 12:29:59 PM
To Vader's comment above: the author explicitly states that his "expanded Protocol" "accounts for a wider breadth of faculty publications and includes weighting factors based on law-journal rankings." This benefits the entire faculty, who would not have been given credit under better-known binary metrics that count only publication in top-tier journals. It should not be disconcerting, unless one believes publication should count only in top-tier journals.
Posted by: researcher | Jun 5, 2012 6:18:11 PM
Correction to my post: not re: Vader but re: Anon.
Posted by: researcher | Jun 5, 2012 6:53:01 PM