TaxProf Blog op-ed: A Defense and Explanation of the U.S. News “Citation” Ranking, by Ted Sichelman (San Diego):
Since U.S. News & World Report released its plans to rank law schools on the basis of citation counts, the blogosphere has been abuzz with criticism of the proposed ranking (e.g., here, here, here, and here). Unfortunately, much of the consternation is based on pure speculation as to how the ranking will be constructed, resulting in an echo chamber of misinformation that has now led some law school deans to consider a “boycott” of the rankings. At the same time, other critics bemoan yet another quantitative metric to “rank” law schools, buttressed by concerns that a ranking based on faculty citations will do little to aid would-be law students focused on teaching quality and jobs.
Here, I attempt to clear the air by dispelling this misinformation and by offering a brief defense of the proposed ranking. As background, I have been constructing a similar ranking with Paul Heald (at Illinois), using in part much of the same HeinOnline data that will be used for the U.S. News ranking. Additionally, I have been providing substantial input to Hein on its citation metrics. As such, I am intimately familiar not only with the limitations (and substantial benefits) of the HeinOnline database, but also with the challenges of constructing such a ranking more generally. With that background, I address the major arguments lodged against U.S. News’s proposal in turn.
1. Quantitative Rankings in General Fail to Capture the Qualitative Nuances of Scholarly Work
Some have quipped that many strong pieces of scholarship are infrequently cited and many weak pieces are frequently cited. As such, on this view, citation counts are not a strong proxy for faculty reputation.
Although I agree with the first premise (that many strong pieces have low citation counts and many weak pieces have high counts), on average, based on my prior work, the most-cited pieces are of much higher quality than the least-cited pieces. For instance, according to HeinOnline, the five most-cited law review articles published since 2000 are Elena Kagan, Presidential Administration, Harvard Law Review (2001); William J. Stuntz, The Pathological Politics of Criminal Law, Michigan Law Review (2001); Russell B. Korobkin & Thomas S. Ulen, Law and Behavioral Science: Removing the Rationality Assumption from Law and Economics, California Law Review (2000); Henry Hansmann & Reinier Kraakman, The End of History for Corporate Law, Georgetown Law Journal (2001); and Dan L. Burk & Mark A. Lemley, Policy Levers in Patent Law, Virginia Law Review (2003).
Anyone familiar with the legal literature reading through the top few hundred most-cited pieces published since 2000 on Hein would immediately see that the vast majority are very high quality works. In contrast, anyone reading through the least cited works would generally see the opposite. Certainly, there are many exceptions to the rule, but a strong metric need not be a perfect proxy.
Indeed, nearly all law professors assign grades to their students on the basis of a single final exam or paper, perhaps with a midterm thrown in for good measure. These grades are quite imperfect measures of a student’s quality as a prospective lawyer, but we use them nonetheless. The same goes for LSAT scores, bar exam scores, and all sorts of other metrics that law school deans, professors, admissions committees, employers, and others rely on every day to make critical choices that deeply affect the lives of those being measured. Why? Because undertaking a deep qualitative review of every single student so as to be more accurate than these imperfect quantitative measures would be too time-consuming and costly. Rather than have no measure, we adopt rough proxies, which serve a very valuable purpose. As the hackneyed law school saying goes, “Do not let the perfect be the enemy of the good.”
2. A Ranking of Law School Faculty Reputation Is Unnecessary, Indeed Pernicious.
Besides employment and bar passage rates, tuition, faculty-student ratio, and average LSAT scores and undergraduate GPAs, many prospective students want to know the general quality and reputation of their professors within and outside the legal academy. In other words, “Will my constitutional law professor be more like Laurence Tribe or a part-time adjunct nobody has ever heard of?”
Indeed, the largest component of the current U.S. News ranking is “Peer Assessment,” which asks law school deans and other faculty members to rate a law school overall. Although more goes into this measure than faculty reputation, from my own experience taking the U.S. News survey and from conversations with others, I know that many largely base their scores on their views of the scholarly quality of faculty at other schools.
Unfortunately, the “Peer Assessment” score in U.S. News largely tracks the overall U.S. News ranking, and thus for most schools, provides no useful indicator separate from the other metrics used by U.S. News (the same mostly holds true for the related “Bench & Bar” assessment score). Indeed, in my work, I measured the correlation between the 2016 Peer Assessment and Overall Ranking to be 0.96, meaning that the differences between these measures are essentially negligible.
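For readers who want to see how a figure like this is computed, here is a minimal sketch in Python. The scores and ranks below are made-up placeholders, not the actual U.S. News data; the computation is simply the standard Pearson correlation.

```python
# Minimal sketch of correlating two school-level metrics.
# The numbers below are illustrative placeholders, not actual U.S. News data.
from statistics import correlation  # Python 3.10+

peer_assessment = [4.8, 4.4, 3.9, 3.1, 2.6, 2.2]  # hypothetical 1-5 scores
overall_rank    = [1,   5,   20,  45,  80,  120]  # hypothetical ranks

# A rank is "better" when lower, so negate it before correlating.
r = correlation(peer_assessment, [-x for x in overall_rank])
print(f"Pearson r = {r:.2f}")  # a value near 1.0 means the metrics are nearly redundant
```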
The reason for this high correlation is probably that deans (who account for 50% of the peer assessment input) are very unlikely to have the time or interest to keep abreast of academic developments at other law schools. Thus, their ratings very likely track the overall ratings. Indeed, the same problem likely afflicts ordinary faculty members.
Thus, to the extent one thinks some faculty reputation metric is useful in ranking law schools, a better measure is needed. Although citation counts are not perfect, they are quite good at measuring faculty reputation, particularly at a school-wide level, and are widely used for that purpose in other disciplines.
I discuss why in more detail below. But before doing so, I address a criticism that ranking schools based on faculty reputation, by citations or otherwise, is in fact pernicious and does not help students. The argument is generally two-fold. First, well-known and highly cited professors are typically immersed in their research and are generally poor teachers. Thus, providing any incentive to law schools to preference research will diminish the quality of teaching and harm students. Second, the type of research that law professors conduct has little relevance to the real world, much less to law students, who are mainly concerned about getting high-quality jobs. Rankings that incorporate faculty reputation therefore distort or mask the types of information students should care about most.
As to the first argument, the empirical evidence points in the opposite direction, showing that highly cited professors are at least average, and often better than average, at teaching (see here and here). This is sensible, because professors who are highly cited and well-known are usually strong speakers, keep up on their subjects, and think creatively about the important issues they teach.
As to the second argument, again, the best evidence is largely to the contrary, showing that legal scholarship is highly relevant (see here and here). In addition, in my experience, those scholars who are well-known and highly cited are often well-connected to the judiciary, law firms, governments, non-profits and the like, and can even play a substantial role in helping their students land high-quality jobs. Certainly, there are counterexamples, but again, it’s important to focus on trends, not singular examples.
Moreover, legal scholarship has important value to the academic and legal community beyond the interests of prospective students. Ranking schools on this basis helps the academic and legal community better understand these contributions. This is especially so at top research institutions, where schools are vigorously competing for funding, professors, and other important inputs to a school’s overall faculty reputation. In other words, Harvard is not simply “Harvard” because of its students and their immediate interests.
In sum, faculty reputation is important to students and to the academic and legal community at large. Again, while it would be “perfect” to have exhaustive qualitative evaluations of each and every law school professor, we must strive for the “good,” and in my view, faculty reputation rankings help contribute to the good.
With that said, based on my understanding, the citation ranking—to the extent it is even incorporated into the overall ranking—will very likely count for a small percentage of the overall U.S. News ranking. Thus, it will remain more of a separate ranking for those prospective students who find it of value. In my view, it seems hard to argue with a quantitative metric that will mostly cater to those who choose to use it.
3. The HeinOnline Database Cannot Be Used to Generate Reliable Rankings.
Currently, HeinOnline generates its own citation counts using citation formats (e.g., “123 Harv. L. Rev. 1”) to identify citations to a given article, and it displays those counts on its website. Unfortunately, in an impetuous rush, a few law school librarians, blog pundits, and others have speculated on how the rankings will be constructed, cataloguing various phantom “weaknesses” of the Hein “methodology”—leading to substantial misinformation among law school deans and others.
In the interest of space, I only address the major concerns regarding Hein and the potential ranking methodology here (but feel free to email me, firstname.lastname@example.org, if you have questions or concerns on other issues related to Hein or the proposed rankings).
A. Hein Fails to Capture Citations to Books, Treatises, Pre-Publications, and Other Non-Law Journal Publications.
First, some have criticized Hein for failing to capture citations to books, treatises, pre-publication versions (e.g., SSRN working papers), and journal articles outside law reviews, in contrast, for example, to the Gregory Sisk et al. methodology, which captures these citations using the Westlaw database. As an initial matter, based on my own work, a ranking based solely on Hein counts and the Sisk et al. ranking have a correlation of approximately 0.9. So contrary to one pundit’s assertion that failing to count these works would result in “garbage,” there would not be much change in the overall rankings if books, treatises, and non-law journals were not included.
Nonetheless, I am working with Hein to accurately count these citations (and to exclude citations to blog posts, emails, letters, and other non-academic work, which, in my view, is wrongly included in the Sisk et al. ranking) for the U.S. News ranking. Although these non-journal citation counts may not be available in Year 1 of the U.S. News ranking, they will be included by Year 2. Moreover, Hein offers many advantages over Westlaw citations, such as counting citations to articles with three or more authors, covering many more foreign law journals, and correcting for name misspellings, among others. Indeed, Hein has substantially greater law journal coverage than Westlaw, Lexis, or Bloomberg.
Of course, the citations to books, treatises, and the like that will be counted by Hein are those citations found in law journal articles. Citations found in books, non-law journal articles, treatises, and so forth will not be included. However, citations in law journal articles (and judicial opinions, see below) to these sources are a very good proxy for their impact within the legal academy. In this regard, although citations to non-law journal articles in other non-law journal articles will not be counted, arguably this is not a significant shortcoming, as the measure here is faculty reputation within law schools. For example, the fact that a faculty member may have published a highly cited scientific article prior to becoming a law professor will usually be irrelevant to that faculty member’s reputation as a law professor. Although some non-law to non-law citations are relevant (e.g., within economics, history, or political science journals), based on my prior research, the number of these citations at a school-wide level is sufficiently low as a relative percentage at any school so as not to be material.
B. Hein Does Not Count Citations from Courts
Some assert that Hein does not count citations to legal scholarship from courts. This is dead wrong. Hein includes Fastcase, a comprehensive database of court decisions, and tallies citations from judicial opinions to law journal articles. That said, the number of citations from court opinions to law journal articles is relatively low, and my own work confirms that including these numbers (at least without a substantial multiplier) would have essentially no effect on school-wide rankings. Nonetheless, for completeness, U.S. News has informed me that it will include citations from judicial opinions in the overall count.
C. Hein Can Be Gamed by Self-Citations and Other Mechanisms.
Some have claimed the Hein citations will be flawed because they include self-citations. Again, this is dead wrong, because (one last time for good measure) Hein’s current method of counting citations is not what will be used for the U.S. News citation ranking. Although Hein’s current counts include self-citations, Hein identifies them, and they will be removed prior to tallying total counts for the ranking. In any event, like the “non-issues” addressed earlier, removing them will not materially change the rankings, because self-citation rates in the legal academy are relatively low. Of those scholars with notable numbers of citations (over 1,000), the highest percentage of self-citations is under 10%. Nonetheless, for completeness, U.S. News has confirmed with me that self-citations will not be counted.
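To make concrete what removing self-citations involves, here is a minimal sketch; the record layout is my own illustration, not Hein’s actual schema, and treating a cite by any co-author as a self-citation is just one possible rule.

```python
# Minimal sketch of self-citation removal; the record layout is hypothetical.
from collections import Counter

# Each record: (set of citing authors, cited author)
citations = [
    ({"kagan"}, "stuntz"),
    ({"stuntz"}, "stuntz"),          # author citing her own work: dropped
    ({"burk", "lemley"}, "lemley"),  # self-citation via one co-author: dropped here
    ({"hansmann"}, "lemley"),
]

counts = Counter()
for citing_authors, cited_author in citations:
    if cited_author in citing_authors:
        continue  # remove self-citations before tallying
    counts[cited_author] += 1

print(counts)  # Counter({'stuntz': 1, 'lemley': 1})
```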
Another concern is that faculty who co-author will each receive a citation, and that when co-authors are at the same school, this somehow amounts to unfair “double-counting” (see here). This argument makes little sense in my view. A co-author is an author, and each is entitled to a citation, regardless of whether they are at the same school (this is the standard procedure in citation studies in the sciences and social sciences). If the allegation is that illegitimate co-authors will somehow be added to articles merely to increase cite counts at a school, this seems preposterous. Personal reputation matters more to legal scholars than the reputation of their school, and adding a non-author not only reduces the benefits to the real authors from receiving citations but, more importantly, risks greatly damaging those authors’ personal reputations if what is essentially fraud becomes known to the broader community. As such, I cannot imagine anyone engaging in such a practice, at least in numbers notable enough to affect the rankings.
A more serious concern would be authors increasing citations to the work of their colleagues at the same law school. This sort of gaming can be addressed through statistical techniques used in the sciences and social sciences that adjust citation counts based on the importance of the citing article and that control for “echo chambers” of citations, whether conscious or unconscious. I have been assured by Hein that it will actively analyze whether such gaming is occurring by determining “self-citations” at the school level on an ongoing basis. To the extent there is substantial inflation of school-level self-citations, U.S. News has informed me that it will actively work with Hein to incorporate the methods used in the sciences and social sciences to remove the effects of this gaming.
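As one illustration of the kind of school-level monitoring described above (this is my own simplified diagnostic, not Hein’s or U.S. News’s actual method), one could track what share of each school’s incoming citations originates from its own faculty and flag outliers:

```python
# Illustrative diagnostic for school-level "self-citation" inflation;
# my own simplified approach, not Hein's or U.S. News's actual method.
from collections import defaultdict

# Each record: (citing_school, cited_school); hypothetical data
citations = [
    ("Alpha Law", "Alpha Law"),
    ("Alpha Law", "Beta Law"),
    ("Beta Law",  "Alpha Law"),
    ("Alpha Law", "Alpha Law"),
]

THRESHOLD = 0.30  # arbitrary cutoff for flagging; a real analysis would calibrate this

total = defaultdict(int)       # citations received, by school
intramural = defaultdict(int)  # citations received from the school's own faculty

for citing, cited in citations:
    total[cited] += 1
    if citing == cited:
        intramural[cited] += 1

for school, n in total.items():
    share = intramural[school] / n
    status = "FLAG" if share > THRESHOLD else "ok"
    print(f"{school}: {share:.0%} intramural ({status})")
```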
Others have suggested that law schools could also game the rankings by hiring professors in highly cited fields rather than in fields with low citation rates (e.g., here). Given that the citation ranking, if it is included in the overall ranking at all, will count for only a low percentage of the overall U.S. News ranking, this seems very unlikely to me. The rankings may increase law schools’ reliance on citations as a plus-factor, but predicting that the ranking “could create disturbing incentives for faculty hiring and retention, as well as affect what professors write about” seems greatly exaggerated.
D. Hein’s Use of the Bluebook Format to Identify Citations Is Incomplete.
Some have argued that Hein misses citations because it uses Bluebook formats and variants (rather than author names) to identify citations, and the formats that appear in citing articles sometimes vary from those Hein uses to generate citation counts. Hein is well aware of this limitation, including the related limitation of using optical character recognition (OCR) to identify citation formats.
To correct this, Hein has confirmed with me that, with my input, it will also conduct searches by author name (including all school-provided variants of author names, as well as “fuzzy” variants) and by article title (and “fuzzy” variants within its database) to ensure completeness. Moreover, all citation formats will be carefully checked and all potential variants (including misspellings) added before final counts are determined. This process will not be terribly difficult from a data-analysis perspective, because the universe of faculty names that need to be checked for citation-format variants is not large (about 10,000 to 15,000). (Nonetheless, this process at the school-wide level is very unlikely to have any material effect, because any “measurement error” is likely to be random and small. As noted earlier, the correlation between my Hein-based ranking and the Sisk et al. ranking, based on different data sets and somewhat different methodologies, was a high 0.9.)
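To illustrate the two matching strategies just described (this is my own simplified sketch, not Hein’s actual pipeline), a format-based search hunts for Bluebook-style strings, while a fuzzy author search catches misspellings against a known faculty roster:

```python
# Illustrative sketch of the two matching strategies; not Hein's actual code,
# and the pattern below is deliberately simplified (law reviews only).
import re
from difflib import get_close_matches

text = "See 114 Harv. L. Rev. 2245 (2001); cf. Stunz, supra note 3."

# 1) Format-based matching: a simplified Bluebook-style pattern
#    (volume, journal abbreviation ending in "L. Rev.", first page)
bluebook = re.compile(r"\b(\d{1,4})\s+((?:[A-Z][a-z]*\.\s*)+L\.\s*Rev\.)\s+(\d{1,5})\b")
print(bluebook.findall(text))  # [('114', 'Harv. L. Rev.', '2245')]

# 2) Fuzzy author matching: catch misspellings against a known roster
roster = ["Stuntz", "Kagan", "Lemley", "Hansmann"]
print(get_close_matches("Stunz", roster, n=1, cutoff=0.8))  # ['Stuntz']
```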
E. The Rankings Will Include Non-Doctrinal Faculty and Therefore Improperly Skew Rankings
Finally, some have expressed concern that U.S. News will include (or not include) non-“doctrinal” tenured or tenure-track faculty in the counts, such as clinical, externship, and legal writing professors, as well as librarians. By and large, non-doctrinal faculty members have relatively low citation counts, and the most sensible approach would be to exclude all of them from the citation metrics. (In this regard, contrary to some assertions, it is not difficult to identify faculty with primarily clinical, writing, librarian, and similar titles.) Otherwise, schools with large numbers of tenured and tenure-track non-doctrinal faculty, such as Cornell, would be unfairly penalized in the rankings. I have confirmed with U.S. News that it will analyze the data for all schools to determine the impact of both including and excluding non-doctrinal faculty from the rankings. U.S. News has further informed me that it will work with experts on how best to handle non-doctrinal faculty in the final analytical framework, including, but not limited to, excluding such faculty altogether for all schools, excluding such faculty whose citation counts are less than both the school’s median and mean citation counts, or some other variation, so that the analysis appropriately takes into account schools whose non-doctrinal faculty publish infrequently or not at all. (Based on my own research, there appear to be no non-doctrinal faculty with recent citation counts over their school’s mean and median citation counts.)
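The median-and-mean variation mentioned above reduces to a simple filter. Here is a minimal sketch with hypothetical numbers; note that whether the school’s mean and median are computed over all faculty or doctrinal faculty only is unspecified in the proposal, so all faculty are assumed here.

```python
# Minimal sketch of one proposed variation: exclude non-doctrinal faculty
# whose citation counts are less than BOTH the school's mean and median.
# Data are hypothetical; mean/median computed over all faculty (an assumption).
from statistics import mean, median

faculty = [
    {"name": "A", "cites": 420, "doctrinal": True},
    {"name": "B", "cites": 150, "doctrinal": True},
    {"name": "C", "cites": 90,  "doctrinal": False},  # below both: excluded
    {"name": "D", "cites": 300, "doctrinal": False},  # above both: kept
]

counts = [f["cites"] for f in faculty]
m, md = mean(counts), median(counts)  # here: m = 240, md = 225

kept = [f for f in faculty
        if f["doctrinal"] or f["cites"] >= m or f["cites"] >= md]
print([f["name"] for f in kept])  # ['A', 'B', 'D']
```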
On the opposite end of the spectrum, some criticize the exclusion as unfairly penalizing schools whose non-doctrinal faculty have high citation counts, or as an affront to non-doctrinal faculty. On the first complaint, based on my previous work, there are very few non-doctrinal faculty with sizable citation counts, and those who have high counts are at law schools where their inclusion would not make a material difference in the school’s overall ranking. On the second complaint, in my view, it is not an affront to non-doctrinal faculty when they are not tasked with primary responsibility for research and writing. Perhaps there should be another U.S. News metric to rank the reputation of non-doctrinal programs and faculty (and to some extent the U.S. News program specialty rankings already do this), but that issue is separate from whether there should be some metric to rank the faculty reputation of schools overall.
Unfortunately, there is substantial misinformation being circulated about the U.S. News ranking. Perhaps U.S. News should have provided more information about the ranking to law schools. But this does not mean it is acceptable to engage in rank speculation, treat fiction as fact, and then reject the value of the proposed ranking. An informed understanding of the ranking dispels the so-called limitations of the Hein platform and shows that its citation counts can support a useful quantitative ranking of faculty reputation. And while not perfect, such a ranking will be more than “good” for students and law schools alike, especially when compared to the status quo.
Prior coverage of the U.S. News Faculty Scholarly Impact Rankings:
- U.S. News To Publish Law Faculty Scholarly Impact Ranking Based On 2014-2018 Citations (Feb. 13, 2019)
- U.S. News FAQ: Law School Scholarly Impact Rankings (Feb. 14, 2019)
- More Coverage Of The U.S. News Law Faculty Scholarly Impact Rankings (Feb. 15, 2019)
- Robert Anderson (Pepperdine), Some Contrarian Thoughts On The U.S. News Faculty Scholarly Impact Rankings (Feb. 18, 2019)
- Law Prof Commentary On The U.S. News Faculty Scholarly Impact Rankings (Feb. 19, 2019)
- U.S. News Updates FAQ On Law School Scholarly Impact Rankings To Address Inclusion Of Non-Doctrinal Faculty (Feb. 27, 2019)
- Derek Muller (Pepperdine), Gaming The New U.S. News Citation Rankings (Mar. 6, 2019)
- Jeff Sovern (St. John's), How The U.S. News Scholarly Impact Rankings Could Hurt Niche Subjects (Mar. 11, 2019)