Following up on my previous posts (links below): Derek Muller (Pepperdine), Gaming Out Hein Citation Metrics in a USNWR Rankings System:
There are two reasons to be worried—non-random biases and law school administrative reactions. ... [M]y colleague Professor Rob Anderson notes one virtue of the Sisk rankings that is not currently present in Hein citation counts: “The key here is ensuring that Hein and US News take into account citations TO interdisciplinary work FROM law reviews, not just citations TO law reviews FROM law reviews as it appears they might do. That would be too narrow. Sisk currently captures these interdisciplinary citations FROM law reviews, and it is important for Hein to do the same. The same applies to books.”
We simply don’t know (yet) whether these institutional biases exist or how they’ll play out. But I have a few preliminary graphics on this front.
It’s not clear how Hein will measure things. Sisk-Leiter computes its score with a formula of two times the mean citation count plus the median citation count. The USNWR metric may use mean citations plus median citations, plus publications.
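To make the Sisk-Leiter formula concrete, here is a minimal sketch of the calculation (2 × mean citations + median citations) over a faculty's per-professor citation counts. The citation numbers below are invented for illustration, not actual data for any school.

```python
from statistics import mean, median

def sisk_leiter_score(citation_counts):
    """Sisk-Leiter style score: 2 * mean citations + median citations."""
    return 2 * mean(citation_counts) + median(citation_counts)

# Hypothetical faculty with seven members (invented citation counts)
faculty_citations = [120, 85, 60, 45, 30, 22, 10]
print(round(sisk_leiter_score(faculty_citations), 1))
```

Doubling the mean makes the score sensitive to a few highly cited "star" faculty, while the median term anchors it to the typical member, which is part of why outliers can move a school's position.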
Understanding that Sisk-Leiter is an approximation for Hein at this point, we can show the relationship between the top 70 or so schools in the Sisk-Leiter score (and a few schools we have estimates for at the bottom of the range), and the relationship of those schools to their peer scores.
This is a remarkably incomplete portrait for a few reasons, not the least of which is that the trendline would change once we add the roughly 130 schools with scores below about 210 to the matrix. But very roughly, we can see that peer score and Sisk-Leiter score correlate, with a few outliers: schools outperforming their peer score via Sisk-Leiter sit above the line, and those underperforming sit below it.
But this is also an incomplete portrait for another reason. USNWR standardizes (scales) each score, which means it places the scores in relationship with one another before totaling them. That’s how it can add a figure like $80,000 of direct expenditures per student to an incoming class median LSAT score of 165. Done this way, we can see just how much impact changes (either natural improvement or attempts to manipulate the rankings) can have. This is emphatically the most important way to think about the change. Law school deans who see that citations are a part of the rankings and reorient themselves accordingly may well be chasing after the wind if costly school-specific changes have, at best, a marginal relationship to improving one’s overall USNWR score. ...
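The standardization point can be sketched in a few lines: z-scoring each metric (subtracting the mean, dividing by the standard deviation) puts dollars and LSAT points on the same unitless scale, so they can be summed. The figures below are invented for illustration; USNWR's actual weights and inputs are not public in this form.

```python
from statistics import mean, pstdev

def standardize(values):
    """Z-score a list: rescale to mean 0, population std dev 1."""
    mu, sigma = mean(values), pstdev(values)
    return [(v - mu) / sigma for v in values]

# Four hypothetical schools (invented data)
expenditures = [80_000, 65_000, 50_000, 40_000]  # $ per student
lsat_medians = [170, 166, 162, 158]

z_exp = standardize(expenditures)
z_lsat = standardize(lsat_medians)

# After standardization the two metrics are comparable and can be
# added; a school's composite moves only as far as its z-scores move.
composite = [e + l for e, l in zip(z_exp, z_lsat)]
```

This is why a costly school-specific improvement on one input can have only a marginal effect on the composite: what matters is the school's movement relative to the distribution of all schools, not the raw change.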
[A]ll this means is that for the vast majority of schools, we’ll see little change—perhaps some randomness in rounding or year-to-year variations, but I don’t project much change at all for most schools.
Someone with more sophistication than I could then try to game out how these fit into the overall rankings. But that’s enough gaming for now. We’ll wait to see how the USNWR Hein citation figures come out this year, then we might play with the figures to see how they might affect the rankings.
Prior coverage of the U.S. News Faculty Scholarly Impact Rankings: