Monday, March 6, 2023

Amar: What Law School Rankings Can Learn From College Sports Rankings

Vikram David Amar (Dean, Illinois), Some Thoughts on the Recent Controversies Concerning Law (and Med) School Rankings: Part I in a Series:

[T]here is no doubt that certain specific aspects of U.S. News’ methodology have been the focus of fire by law and medical school critics. One recurring source of criticism has been that these rankings give a false sense of precision because they rank hundreds of schools ordinally. ...

Another very loud source of criticism of U.S. News these days—that it attaches weight to the standardized test scores of enrolled students (as an indicator of the academic strength of the student body)—has, I must admit, puzzled me a bit. I fully understand that standardized test scores have a disparate impact along racial lines and for that reason must be used in admissions with a great deal of care. But all the medical schools and law schools that are boycotting themselves attach a fair amount of significance to standardized test scores as part of their admissions processes (at least for now). Medical schools are under no regulatory requirement to do so, yet they have consistently made MCAT scores a relevant factor. And even as the ABA considers a proposal (recently remanded from the ABA’s House of Delegates to the Council of the Section of Legal Education and Admissions to the Bar) to remove the requirement that law schools use standardized tests in admissions, a group of 60 or so law deans, including many of the most progressive and diversity-focused law deans in the country, sent a letter to the ABA last fall opposing the elimination of such a requirement (at least without more study of the matter).

As the letter pointed out, “standardized tests—including the LSAT—can be useful as one of several criteria by which to assess whether applicants are capable of succeeding in law school and to enhance the diversity of our incoming classes. . . . Used properly, as one factor in a holistic admissions process, this index score can help identify students who are capable of performing at a satisfactory level.” Moreover, the letter said, if standardized test scores were not used (and if some schools stop using them, others will be pressured into following suit), then other factors, such as college GPAs, may assume even greater weight in admissions decisions, and those factors might have an equal or greater disparate impact along racial lines.

In this regard, it should be remembered that standardized test scores came into wide use a few generations ago in significant part because other admissions criteria—college attended, letters of recommendation, extracurricular activities, etc.—seemed to many to provide unfair advantages to people who come from well-educated, well-heeled, and well-connected families. Standardized test scores were added to the mix in part to level the playing field.

Vikram David Amar (Dean, Illinois), More on Ranking Law Schools, and What Can be Learned from Ranking of Sports Teams: Part Two in a Series:

I suggest at least four ways in which rankings of academic institutions can borrow from innovations in college sports rankings. ...

[M]y first suggestion is that academic-institutional ratings should make better use of numerical data as well, and that the “voters”—those who fill out academic reputational surveys—should consult such data with greater frequency and sophistication when casting ballots. But just as controlling for things like strength of schedule is hard in sports rankings, so too is comparing numerical assessments of academic-institutional performance challenging. Two examples drawn from the law-school world are job placement numbers and bar passage numbers.

The ABA collects, and US News weighs somewhat heavily, the percentage of a law school’s graduating class that is employed in full-time, long-term (that is, slated to last a year or more) jobs that require or benefit greatly from having a law degree. Seems fair enough; law schools ought to be launching not just good careers but distinctively legal careers. ... [S]chools that are located in states (like California) where highly pedigreed graduates from all over the country are vying for jobs in tight markets (like San Francisco) are going to have lower placement rates than schools in less-populated states where job seekers are not competing against nearly as many top-performing law graduates. Trying to account for these differences in markets is not easy, but not doing it makes meaningful comparison hard too.

Or consider bar passage rates (another criterion on which US News compares schools). ... California has not only a high cut score (and thus a lower pass rate on that account, something US News has now controlled for), but also a pool of test takers that is much stronger than the national average (because many of the most ambitious and talented graduates around the country want to live there). As of now, US News does not account for that latter factor, and so schools that have a large number of graduates who take the California bar are at a disadvantage (both as to the bar-exam and placement-rate aspects of US News).
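To make the two adjustments concrete, here is a minimal sketch in Python with entirely invented school names and numbers: dividing a school's pass rate by its jurisdiction's overall pass rate controls for the cut score (roughly the adjustment Amar says U.S. News now makes), while scaling by a hypothetical pool-strength factor would credit schools whose graduates test alongside an unusually strong pool (the adjustment he says is currently missing).

```python
# Hypothetical sketch: normalizing bar-passage rates for jurisdiction
# difficulty and test-taker pool strength. All data below are invented.

# Each school: (first-time pass rate, jurisdiction where most grads test)
schools = {
    "School A": (0.78, "CA"),
    "School B": (0.85, "TX"),
}

# Jurisdiction-wide first-time pass rates (controls for the cut score).
jurisdiction_pass_rate = {"CA": 0.67, "TX": 0.80}

# Invented proxy for pool strength, relative to the national average
# (1.0 = average). A strong pool inflates the jurisdiction baseline,
# so schools there look worse by the simple ratio than they should.
pool_strength = {"CA": 1.05, "TX": 1.00}

for name, (rate, juris) in schools.items():
    # Ratio > 1 means the school beats its jurisdiction's baseline.
    vs_jurisdiction = rate / jurisdiction_pass_rate[juris]
    # Scaling up by pool strength credits schools whose graduates
    # compete against an unusually talented test-taker pool.
    adjusted = vs_jurisdiction * pool_strength[juris]
    print(f"{name}: raw {rate:.0%}, adjusted index {adjusted:.2f}")
```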

The second way in which academic rankings can learn from sports rankings concerns timing. The College Football Playoff Rankings (which in recent years have determined which four teams vie for the two playoff games that lead to the national championship game) do not come out until the second half of the season. ... What does this mean for law school rankings? Perhaps that they shouldn’t be done every year. How much real change occurs year to year anyway? Perhaps ranking schools every three or five years (using data averages drawn from the whole three- or five-year period) would be more sound. ... (I understand that ranking less frequently may result in less revenue for US News, but it might also bolster the publication’s credibility—even moving to every-other-year rather than every-year rankings would be an improvement.)
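A hedged sketch of the averaging idea, with made-up schools and yearly figures: each ranking input is averaged over a three-year window before any composite score is computed, which damps one-year noise of the sort annual rankings amplify.

```python
# Sketch: average each ranking input over a three-year window before
# computing any composite score. Schools and figures are invented.

yearly_data = {
    "School A": {"employment": [0.91, 0.88, 0.93], "bar_passage": [0.82, 0.85, 0.80]},
    "School B": {"employment": [0.86, 0.90, 0.87], "bar_passage": [0.88, 0.84, 0.86]},
}

def window_average(values):
    """Average a metric over the whole multi-year window."""
    return sum(values) / len(values)

for school, metrics in yearly_data.items():
    averaged = {metric: round(window_average(vals), 3) for metric, vals in metrics.items()}
    print(school, averaged)
```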

That leads me to a third suggestion: if averaging rankings criteria over a time period (see above) makes sense, so too does averaging rankings across different methodologies. In college basketball, for example, the March Madness Tournament selection committee makes use of multiple analytic systems (and is understandably somewhat guarded about its own processes) as well as “eyeball” tests like the AP rankings. Just as, in politics, a poll of polls is often more accurate than any individual poll, so too the answer to dissatisfaction with the US News rankings may be to support various other rankings so that no one ranking system dominates.
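A "poll of polls" for school rankings might look like the following sketch, in which a school's consensus position is simply its mean ordinal rank across several independent systems; the system names and ranks are hypothetical.

```python
# Sketch of a "poll of polls": average a school's ordinal rank across
# several independent ranking systems. All names and ranks are invented.

rankings = {
    "System 1": {"School A": 12, "School B": 15, "School C": 9},
    "System 2": {"School A": 10, "School B": 18, "School C": 11},
    "System 3": {"School A": 14, "School B": 13, "School C": 10},
}

schools = {school for system in rankings.values() for school in system}
mean_rank = {
    school: sum(system[school] for system in rankings.values()) / len(rankings)
    for school in schools
}

# Lower mean rank = better consensus position.
for school, rank in sorted(mean_rank.items(), key=lambda kv: kv[1]):
    print(f"{school}: consensus rank {rank:.1f}")
```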

All of that leads me to my final suggestion, also drawn from college basketball. One of the great things about KenPom and other analytic rankings of college hoops is that each consumer can, with the push of a button, rank teams based on the criteria they find most important. That is, they can create a personalized ranking. One big drawback of US News is not just that some of its component factors might be flawed, but also that the weighting of the various factors (while perhaps defensible) is somewhat arbitrary. ... [S]houldn’t it be easy for that consumer to adjust the weights of the competing variables? Just as the answer to bad speech in America should usually be more and better speech, so too the answer to bad rankings might be more (not fewer), better, and more well-tailored rankings.
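A personalized, KenPom-style ranking of the sort Amar describes could be as simple as letting the reader supply the weights. The sketch below uses invented metric scores (normalized to 0–1) and illustrative weight profiles; nothing here reflects U.S. News's actual formula.

```python
# Sketch of a personalized ranking: the consumer supplies the weights
# instead of accepting one fixed formula. All data below are invented.

school_scores = {
    "School A": {"employment": 0.92, "bar_passage": 0.85, "reputation": 0.70},
    "School B": {"employment": 0.84, "bar_passage": 0.90, "reputation": 0.88},
}

def personalized_rank(scores, weights):
    """Rank schools by a weighted sum using the consumer's own weights."""
    total = sum(weights.values())
    composite = {
        school: sum(metrics[m] * w for m, w in weights.items()) / total
        for school, metrics in scores.items()
    }
    return sorted(composite.items(), key=lambda kv: kv[1], reverse=True)

# A reader who cares mostly about job outcomes:
print(personalized_rank(school_scores, {"employment": 0.6, "bar_passage": 0.2, "reputation": 0.2}))
# A reader who weights reputation heavily gets a different order:
print(personalized_rank(school_scores, {"employment": 0.2, "bar_passage": 0.2, "reputation": 0.6}))
```

The point of the push-button design is that the same underlying data can yield different, equally defensible orderings depending on the weights a given consumer chooses.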

U.S. News coverage:

Boycott

U.S. News Response to Boycott

https://taxprof.typepad.com/taxprof_blog/2023/03/amar-what-law-schools-rankings-can-learn-from-college-sports-rankings.html

Law School Rankings, Legal Ed Rankings, Legal Education | Permalink