TaxProf Blog

Editor: Paul L. Caron, Dean
Pepperdine University School of Law

Wednesday, December 10, 2014

Is the Bar Exam Broken? Or Are Law Students Dumber?

Bloomberg, Is the Bar Exam Broken? Or Are Law Students Dumber?:

Law schools and the bar exam's creators agree: The plunge in test scores that hit several states this year is alarming, and it's probably the other side's fault. ...

[S]ome deans and legal experts are floating tentative theories about the historically bad results. Derek Muller, a law professor at Pepperdine University, says he has tested and rejected every explanation not tied to the test itself. Muller compared LSAT scores with bar exam scores and found that this year’s law grads should have done only slightly worse than last year. He also rejected the idea that a glitch in Examsoft, the software used to upload the July test, could have made the difference, because states that did not use Examsoft, such as Arizona and Virginia, still saw their pass rates dip. “By process of elimination, I’m running out of alternative explanations and looking more to the NCBE as a possibility,” he says.


Muller says the NCBE could have bungled scoring by grading the tests more harshly than in past years. The scoring specifics are complex, but basically the NCBE grades each test-taker on a scale that's determined by comparing their performance on certain questions with that of past test-takers. Muller readily admits he doesn't have data to test the theory, but he says it's possible the NCBE used particularly tough questions to scale this year's results.
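The scaling described above is a form of score equating: performance on a set of anchor questions shared with a past exam is used to map this year's raw scores onto the reference scale. The sketch below is a deliberately simplified illustration of one basic equating technique (mean-sigma linear equating), not the NCBE's actual procedure, and every number and function name in it is hypothetical.

```python
# Illustrative sketch of linear (mean-sigma) score equating.
# This is NOT the NCBE's actual method; all values are hypothetical.
#
# Idea: a set of anchor questions appears on both the current exam and a
# past reference exam. By matching the mean and spread of performance on
# those shared questions, a raw score on the current form can be mapped
# onto the reference form's scale.

def equate(raw_score, cur_anchor_mean, cur_anchor_sd,
           ref_anchor_mean, ref_anchor_sd):
    """Map a current-form score onto the reference scale by aligning the
    mean and standard deviation of the anchor-question distributions."""
    z = (raw_score - cur_anchor_mean) / cur_anchor_sd
    return ref_anchor_mean + z * ref_anchor_sd

# Hypothetical example: same raw score, two different anchor baselines.
scaled = equate(120, cur_anchor_mean=60, cur_anchor_sd=10,
                ref_anchor_mean=65, ref_anchor_sd=12)
print(scaled)  # -> 137.0
```

The point Muller raises maps onto the parameters here: if the anchor questions chosen for a given year were unusually difficult, the baseline statistics fed into the transformation would shift, and every test-taker's scaled score would move with them.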

https://taxprof.typepad.com/taxprof_blog/2014/12/is-the-bar-exam-broken.html

Legal Education | Permalink

Comments

If the NCBE did manipulate the scoring or calibration of the test, would there be any basis for civil liability?

Posted by: j | Dec 10, 2014 12:53:59 PM

Back in November, Jerry Organ at The Legal Whiteboard did an analysis that I find a bit more persuasive than Muller's (http://lawprofessors.typepad.com/legalwhiteboard/2014/11/what-might-have-contributed-to-an-historic-year-over-year-decline-in-the-mbe-mean-scaled-score.html), where he looks at a weighted average using different buckets of LSAT scores. He notes that there should have been a drop in scores, but not as big a drop as we have seen (although his data shows other years with disproportionate year-over-year changes between projected and actual scores). He also notes that there was a change in LSAC methodology (average vs. highest scores) beginning with the class of 2013 (matriculated Fall 2010), though it looked like he brushed that aside and didn't factor it in. The Examsoft debacle is also touched upon.

I've got to think that (1) there are more people in the low-LSAT buckets, however one wants to define that (averages and medians notwithstanding), (2) Examsoft may or may not have had a marginal effect on things, (3) the change in LSAC methodology likely modestly overestimates the "true" ability of the test takers, (4) LOL NCBE, and (5) there is probably some random variation going on here. It also may well be a self-selection issue (e.g., certain, more "able" test takers didn't take the bar at rates similar to past years, maybe due to job market difficulties, changing fields, who knows), though I have no idea how one would go about getting that data.

Anyway, those are my stream-of-consciousness style thoughts for now.

Posted by: No, breh. | Dec 10, 2014 4:24:38 PM