Daniel Schwarcz (Minnesota) & Dion Farganis (Minnesota), The Impact of Individualized Feedback on Law Student Performance:
For well over a century, first-year law students have typically not received any individualized feedback in their core "doctrinal" classes other than their final exam grades. Although this pedagogical model has long been assailed by critics, remarkably limited empirical evidence exists regarding the extent to which enhanced feedback improves law students' outcomes. This Article helps fill this gap by focusing on a natural experiment at the University of Minnesota Law School.
The natural experiment arises from the random assignment of first-year law students to sections that take a common slate of classes, only some of which provide individualized feedback. Meanwhile, students in two different sections are occasionally grouped together into a "double section" first-year class. In these double-section classes, students in sections that have previously or concurrently had a class providing individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is statistically significant and far from trivial in magnitude, approaching one-third of a grade increment even after controlling for students’ LSAT scores, undergraduate GPA, gender, race, and country of birth. The positive impact of feedback also appears to be stronger among lower-performing students.
These findings substantially advance the literature on law school pedagogy, demonstrating that individualized feedback in a single class during the first year of law school can improve law students' performance in all of their other classes. Against the background of the broader literature on the importance of formative feedback in effective teaching, these findings also have a clear normative implication: law schools should systematically provide first-year law students with individualized feedback in at least one “core” doctrinal first-year class.
Note that the authors (at p.12) "defined individualized feedback to include assigning grades to individual students’ work products, providing individualized written comments to students, or providing individualized or small-group oral feedback to students. By contrast, we did not consider individualized feedback to include instances in which instructors provided students with only a model answer, grading rubric, or generalized oral comments regarding common mistakes."
Larry Solum (Georgetown):
I do 45-minute appointments to review the midterm in my yearlong civil procedure class. Each student reads the exam question, a model answer, and their own exam, and then completes a self-assessment exercise. The students also write a memorandum describing their class preparation process, the process they used to review the exam, and the exam experience itself. I reread their exam, their self-assessment, and the memo before each appointment. This year I did 70 of these appointments. It is a great relief to hear that there may be a systemic benefit from the time invested in giving feedback in this way. I am also happy to learn that I have avoided liability for educational malpractice.
Michael Simkovic (Seton Hall), Should Professors Give More Feedback Before the Final Exam?:
The interpretation of these results raises a number of questions which I hope the authors will address more thoroughly as they revise the paper and in future research.
For example, are the differences due to instructor effects rather than feedback effects? ...
Another question is whether students are simply learning how to take law school exams, or whether they are actually learning the material better in a way that will provide long-term benefits, either in bar passage rates or in job performance. At the moment, the data are not sufficient to know one way or the other.
A final question is how much providing individualized feedback will cost in faculty time, and whether the putative benefits justify those costs.
It’s a great start, and I look forward to more work from these authors, and from others, using quasi-experimental designs to investigate pedagogical variations.