National Law Journal, Feedback on Feedback:
Giving law students more feedback and assessments such as practice exams translate into better grades, right?
Not so fast. An upcoming edition of the University of Detroit Mercy Law Review takes up the subject of so-called formative assessments, but the authors of two articles reach different conclusions about whether professor interaction and feedback actually improve student performance [Symposium, The Impact of Formative Assessment: Emphasizing Outcome Measures in Legal Education, 94 U. Det. Mercy L. Rev. 387-457 (2017)].
First up is an article by a quintet of professors from Ohio State University Michael E. Moritz College of Law who analyzed what happened when first-year law students in Constitutional Law took a voluntary practice exam question and received an estimated grade and written feedback [Ruth Colker, Ellen Deason, Deborah Merritt, Abigail Shoben & Monte Smith, Formative Assessments: A Law School Case Study, 94 U. Det. Mercy L. Rev. 387 (2017)].
Before I go into the results of the experiment, which covers three years of data, let me tell you who was most likely to take the voluntary exam—female students and those with high undergraduate grade-point averages. Law School Admission Test scores and law school grades weren’t relevant in predicting who was likely to participate.
So what did they find? The students who opted for the practice exam (which did not count toward their actual grades) performed better in Constitutional Law than those who skipped it. The average difference was the equivalent of a B+ for those who received the extra assessment, compared with a B for those who did not. (The difference held even after controlling for factors such as LSAT score, gender and race.) Perhaps even more interesting, the practice exam takers also outperformed non-takers in their four other courses that semester, including contracts and property.
That sounds like a good case for feedback improving performance. But the second article throws a little cold water on that notion. David Siegel, a professor at New England Law Boston, tested the theory that individualized, personal outreach to low-performing students would improve their grades [Should You Bother Reaching Out? Performance Effects of Early Direct Outreach to Low-Performing Students, 94 U. Det. Mercy L. Rev. 429 (2017)]. For two years, he sent personalized emails to students in his criminal law course who scored low on early quizzes, then held one-on-one meetings with them to review the quizzes, discuss study methods and address any larger issues they had with the class. His control group consisted of students who had scored slightly higher on those quizzes and did not receive the personalized outreach. In the end, there was no statistical difference between the final grades of the two groups, leading Siegel to conclude that the early interventions didn't boost grades.
My take: There's enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached: more feedback on tests and performance helps. But I think Siegel's study tells us that the manner and context in which that feedback is delivered make a difference. It's one thing to have a general conversation with low-performing students. But issuing a grade on a practice exam, even one that doesn't count toward the final grade, is, I suspect, a real wake-up call to students that they may need to step up and make some changes.