TaxProf Blog

Editor: Paul L. Caron, Dean
Pepperdine University School of Law

Thursday, June 21, 2018

Does Feedback Really Improve Law Student Performance?

National Law Journal, Feedback on Feedback:

Giving law students more feedback and assessments such as practice exams translates into better grades, right?

Not so fast. An upcoming edition of the University of Detroit Mercy Law Review takes up the subject of so-called formative reviews, but the authors of two articles reach different conclusions about the ability of professor interaction and feedback to improve student performance [Symposium, The Impact of Formative Assessment: Emphasizing Outcome Measures in Legal Education, 94 U. Det. Mercy L. Rev. 387-457 (2017)].

First up is an article by a quintet of professors from Ohio State University’s Michael E. Moritz College of Law who analyzed what happened when first-year law students in Constitutional Law took a voluntary practice exam question and received an estimated grade and written feedback [Ruth Colker, Ellen Deason, Deborah Merritt, Abigail Shoben & Monte Smith, Formative Assessments: A Law School Case Study, 94 U. Det. Mercy L. Rev. 387 (2017)].

Before I go into the results of the experiment, which covers three years of data, let me tell you who was most likely to take the voluntary exam—female students and those with high undergraduate grade-point averages. Law School Admission Test scores and law school grades weren’t relevant in predicting who was likely to participate.

So what did they find out? The students who opted for the practice exam—which was not counted toward their actual grades—did perform better in Constitutional Law than those who skipped it. The average difference was the equivalent of a B+ for those who received the extra assessment, compared to a B for those who did not. (The difference held true even after controlling for factors such as LSAT score, gender and race.) Perhaps even more interesting, the practice exam takers performed better than non-takers in their four other courses the same semester, which included contracts and property.

That sounds like a good case for feedback improving performance. But the second article throws a little cold water on that notion. David Siegel, a professor at New England Law Boston, tested the theory that individualized, personal outreach to low-performing students would improve their grades [Should You Bother Reaching Out? Performance Effects of Early Direct Outreach to Low-Performing Students, 94 U. Det. Mercy L. Rev. 429 (2017)]. For two years, he sent personalized emails to students in his criminal law course who scored low on early quizzes, and he held follow-up one-on-one meetings with them to review the quizzes, discuss study methods, and address any other larger issues they might have with the class. His control group consisted of students who had scored slightly higher on the quizzes than the low-performing students, and they didn’t receive the personalized outreach emails. But in the end, there was no statistical difference between the final grades of the two groups, leading Siegel to conclude that the early interventions didn’t boost grades.

My take: There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached: more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context in which that feedback is delivered make a difference. It’s one thing to have a general conversation with low-performing students. But issuing a grade on a practice exam, even one that doesn’t count toward the final grade, is, I suspect, a real wake-up call to students that they may need to step up and make some changes.

Legal Education, Scholarship

Comments

A key issue is whether a student is willing to put in the extra work. I give my 1L criminal law students six quizzes during the semester. I send emails to the students who score in the bottom 25% on the first two quizzes and encourage them to meet with me. At most 30% do. I have had better luck with students who write out an answer to a practice essay and then meet with me to review it. Student motivation matters.

Posted by: Shawn Boyne | Jun 21, 2018 9:21:11 AM

Study after study by general education researchers has concluded that frequent formative assessment with feedback improves student performance. See How to Help Students From Disadvantaged Groups Succeed in Law School, 1 Texas A&M L. Rev. 83. One scholar has declared, “Assessment methods and requirements probably have a greater influence on how and what students learn than any other single factor.” Alison Bone, Ensuring Successful Assessment 3 (Roger Burridge & Tracey Varnava eds., National Centre for Legal Education 1999). [http://www.ukcle.ac.uk/resources/assessment/bone.pdf]

Of course, the effectiveness of the formative assessment depends on the type of assessment and the student's motivation. Assessment that requires effortful retrieval and complex analysis is more effective than multiple choice quizzes that mainly involve recognition.

Posted by: Scott Fruehwald | Jun 21, 2018 11:51:45 AM

Thanks so much for this post. I’m interested in the impact of formative assessments, so I took a closer look at the second study.

Unless I’m missing something, the author of the second study has mischaracterized his own results. In his study, he offered formative assessments to the students who had the lowest scores on the first quiz that he administered. For comparison’s sake, he identified a control group composed of students who had performed “slightly better” on the first quiz.

He then reports that by the end of the semester, the first group of students had precisely matched the second group of students. As he says, “by the last quiz the average scores of both groups were almost identical,” and there was “no statistically significant difference” between them. (If you’re interested, you can see this pattern clearly in figures 2a and 2b on pages 434 and 435: the “red” students begin at the lowest point, but they eventually catch up with the “blue” students.)

Again, unless I’m missing something, this experiment was successful. Although the study has obvious methodological limits, the findings suggest the (modest) effectiveness of his own (modest) interventions.

But instead of just saying that, the author claims that his formative assessments were ineffective, because there was no statistically significant difference between the two groups. This is rather odd: The author has completely eliminated the difference between two groups of students—students whom he had previously distinguished based on performance—presumably through his own interventions.

The author obscures this finding by claiming that the two groups were “as close as possible,” and that the second group had performed “only slightly better” than the first group on the first quiz. But that does not change his results. The fact remains that the two groups started out differently, and they ended up the same—presumably because of the author’s own interventions.

I’m not sure what would motivate a teacher to provide feedback to students and then write an article claiming that this feedback was ineffective. In any event, I am not persuaded by his analysis of his data.

Posted by: Clifford Rosky | Jun 21, 2018 1:08:50 PM

BTW: I agree with Shawn that “student motivation matters.” But in my experience, the lowest-performing students are hindered by shame rather than laziness. Each year, low-performing students tell me that they are “too embarrassed” to send me a practice exam because it’s not “good enough.” Once I assure them that I am here to help them improve, not to judge them, more of them are willing to send me practice exams. Unsurprisingly, my feedback seems to have a greater impact on this group of students, because they have more room for improvement. So I’ve learned to be cautious in making assumptions about why students are not seeking feedback.

Posted by: Clifford Rosky | Jun 21, 2018 1:11:55 PM

The challenge with evaluating the effect of formative assessments is the inability to subject the same students, during the same semester, to parallel tracks of having and not having formative assessments. So, for me, the next best approach was the opportunity I had in the early 1990s, when I started using mandatory, graded formative assessments (in addition to voluntary, ungraded ones), to the chagrin of many of my then colleagues. That let me compare the performance of students in previous classes with the performance of students in the first two classes required to go through formative assessments: I took several short questions from the first group’s exams, changed irrelevant information (names, dates, etc.), and analyzed the differences. My conclusion persuaded me to continue with mandatory, graded formative assessments through the semester (and it’s a delight to see legal education finally getting on the bandwagon). Of course, students received both group and individual feedback.

Voluntary formative assessments are useful, but they fail to reach or help the student who is stuck in the course-grade-depends-on-one-final-exam-and-I-will-cram-for-it approach and thus ignores the voluntary opportunity. Why students get stuck in this approach is a question I’ll set aside, though certainly the more they encounter mandatory formative assessments, the less they will consider the voluntary ones to be worthy of dismissal. Another helpful piece of the analysis is to take the seating chart and replace each name with the course grade earned by the student; every time I do that, I get the same overall result, and it ties into the question of motivation and self-selection. It is why, when course enrollment is less than room capacity, I declare the back row or rows off limits.

Posted by: James Edward Maule | Jun 21, 2018 1:22:22 PM

I’ve been teaching a good while now. The students who take good advantage of opportunities for feedback and guidance are the ones who least need it, and the ones who need it most never take advantage. That is true not only of academic help but also of other help in the law school. If the placement office provides sessions on improving interview skills or job strategies, it is the top students who will take advantage of the sessions. Sometimes I wonder if the difference between the top and the bottom is that those at the top fret about being at the bottom, while those at the bottom are either unaware of where they are or comfortable with it.

Posted by: PTTAX | Jun 22, 2018 2:01:30 PM