Following up on yesterday's post, Why We Must Stop Relying On Student Ratings Of Law School Teaching — Like The University Of Oregon Is Doing: Chronicle of Higher Education op-ed: In Defense (Sort of) of Student Evaluations of Teaching, by Kevin Gannon (Grand View University):
[W]e know student evaluations matter. Perhaps the better question is: Should they? Given their many demonstrable and potential flaws, why would we still use them to gather feedback on teaching and learning? It turns out the answer is more complicated than appearances suggest.
Certainly, students are not experts qualified to evaluate us on, say, whether we used the best and most applicable course readings. But they are experts on what they experienced and learned in a course, and they ought to have a voice. Just because their feedback is sometimes misused doesn’t mean it’s invalid or unnecessary.
In fact, course evaluations — despite their many problematic elements — may still provide the most accurate information available on teaching effectiveness. Elizabeth Barre, whose research into student evaluations — in particular, the metastudies of the subject — is essential reading, observed that "we have not yet been able to find an alternative measure of teaching effectiveness that correlates as strongly with student learning. In other words, they may be imperfect measures, but they are also our best measures."
And therein lies the rub: We need to assess teaching, and we often have to rely on not the best, but the least worst, option. ...
Given the well-known limitations of student evaluations, it behooves every department or institution to be careful how they are used. The best faculty-evaluation systems are multilayered and employ a number of different measures.
To be honest, student evaluations of faculty instruction ought more properly to be referred to as "ratings," since "evaluation" connotes a more complex, informed process than what’s possible via these instruments. In assessment terms, student evaluations are only indirect measures of teaching effectiveness, and an assessment process that depends solely on indirect measures will not produce accurate information.
Instead, student evaluations ought to be treated as supplemental material. They should complement — but never overshadow — faculty narratives, peer observations, reflective dialogue, and sample teaching materials. Even more important, course ratings should be used equitably; their documented bias against specific faculty groups has to be part of the calculus. To assume that all student-evaluation data can be unproblematically used in the same way for every faculty member ignores substantial evidence to the contrary, and undermines the evaluation process.
Departments and institutions have an ethical obligation to be discerning in evaluating faculty members. Flawed processes create flawed results. It’s incumbent on us to evaluate teaching with a process that centers faculty voices and experience, and ensures that data will be interpreted with attention to context.
These suggestions won’t make student ratings any less flawed but should help us reckon with the nature of those flaws and then proceed accordingly. As is often the case in teaching, the key is knowing how to use our tools appropriately.