TaxProf Blog

Editor: Paul L. Caron, Dean
Pepperdine University School of Law

Thursday, May 24, 2018

USC Eliminates Student Evaluations In Faculty Promotion And Tenure Decisions

Following up on my previous post, Why We Must Stop Relying On Student Evaluations Of Law School Teaching — Like The University Of Oregon Is Doing:  Inside Higher Ed, Teaching Eval Shake-Up:

Research is reviewed in a rigorous manner, by expert peers. Yet teaching is often reviewed only or mostly by pedagogical non-experts: students. There’s also mounting evidence of bias in student evaluations of teaching, or SETs — against female and minority instructors in particular. And teacher ratings aren’t necessarily correlated with learning outcomes.

All that was enough for the University of Southern California to do away with SETs in tenure and promotion decisions this spring. Students will still evaluate their professors, with some adjustments — including a new focus on students’ own engagement in a course. But those ratings will not be used in high-stakes personnel decisions.

The changes took place earlier than the university expected. But recent study after study suggesting that SETs advantage faculty members of certain genders and backgrounds (namely white men) and disadvantage others was enough for Michael Quick, provost, to call it quits, effective immediately.

“He just said, ‘I’m done. I can’t continue to allow a substantial portion of the faculty to be subject to this kind of bias,’” said Ginger Clark, assistant vice provost for academic and faculty affairs and director of USC’s Center for Excellence in Teaching. “We’d already been in the process of developing a peer-review model of evaluation, but we hadn’t expected to pull the Band-Aid off this fast.”

While Quick was praised on campus for his decision, the next, obvious question is how teaching will be assessed going forward. The long answer is through a renewed emphasis on teaching excellence in terms of training, evaluation and incentives.

“It’s a big move. Everybody’s nervous,” Clark said. “But what we’ve found is that people are actually hungry for this kind of help with their teaching.”

SETs — one piece of the puzzle — will continue to provide “important feedback to help faculty adjust their teaching practices, but will not be used directly as a measure in their performance review,” Clark said. The university’s evaluation instrument also was recently revised, with input from the faculty, to eliminate bias-prone questions and include more prompts about the learning experience. 

Umbrella questions such as, “How would you rate your professor?” and “How would you rate this course?” — which Clark called “popularity contest” questions — are now out. In are questions on course design, course impact and instructional, inclusive and assessment practices. Did the assignments make sense? Do students feel they learned something? ...

While some institutions have acknowledged the biases inherent in SETs, many cling to them as a primary teaching evaluation tool because they’re easy — almost irresistibly so. That is, it takes a few minutes to look at professors’ student ratings on, say, a 1-5 scale, and label them strong or weak teachers. It takes hours to visit their classrooms and read over their syllabi to get a more nuanced, and ultimately more accurate, picture.

Yet that more time-consuming, comprehensive approach is what professors and pedagogical experts have been asking for, across academe, for years. A 2015 survey of 9,000 faculty members by the American Association of University Professors, for instance, found that 90 percent of respondents wanted their institutions to evaluate teaching with the same seriousness as research and scholarship.

Legal Education


Rather than using student evaluations, scribbled in the last minutes of a course on some silly form, perhaps they should solicit evaluations from alumni a year or two after they enter the practice of law.

Posted by: ruralcounsel | May 24, 2018 5:14:03 AM

How were they using them?

I'll submit the utility of these forms is diagnostic: they help identify teachers whose skills are seriously under par. Department heads and others should be able to read between the lines to identify professors being slammed for severe grading, however it is tarted up as complaints about skills. Just comparing the grade distribution in the professor's class to the norm in the department should be a clue. The SET forms could be a trigger to refer the professor to a remedial center for performance improvement, if you have one on campus, and/or a trigger for admonishments delivered at the time of the third-year review. If you've got a real troublesome character, it should be manifest in face-to-face complaints to the department head, in complaints to campus ombudsmen, in complaints to the provost, in the syllabi, or in the grade distributions. The SET forms would be supplemental to that.

"Rather than using student evaluations, scribbled in the last minutes of a course on some silly form"

I've never seen a 'silly' form. The utility of doing it then and there is that the problems you've had with the class are fresh in your mind.

One supplement or alternative to SET forms is to film classes at irregular intervals and/or have the department head attend now and again.

Posted by: Art Deco | May 24, 2018 12:15:56 PM

If California colleges don't want to listen to their "customers," those customers will move to colleges that care about serving the people who keep them in business.

Posted by: Woody | May 24, 2018 1:29:31 PM

Being a good little Democrat just became a larger chunk of the hiring/promotion decision then.

Posted by: Anon | May 25, 2018 3:52:28 AM