Paul L. Caron
Dean


Sunday, February 25, 2018

NY Times Op-Ed: The Misguided Drive To Measure 'Learning Outcomes'

Here at Texas Tech University School of Law we are gearing up for our ABA site inspection.  In the past few years, the ABA has required law schools to create "Learning Outcomes."  Here's the language from Standard 302:

A law school shall establish learning outcomes that shall, at a minimum, include competency in the following:
(a) Knowledge and understanding of substantive and procedural law;
(b) Legal analysis and reasoning, legal research, problem-solving, and written and oral communication in the legal context;
(c) Exercise of proper professional and ethical responsibilities to clients and the legal system; and
(d) Other professional skills needed for competent and ethical participation as a member of the legal profession.

This is the first year that the site teams will be evaluating a law school's compliance with the new standard.  We knew it was coming, and for the past three years I have been on a committee trying to put this standard into operation. While I believe we have done a good job with it, I also believe the standard to be of questionable value.

I read with pleasure this New York Times op-ed by Molly Worthen, The Misguided Drive to Measure 'Learning Outcomes'.  I especially like its concluding line:  "[T]here's just no app for that." 

I teach at a big state university, and I often receive emails from software companies offering to help me do a basic part of my job: figure out what my students have learned.

If you thought this task required only low-tech materials like a pile of final exams and a red pen, you’re stuck in the 20th century. In 2018, more and more university administrators want campuswide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment. This elaborate, expensive, supposedly data-driven analysis seeks to translate the subtleties of the classroom into PowerPoint slides packed with statistics — in the hope of deflecting the charge that students pay too much for degrees that mean too little.

It’s true that old-fashioned course grades, skewed by grade inflation and inconsistency among schools and disciplines, can’t tell us everything about what students have learned. But the ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education’s crisis, not a solution to it. It preys especially on less prestigious schools and contributes to the system’s deepening divide into a narrow tier of elite institutions primarily serving the rich and a vast landscape of glorified trade schools for everyone else.

Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results. The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education. ...

Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.

That’s how we produce the critical thinkers American employers want to hire. And there’s just no app for that.

https://taxprof.typepad.com/taxprof_blog/2018/02/nyt-op-ed-about-learning-outcomes.html

Bryan Camp, Legal Education, Teaching | Permalink

Comments

Measuring learning outcomes is one of the biggest contributors to climate change academia has ever produced. I have to imagine over half the Amazon rainforest has been cut down just for Learning Outcome reports.

Posted by: Dale Spradling | Feb 25, 2018 10:42:20 AM

It's not a gift if the student is paying for it.

Posted by: Michael | Feb 26, 2018 4:35:34 AM

If you want to understand what's gone wrong with our educational system at all levels, read L.M. Montgomery's marvelous Anne of Green Gables (1908). It describes an era when one of the most prestigious roles anyone could assume, at any age, was that of a teacher. The work was only for the best and most industrious.

It wasn't a unionized job from which you couldn't be fired. It wasn't an intrusion into what really mattered—doing research that almost no one reads.

It's also a quite uplifting, moving story.

Posted by: Michael W. Perry | Feb 26, 2018 5:57:09 AM

How many 18-year-olds can afford to spend four years not "focusing on market-ready skills" but instead on "serious exploration of complicated intellectual problems"? Given the huge mental health advantage of having a good job, why put down "trade schools"?

Posted by: Daniel Messing | Feb 26, 2018 6:03:18 AM

If you think it’s tough for law schools, try doing it for K-12 education. At least law schools have semi-motivated, semi-intelligent students to work with and a narrow curriculum focus (compared to K-12).

Posted by: Carolynn | Feb 26, 2018 8:48:41 AM

What a dishonest bit of "trying to have it both ways"! We can't be a trade school with measured outcomes. You see, what employers really want is deep critical thinkers that only we can produce, and only if you let us do it our way. As my grandfather used to say, "they should have left it in the horse."

Posted by: Roger Sweeny | Feb 26, 2018 10:03:51 AM

I think you are missing the point of the exercise. We do this in Engineering as part of accreditation.
Every attempt to measure learning is fraught with issues, to be sure. But the attempt can do at least two things. First, it can tell you where students in the current class have problems that can be addressed immediately. Second, we use it to see which methods work well in the classroom and which don't. We change things if necessary, a sort of continuous product-improvement effort. Doing the evaluations more or less forces us to look at what works and what doesn't.
If you don't do something on this order, how do you propose to claim that you have a good/successful program? How do you keep the same professor from doing the same thing year after year?

Posted by: Bill | Feb 26, 2018 1:38:00 PM