Here at Texas Tech University School of Law, we are gearing up for our ABA site inspection. In the past few years the ABA has required law schools to create "Learning Outcomes." Here's the language from Standard 302:
A law school shall establish learning outcomes that shall, at a minimum, include competency in the following:
(a) Knowledge and understanding of substantive and procedural law;
(b) Legal analysis and reasoning, legal research, problem-solving, and written and oral communication in the legal context;
(c) Exercise of proper professional and ethical responsibilities to clients and the legal system; and
(d) Other professional skills needed for competent and ethical participation as a member of the legal profession.
This is the first year that site teams will be evaluating a law school's compliance with the new standard. We knew it was coming, and for the past three years I have served on a committee working to put the standard into operation. While I believe we have done a good job with it, I also believe the standard to be of questionable value.
I read with pleasure this New York Times op-ed by Molly Worthen, The Misguided Drive to Measure 'Learning Outcomes'. I especially like its concluding line: "[T]here's just no app for that."
I teach at a big state university, and I often receive emails from software companies offering to help me do a basic part of my job: figure out what my students have learned.
If you thought this task required only low-tech materials like a pile of final exams and a red pen, you’re stuck in the 20th century. In 2018, more and more university administrators want campuswide, quantifiable data that reveal what skills students are learning. Their desire has fed a bureaucratic behemoth known as learning outcomes assessment. This elaborate, expensive, supposedly data-driven analysis seeks to translate the subtleties of the classroom into PowerPoint slides packed with statistics — in the hope of deflecting the charge that students pay too much for degrees that mean too little.
It’s true that old-fashioned course grades, skewed by grade inflation and inconsistency among schools and disciplines, can’t tell us everything about what students have learned. But the ballooning assessment industry — including the tech companies and consulting firms that profit from assessment — is a symptom of higher education’s crisis, not a solution to it. It preys especially on less prestigious schools and contributes to the system’s deepening divide into a narrow tier of elite institutions primarily serving the rich and a vast landscape of glorified trade schools for everyone else.
Without thoughtful reconsideration, learning assessment will continue to devour a lot of money for meager results. The movement’s focus on quantifying classroom experience makes it easy to shift blame for student failure wholly onto universities, ignoring deeper socio-economic reasons that cause many students to struggle with college-level work. Worse, when the effort to reduce learning to a list of job-ready skills goes too far, it misses the point of a university education. ...
Producing thoughtful, talented graduates is not a matter of focusing on market-ready skills. It’s about giving students an opportunity that most of them will never have again in their lives: the chance for serious exploration of complicated intellectual problems, the gift of time in an institution where curiosity and discovery are the source of meaning.
That’s how we produce the critical thinkers American employers want to hire. And there’s just no app for that.