TaxProf Blog

Editor: Paul L. Caron, Dean
Pepperdine University School of Law

Saturday, May 6, 2017

Financial Times:  Artificial Intelligence Closes In On The Work Of Junior Lawyers

Financial Times, Artificial Intelligence Closes in on the Work of Junior Lawyers:

After more than five years at a leading City law firm, Daniel van Binsbergen quit his job as a solicitor to found Lexoo, a digital start-up for legal services in the fledgling “lawtech” sector.

Mr Van Binsbergen says he is one of many. “The number of lawyers who have been leaving to go to start-ups has skyrocketed compared to 15 years ago,” he estimates. Many are abandoning traditional firms to pursue entrepreneurial opportunities or join in-house teams, as the once-unthinkable idea of routine corporate legal work as an automated task becomes reality.

Law firms, which tend to be owned by partners, have been slow to adopt technology. Their traditional and profitable model involves many low-paid legal staff doing most of the routine work, while a handful of equity partners earn about £1m a year [$1.3m].

But since the 2008 financial crisis, their business model has come under pressure as companies cut spending on legal services, and technology replicated the repetitive tasks that lower-level lawyers at the start of their careers had worked on in the past.

“The 2020s will be the decade of disruption,” says Professor Richard Susskind, co-author of The Future of the Professions: How Technology Will Transform the Work of Human Experts. He believes there is growing demand from executives who control corporate legal budgets to cut costs by taking advantage of the savings offered by technology. ...

So far, firms say, technology has not meant job losses. But Prof Susskind believes a wave of lay-offs is to come — law firms are still experimenting with AI instead of rolling it out across their offices.

About 114,000 legal jobs are likely to be automated in the next 20 years, a 2016 study by Deloitte predicted. Technology has contributed to the loss of about 31,000 sector jobs. The report predicted another 39 per cent of jobs were at “high risk” of being made redundant in the next two decades. ...

For now, legal services providers are happy about tech disruption. Introducing AI into the work of junior lawyers is allowing them to do more interesting work, their bosses say. ...

But Prof Susskind cautions that true technological disruption of legal services could take a generation — until the current crop of equity partners at leading firms retires. “It is very hard to convince a room of millionaires they have their business model wrong,” he says.

http://taxprof.typepad.com/taxprof_blog/2017/05/financial-timesartificial-intelligence-closes-in-on-the-work-of-junior-lawyers.html

Legal Education | Permalink

Comments

This quote from your blog on November 8, 2015, states the issue succinctly, when a Frank F posts: “I’d also put the burden on the ‘automation of the professions’ crowd to explain precisely why the professions, but not all occupations, should be automated.”
Alas, the professions, medicine especially but also law, are particularly well-suited for automation precisely because they’re the products of intelligence and reason carefully applied to the solution of problems. It’s their very ‘if this, then that’ approach that lends itself so well to automation. The better a profession can explain why it does something, the more easily much of its work can be automated.
About a decade or so ago, I recall some physicians who treated cancer being surprised when an expert in decision-making studied their practices and concluded that everything they were doing was the result of the answers to some combination of 150 questions. If A, B, and C were true, then the doctors would start treatment X. If A, B, and D were true, then it’d be treatment Y instead.
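To see how mechanical that is, the whole arrangement can be written down as a lookup table from answers to treatments. Here is a minimal sketch in Python; the findings A through E and the treatment names are invented placeholders standing in for those 150 questions, not any real oncology protocol.

    # A minimal sketch of "if A, B, and C, then treatment X" decision-making.
    # Findings and treatment names are hypothetical placeholders.

    # Each rule maps a set of positive findings to a treatment.
    RULES = {
        frozenset({"A", "B", "C"}): "treatment X",
        frozenset({"A", "B", "D"}): "treatment Y",
    }

    def choose_treatment(findings):
        """Return the treatment the rules dictate for this combination of
        positive findings, or None if no rule covers it."""
        return RULES.get(frozenset(findings))

    print(choose_treatment({"A", "B", "C"}))  # -> treatment X
    print(choose_treatment({"A", "B", "D"}))  # -> treatment Y
    print(choose_treatment({"A", "E"}))       # -> None: a doctor must decide

Once the answers determine the treatment, the table, not the clinician’s intuition, is doing the deciding; the interesting cases are the ones that fall through to None and still need a human.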
That hardly surprised me, since I had worked for some 16 months caring for children with leukemia. In concert with doctors around the world, our doctors worked very hard to rationalize treatment as much as possible. For instance, most chemotherapies work best at killing the cancerous cells, called ‘blasts,’ while they are in the bloodstream. They’re much less effective when those blasts hide in the fluid of the central nervous system (CNS), from which they can emerge and cause a relapse. While I was caring for those kids, our doctors were researching the best way of going after those blasts in the CNS.
That meant randomizing those children into two groups. One received the best currently known treatment. The other received a treatment there was reason to believe might be better. If the latter proved better, then it became the standard treatment and research moved on. And if some kids benefited most from the former treatment and others from the latter, then that distinction became part of the decision tree. But keep in mind that what was being done was removing, as much as possible, the subjective from decision-making. A doctor wasn’t to use treatment X when research had shown that, for a child in a particular situation, treatment Y was better.
The result had interesting implications for higher-level professions versus lower-level occupations. The latter are inherently much more subjective. Our chemotherapy virtually destroyed these children’s immune systems for a time. It was my ‘occupation’ as what is today called a nurse tech to watch them for a temperature spike that meant an infection. My orders were to check them every four hours, but I injected into that a subjective decision I made on my own. Anytime a child’s behavior changed even slightly, I suspected an infection and took his or her temperature. That mattered because catching an infection on Monday rather than Tuesday could make the difference between life and death.
Once I’d spotted that temperature spike, automatic protocols came into play, including a resident or one of our senior physicians issuing all-too-predictable orders. That temperature spike was followed by a blood draw, so the infecting organism could be cultured and the antibiotic most effective against it determined. Just after the blood draw, the child would be given Tylenol to bring his or her temperature down. The culture and discovery of the best antibiotic took about two days, so in the intervening time, our physicians would guess at the best antibiotics to give based on recent experience. Once that culture and sensitivity came back, it typically dictated what antibiotic should be used.
Notice what’s unusual about that process. In my lowly nurse tech position, I was the only one exercising any ‘feeling in the gut’ judgment. Once I found that spike over 38.5°C, everything was determined. Although I knew little about the complexities of treating leukemia, every other step in the process was the medical equivalent of legal boilerplate. Tell me what the current initial broad-spectrum antibiotics were and I could have written those orders as well as anyone. And no, that’s not because I was so intelligent. It was because a lot of intelligent people were working very hard to remove anything subjective and personal from the decision-making. Indeed, I once went to a conference in which our doctors met with an infectious disease doctor. Our cancer doctors wanted an “if this, then order that antibiotic” statement from him. He kept saying that it was more complex than that. There’s a strong and legitimate urge in medicine to eliminate uncertainties.
But, and this is the key factor, the very fact that good cancer doctors act on the results of the latest research and the treatment protocols that follow from it means there’s the potential for them to be replaced in actual, day-to-day care by computerized decision-making. If a doctor is thinking “if this, then that,” then so can a computer. Good medicine is quite literally medicine that can be automated.
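As a purely illustrative sketch of that claim, the fever protocol above fits in a few lines of Python. The 38.5°C threshold and the order of steps come from the story; the function name and the antibiotic placeholder are invented, and none of this is medical advice.

    # A hypothetical sketch of the fever protocol described above.
    # The threshold comes from the story; all names are placeholders.

    FEVER_THRESHOLD_C = 38.5  # the spike that triggers the protocol

    def on_temperature_reading(temp_c, best_antibiotic=None):
        """Return the protocol-dictated orders for one temperature reading.
        best_antibiotic is the culture-and-sensitivity result, available
        roughly two days after the first spike."""
        if temp_c <= FEVER_THRESHOLD_C:
            return []  # no spike, no orders
        orders = [
            "draw blood for culture and sensitivity",
            "give Tylenol to bring the temperature down",
        ]
        if best_antibiotic is None:
            # Until the culture is back, fall back on an empiric guess.
            orders.append("start empiric broad-spectrum antibiotics")
        else:
            # Once the report is in, it dictates the antibiotic.
            orders.append("switch to " + best_antibiotic)
        return orders

    print(on_temperature_reading(37.9))                  # -> []
    print(on_temperature_reading(39.2))                  # -> empiric orders
    print(on_temperature_reading(39.2, "antibiotic Z"))  # -> targeted switch

What the sketch cannot capture is the one subjective step in the story: noticing that a child’s behavior had changed and taking the temperature early.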
I want to stress, however, that this does not mean doctors can be removed from the loop and computers left running our healthcare system. What I’ve said doesn’t apply, for instance, when the standard treatment doesn’t seem to be working. At that point, a subjective judgment needs to be made about what else to try.
Discovering new treatments also depends on human agency. For a good illustration of that, read John Laszlo’s _The Cure of Childhood Leukemia_. It describes the first two decades of that effort, the 1950s and 1960s, which took childhood leukemia from a disease that was always fatal, and brutally so, to one where the cure rate for the most common type is today approaching 90%. No computer could have done that. It required doctors who persisted at giving care even though all their best efforts might only give a child a few more weeks of life. And it meant children and their parents being brave enough to endure brutal treatments, well aware that what they were suffering wouldn’t save their own lives but might, at some future date, save another child’s. Try to inject computers into that story, and you’ll see how worthless they can be.
The same can be true in law. Just before he died, a friend of mine who was a parole officer told me about one of the proudest deeds of his life. One of his clients was a thirteen-year-old boy who already had a history of brutally attacking young girls. My friend had a sense that the problem lay in the almost sexual relationship the boy had with his mother. The boy was taking his anger at that out on girls his own age.
It happened that, because of the boy’s youth, the maximum permitted sentence was three months in a state reformatory. Based on a lifetime of experience, my friend felt that at least a year was needed. The Harvard-trained lawyer he was opposing thought he was crazy to even try, but he went ahead anyway. In his presentation to the judge, he offered almost two dozen possible sentences and carefully refuted all but one, that one-year sentence. He then argued that it would be a “manifest injustice” to sentence the boy to anything less. After hearing him, the judge ruled in his favor, telling him that in twenty years on the bench, she’d never heard a point of view better argued. His efforts had achieved the almost impossible.
For medicine, that early research on leukemia, and for law, that boy’s case, illustrate something that automation can never do. A computer can only do what it is programmed to do. It cannot say “this cannot be” and then do whatever is necessary to prevent a child’s death or a legal injustice.
There’s also a fact that will never change. For our human problems we want human assurances. We don’t want to hear what a computer is saying. We want to hear it from a person like ourselves, and it needs to be someone we trust.
I saw that vividly when I later transferred to the hospital’s teen unit and gave post-op care to girls who’d had spinal fusions. Most such surgeries are done over the summer by talented surgeons who have no time for patients once the surgery is done. I found it amusing that many of those girls seemed to regard me, their “Dr. Mike,” almost with awe. “Please,” I felt like saying, “Don’t be silly. I know virtually nothing about your surgery.”
Only later did I realize that what they liked was my calmness about their recovery. They needed someone to tell them, in words or deeds, “Your surgery is over. This problem that’s haunted your life for years is now past.” For all my ignorance of complex back surgeries, I was able to do that. Indeed, after sixteen months of seeing adorable children I liked relapse and die, I was delighted to have patients whose only problem was the boredom of waiting for their discharge. I was as relieved to be caring for them as they were to have me as their caregiver.
And feelings like those are important, particularly in professions that impact people’s lives as deeply as medicine and law. We need to know that someone—not some impersonal thing—is fighting for us. That can never be automated.
—Michael W. Perry, author of My Nights with Leukemia (about those children), Embarrass Less (about embarrassment in teen boys and girls) and Senior Nurse Mentor (about the horrors of hospital politics)

Posted by: Michael W. Perry | May 7, 2017 9:29:18 AM

Oh goody. As a society we're working towards 100% unemployment and Terminators.

Posted by: Emerson | May 7, 2017 10:13:32 AM

Ah, me! I remember all that dog work of many, many years ago. Nice to let the AI take care of it. But...sometimes it takes a flash of genius to see the way around a problem. I hope the programmers include it.

Posted by: Jim Brock | May 7, 2017 10:59:43 AM

Hmm...perhaps the source article is more specific, but I'll start believing AI (in law or in most other places) is more reality than hype/click-bait when *specific work* is discussed rather than vague generalities.

The excerpt doesn't really (or at all) specify the actual "dog work" that is on the verge of being done by "AI."

Posted by: cas127 | May 7, 2017 2:53:34 PM

I have been hearing this story for 10-15 years. Each year, it becomes about 1% more true, which means it might become a major force over the next generation or two. Those hawking imminent AI have something to sell.

Posted by: anon | May 8, 2017 10:14:10 AM