Blank and Osofsky test two automated legal systems: Emma, run by USCIS, and the Interactive Tax Assistant (ITA), run by the IRS. The systems differ somewhat. Emma is a chatbot that interprets natural language inputs; it usually does not allow its users to input specific information. ITA, on the other hand, uses a decision tree. Users asking a question of ITA supply information for the program to use in reaching an answer, through a series of yes-or-no questions or other means, for a slightly more bilateral and interactive experience.
But while Emma and ITA differ in some ways, they share many features. Both try to give answers succinctly and in plain language, and both try to reduce the amount of work users must undertake to get an answer. In doing so, however, they run into the problem of Simplexity. The danger, of course, is that unsophisticated users may rely on these simplified results.
For example, on the immigration side, Emma often does not advise people about the discretion that border agents can exercise in readmitting permanent residents after an extended stay outside the U.S. in certain emergency situations, like the COVID pandemic.
While it can take in more information, ITA often runs into similar problems. Although ITA takes greater inputs from users in answering their questions, it is ultimately a decision tree of yes-or-no answers. But many tax questions, as readers know, require nuance and a great deal of factual examination. The authors present two examples: one involving a car salesperson moving into management who seeks to deduct MBA expenses, and the other a family with a child who receives an athletic scholarship. The former produces a taxpayer-favorable result from ITA, the latter an unfavorable one. But both suffer because ITA neither receives the full set of facts nor analyzes the nuances; it instead flattens and simplifies the many grey areas. To compound the problem for a taxpayer, relying on ITA's answers is not effective in asserting a reasonable cause defense against penalties. To make matters worse, neither ITA nor Emma cites the law that might give the user a sense of the greater complexity.
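The flattening the authors describe is inherent in the decision-tree form itself, which a minimal sketch can illustrate. The questions and outcomes below are hypothetical stand-ins loosely inspired by the MBA-deduction example, not ITA's actual content or logic:

```python
# A minimal sketch of a yes/no decision tree of the kind ITA uses.
# Every question and answer here is a hypothetical illustration.

DECISION_TREE = {
    "question": "Was the expense required by your employer?",
    "yes": {
        "question": "Does the education qualify you for a new trade or business?",
        "yes": "Not deductible.",  # a flat, final answer -- no room for nuance
        "no": "Deductible.",
    },
    "no": "Not deductible.",
}

def ask(tree, answers):
    """Walk the tree with a list of 'yes'/'no' answers; return the flat result."""
    node = tree
    for a in answers:
        if isinstance(node, str):  # already reached a leaf
            break
        node = node[a]
    return node

# A user answering "yes" then "no" gets a short, authoritative-sounding
# answer, with none of the factual probing a practitioner would do.
print(ask(DECISION_TREE, ["yes", "no"]))  # Deductible.
```

However many branches such a tree has, every path ends in a leaf that states a single conclusion; the grey areas the authors worry about have nowhere to live in this structure.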
Blank and Osofsky then examine how these systems are made, interviewing numerous agency officials. There are a few striking findings. One is that agency officials want these succinct messages and simple interfaces: they know that engagement drops when responses are longer, more detailed, and more nuanced, which encourages short, simple responses. Indeed, one agency official noted that people demand those answers because that is how the Internet, with things like Twitter, has evolved; we demand short responses, not detailed explanations. Another finding is that because counsel often reviews the responses, officials assume that the answers are always "right." Thus, agency officials do not seem to think that there is a Simplexity problem. Additionally, while officials want more users and engagement, they hold the contrary view that users will not actually rely on the information.
Automated legal guidance does have benefits for agencies. They can now respond to inquiries 24/7. Sometimes the guidance tool can replace a call to a call center, and even be faster, as at the IRS, where calls are routinely dropped or go unanswered because of poor staffing. These tools provide answers quickly and simply and are thus extremely useful for answering simple questions.
But the authors emphasize the downsides that emerged from their interviews and their interactions with automated legal guidance. Most problematically, automated legal guidance could do almost the opposite of what it seeks to achieve: it can increase, not decrease, inequities in access to justice.
The reason for the increased inequities stems from Simplexity itself. While advice for those without access to high-powered attorneys is flattened by the need for concise answers and easy-to-use inputs, those with sophistication and means will be able to find advice with all of the nuances. While those who rely on automated legal advice may not have a reasonable cause defense, those who can pay for legal advice will. To make matters worse, not only is automated legal guidance potentially more accessible than even the forms and instructions that Blank and Osofsky warned can cause these Simplexity disparities, but its short answers carry a tone that feels more authoritative. Given that most people today consume information in these kinds of Internet-sized bites, automated legal guidance may seem more real and certain to many unsophisticated people than even a well-explained, nuanced answer from an expert.
The authors then offer a series of suggestions. Most center on reconciling the speed and efficiency of automated legal advice with the need to maintain shades of grey. These include more disclaimers and citations to legal authorities, including those that reveal these shades of grey. They also focus on ensuring that people can rely on the advice when raising affirmative defenses and on developing better management and oversight processes for agencies.
This piece is incredibly useful because it forces us to think much more carefully about how agencies should communicate with the public. Especially in areas like tax, which touches everyone but also has layers of nuance and complexity, agency communication and the tools we use have the potential to exacerbate inequities not only in access to justice but in simply getting the answer right. The work thus builds nicely on the authors' prior work on Simplexity. It also forces us to consider how we can communicate in a way people can understand while still conveying the uncertainty and nuance required of any position.
Additionally, while the piece does not deal directly with ChatGPT, the days when tools like ChatGPT give out guidance to people contacting an agency may not be all that far off. These large language model chatbots may be better at discussing the grey areas. And yet one of the problems with these tools is that they, too, speak with a certain simplicity and authority. Indeed, we know that while ChatGPT often gets legal issues right, it can also get them spectacularly wrong in ways that only trained people looking carefully can catch. In such a world, the equity concerns Blank and Osofsky raise could continue to spin out. It thus behooves us, both inside and outside of government, to think hard about this article's proposals for safeguards as these tools become cheaper and more common in the world of tax and the broader legal profession.