Monday, December 9, 2019
Joshua Blank (UC-Irvine) and Leigh Osofsky (North Carolina) present Automated Legal Guidance at Boston University today as part of its Tax Policy Colloquium hosted by David Walker:
The use of artificial intelligence as an aid to law enforcement has received significant attention from legal scholars. For instance, the introduction of machine learning to identify likely crime hot spots has caused scholars to consider questions such as how to apply Fourth Amendment standards, how to prevent racial discrimination, and how to preserve transparency and accountability. Yet scholars have not addressed the government’s increasing use of artificial intelligence for another purpose—providing guidance to the public regarding legal entitlements and obligations. For example, the Internal Revenue Service encourages taxpayers to seek answers regarding various tax credits and deductions not from human IRS representatives, but rather from its online “Interactive Tax Assistant.” Likewise, the U.S. Army directs individuals with questions about enlistment to its virtual guide, “Sgt. Star,” and U.S. Citizenship and Immigration Services suggests that potential green card holders and citizens speak with its interactive chatbot, “Emma.”
This Article examines how the rise of artificial intelligence in administrative guidance is producing a new phenomenon: automated legal guidance. Through online tools, virtual assistants, and other technology, governments attempt to help the public understand and apply the law by automating the guidance-giving function. After introducing this development, this Article makes several descriptive claims and then addresses normative concerns. Using the Interactive Tax Assistant as a case study, we show that automated legal guidance relies upon “simplexity,” whereby the government presents complex law as though it is simple, without actually simplifying the underlying law. While automated legal guidance offers gains in efficiency and ease of use, it also causes the government to present the law as simpler than it is, leading to less precise advice and potentially inaccurate legal positions. We argue that simplexity in automated legal guidance is more powerful and pervasive than in static publications because it is reciprocal, personalized, and instantaneous. Further, we argue that understanding the costs as well as the benefits of current forms of automated legal guidance is essential to evaluating the even more sophisticated, but also more opaque, automated systems that governments are likely to adopt in the future.
The Article then considers important normative questions that automated legal guidance raises and offers suggestions for how government officials and policymakers should respond. First, it argues that governments should prevent automated legal guidance from widening the gap in access to legal advice between high-income and low-income individuals. Second, it argues that governments should introduce more robust oversight and review processes for automated legal guidance. Finally, it argues that the law should evolve to allow individuals to avoid certain penalties and sanctions when they have taken actions or claimed legal positions in reliance upon automated legal guidance.