Robots are all around us. As we mere humans struggle with basic tasks, robots are churning through documents, making legal decisions, and administering public and private programs. Due to their ubiquity in legal processes, the patterns they manifest may significantly alter the course of legal development. Susan Morse, in her recent work on the topic, begins to unpack these patterns, exploring the incentives robots face when making legal decisions. Specifically, Morse argues that robots tend to follow the path of least resistance, favoring legal decisions that are less likely to be challenged. In doing so, these artificial agents shift the trajectory of laws’ development, although in what direction it is too early to say.
To lay the foundation for her argument, Morse offers a helpful typology of robot legal mistakes, divided between government and private market robots. She explains that both government and private robots might cause overcompliance or undercompliance with laws. The likelihood of challenge for each mistake type depends upon the robot’s wielder. For instance, a government robot that causes overcompliance by aggressively enforcing the law is very likely to face legal challenge. This is because a private entity stands to gain from litigating a law erroneously enforced against it. At the other end of the spectrum, a private market robot that causes overcompliance is least likely to face legal challenge because neither private parties nor the government stands to gain from challenging voluntary overcompliance. Similarly, a government robot that causes undercompliance—that is, allows some amount of evasion—is unlikely to be challenged. Aside from a few narrow cases such as environmental regulation, third parties are unlikely to gain from challenging undercompliance, or they lack the legal standing to do so.
The transparency and explainability of legal decisions play a key role in Morse’s argument. Defending against a legal challenge typically requires explaining the rationale behind one’s position. This is true for both human and robot decisionmakers. If a robot’s decision is challenged, its user can defend the decision by providing an explanation of its reasoning. Where a robot cannot explain its legal reasoning, however, defending a questionable position becomes difficult or impossible. This might happen, for example, if the robot’s decision relies upon a confidential algorithm. Thus, Morse reasons, robots that cannot explain their positions are more likely to favor mistakes that minimize the risk of challenge. For private robots, the result is overcompliance; for government robots, undercompliance.
An obvious question may arise here for the faithful TaxProf reader, namely: What does this have to do with taxes?
The answer: TurboTax. Robots play a massive role in tax filing and compliance via tax preparation software, which Morse considers at various points in the article. Her reasoning suggests that TurboTax likely causes overcompliance with tax laws because the software is designed to minimize the risk of legal challenge.
Morse suggests that this challenge-averse behavior will change the development of the law. While surely true, I wonder exactly how. Fewer legal challenges may mean sluggish development of legal interpretation. Or legal interpretation may develop based mostly on cases that fall outside the realm of robots. In a tax context, this means fewer tax cases based on middle-income, “normal” taxpayers—who are more likely to use tax software—and proportionately more tax cases based on wealthy or otherwise exceptional taxpayers—who are more likely to hire professional tax preparers. The resulting body of law could be more taxpayer friendly, if their advocates are zealous and judicious. It could be less taxpayer friendly, however, if these taxpayers tend to be unsympathetic or overly aggressive. The resulting body of law may also create greater uncertainty for unexceptional taxpayers, since the more common fact patterns will avoid legal challenge. While uncertainty might be seen as a negative outcome, it can also potentially increase compliance, reinforcing the tendency toward overcompliance.
In the tax context, these considerations are complicated further by the fact that robots play both sides of the game. IRS robots abound, perhaps most notably in the agency’s fraud and abuse detection system. The algorithm that IRS systems use to flag returns for audit is closely guarded. Morse’s reasoning may suggest that these robots will underenforce in order to avoid a challenge in which they cannot explain their reasoning. However, the process of selecting returns for audit is itself not subject to legal challenge, which prompts the question of how such robots fit into Morse’s framework. (Perhaps the answer is that they don’t.)
There is certainly plenty to consider here, and Morse offers a clever and engaging framework for doing so. I look forward to seeing these ideas develop in future work. If any of it is done by robots, I, for one, will not challenge it.