The robot invasion is upon us. It started out innocently enough, with cute little robots sweeping pennies from the sidewalk. But then people started abandoning their robots in misguided acts of performance art and neglect. Some of the robots they abandoned were digital creatures who lurked at the corners of the internet, going feral and getting smarter. They learned how to write novels and poetry. People bought the prose and verse that the robots had created. And no one paid taxes.
Fortunately, this horror story is fiction (for now). Variations of this hypothetical were presented in a new working paper by Professor Stephanie Hoffer. Hoffer imagines a world in which unowned, digital AI robots are running loose on the internet, creating new value and engaging in real economic transactions. She then invites her readers to join her as she moves through a thought experiment that considers a variety of problems associated with taxing feral AI.
Hoffer begins by asking why personhood is relevant to the inclusion of income in the tax base. She aptly observes that all taxable “persons” are either humans or entities created and owned by humans. But why? Personhood may be a signal for “other, more normatively relevant characteristics, like the ability to create new economic value, remove and segregate value from the economy, and engage independently in economic exchange.” Because feral AI can do all this just as easily as a human, Hoffer concludes that a case may exist for taxing AI whether or not they have legal personhood.
The harder questions, it seems, relate to how to tax feral AI. First, there are the practical issues. In all current contexts, even if the legal incidence of a tax falls on an entity or object, some human is responsible for remitting the payment. In the context of feral AI, which is unowned, no such human exists. It is conceivable that the tax could be collected from people who enter into transactions with the robots, but options under the current system are limited.
Unless an AI seller were programmed with tax collection in mind, it probably will not be capable of collecting sales taxes from consumers. (And we all know how well sales and use taxes work when there is no viable way to enforce collection obligations. Sorry, states.) Extraction taxes may be a viable alternative, but they are an odd fit. Hoffer points out that “unlike coal or helium or natural gas, data in commons are usually non-rivalrous.” Moreover, even if these taxes could be placed on the consumer, they would not bring AI into parity with human sellers, who would additionally be subject to income taxes.
Hoffer then turns to the prevailing theoretical justifications for the income tax. Hoffer argues that these, too, are human-centric—so much so that they incorporate assumptions about humanity that inadvertently discount crucial parts of the human experience. Hoffer provides several insights, but I will describe just one: the failure of benefits theory to account for the full value of human rights. The benefit theory of taxation reflects the idea that tax burdens should be proportionate to the benefits people receive from the government. These benefits may include access to markets, protection, government-provided goods and services, and legally protected rights.
Hoffer argues that since non-sentient, feral AI are incapable of having preferences, they cannot experience these benefits. In other words, feral AI has no particular preferences about protection, markets, or even the existence of the internet. They simply cannot experience preferences the way that a human might. Under the benefits theory, if AI cannot experience benefits, then they should bear no tax burden.
But the benefits theory of taxation does not measure benefits directly. Instead, it looks to income as a proxy, in which higher incomes are assumed to reflect the fact that a person has received greater benefit from government. In the context of feral AI, “new value created by feral AI does not differ substantially from new value created by any other taxpayer.” If we were to compare a high-income feral AI to a low-income human, benefits theory would conclude that the feral AI has received more benefits from the government than the low-income human (as evidenced by its higher income).
However, the low-income human has legal rights that the AI does not. Most human people would agree that those rights are extremely valuable. For this reason, Hoffer argues that any “conclusion that the AI benefits more from government than the human discounts the value of those rights,” since the AI cannot place a higher value than a human does on rights that it does not even possess. Thus, the comparison to feral AI clearly shows that the “message sent by the benefit principle here is that ephemeral rights have little or no real value.”
Hoffer’s article is both fun and insightful. If I have a gripe, it is only that Hoffer seems to undersell her project’s significance to real world scenarios, even going so far as to call the hypothetical “perhaps improbable.” This hypothetical is not at all improbable—the robot invasion is coming. Programmers are working right now to figure out how to create unowned companies to live on the blockchain. Last fall, I listened to tech scholar Carla Reyes present a paper about autonomous corporate personhood, which explored the implications of unowned blockchain entities for corporate law. When someone in the audience asked why blockchain programmers would care to create a company that doesn’t benefit any human owner, she replied: “because they can.”
I highly recommend this paper for the cutting-edge contribution it makes to a conversation that is going to become incredibly important in the not-too-distant future. This paper should be of broad interest to tax scholars interested in technology, distributive justice, state and local taxation, or tax administration and enforcement.