Is AI on Its Way to Gaining Rights?

No, it’s not science fiction. Here’s what we can expect from looking at how corporations have gained personhood.

Written by James Boyle
Published on Oct. 22, 2024

We already have artificial people with legal personality. They are called corporations.

Legal systems differentiate between natural persons — us, in all our fleshy, vulnerable glory — and legal persons, the entities on which the law confers some, but not all of the personhood rights of human beings. From the beginning of corporate personality, people have realized that there was something uncanny about the process — a kind of science fiction transformation of paper contracts and clusters of people into an entirely new, immortal, artificial being.

Yet this transformation is not performed by Dr. Frankenstein at the height of a lightning storm, but by dry legal prose. The critics of corporate personality find the result just as horrifying.

What Is a Corporation?

Justice Story, in one of the first Supreme Court cases discussing the legal rights of corporations, manages to capture all of these aspects. “A corporation is an artificial being, invisible, intangible and existing only in contemplation of law. Being the mere creature of law, it possesses only those properties which the charter of its creation confers upon it, either expressly, or as incidental to its very existence. These are such as are supposed best calculated to effect the object for which it was created. Among the most important are immortality, and, if the expression may be allowed, individuality.”

We do not empathize with the corporation, though some of us write love letters to its nimble productivity. We recognize no common humanity, no moral imperative to honor a shared consciousness with a badge of legal equality. We do not need John Searle and his Chinese Room experiment to realize that this is an artificial creation, or to puncture some illusion that the two beings have the same kind of consciousness.

The corporation is a person because we choose — for practical reasons — to call it one, to allow it to be sued, as Cohen pointed out above. Is this a likely route for AI personality?


Could AI Be Granted Legal Personality?

There might be very good economic reasons why — at a certain point — general purpose AIs could be granted legal personality merely because it would be a way to organize their use in the economy efficiently. This is not science fiction.

In 2016, a draft report of the European Parliament suggested that the E.U. needed to explore “the implications of all possible legal solutions” to possible harm done by robots, including:

[C]reating a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause, and applying electronic personality to cases where robots make smart autonomous decisions or otherwise interact with third parties independently.

This portion of the draft report attracted a storm of protest and ended up going nowhere, because some saw it as an attempt to give robots human rights. Mady Delvaux, the Luxembourgian MEP responsible for presenting the report to the public, said this was absolutely not the report’s intention.

“Robots are not humans and will never be humans,” Delvaux said. She explained that when discussing this idea of personhood, the committee that drafted the report considered the matter to be similar to corporate personhood — that is to say, making something an “electronic person” is a legal fiction rather than a philosophical statement.

But Burkhard Schafer, a professor of computational legal theory at the University of Edinburgh, says using the phrase was a mistake to begin with.

“People read about ‘electronic personhood’ and what they think is ‘robots deserve recognition’ like it’s a human rights argument,” he tells The Verge. “That’s not how lawyers think about legal personality. It’s a tool of convenience. We don’t give companies legal personality because they deserve it — it just makes certain things easier.”

Perhaps the protests are based on the intuition that, even if personality started merely as a convenient label, it might morph into something more. The legal history of corporate personality shows that this particular intuition might be a good one.

Earlier I quoted Justice Story waxing lyrical about this “artificial being, invisible, intangible, and existing only in contemplation of law. Being the mere creature of law, it possesses only those properties which the charter of its creation confers upon it.” Immediately after that passage, Story reassured his audience that creating an artificial being didn’t mean giving it political rights.

However right he was about the other aspects of corporate personhood, it is easy to see that Justice Story was wrong on this point. One has only to look at the continuing struggles over corporate speech and corporate campaign donations, or the furor over the constitutional rights accorded to corporations in cases such as Citizens United v. FEC. A similar furor is likely to attend debates about what personality for technologically created artificial entities would actually mean.

In other words, we must separate the question “does this being have any recognized legal status?” from the question “what rights does that status bring with it?”


What Would Personhood Mean for AI?

In this chapter, I turn to the history of — and the bitter political and legal fights over — corporate personhood, to see if that history might offer a hint of what the AI personhood debates have in store for us. I will make a fairly simple argument.

First, courts and scholars have never had a single, universally accepted, theory of corporate personhood. Instead, we have “muddled our way through,” frequently coming up with explanations and justifications for social and legal decisions only after those decisions were already made, and often ignoring the internal contradictions in our arguments. The same is likely to be true for legal personality claims for AI and transgenic species.

Second, even if we did have a coherent theory of personality, we have little agreement about the implications of that theory. Let us say we decide to give a corporation legal personality. Let us even stipulate that we do it under a single consensus justification: the “real entity” theory, or the “nexus of contracts” theory or the “legal fiction” theory. What is the implication of that decision for the actual legal rights and moral claims we will recognize as legitimate on the part of the corporation?

We do not agree on the answer. The same is likely to be true for legal personality claims for AI and transgenic species. If Hal is named a legal person — even if that is for practical and economic reasons and not because of moral sympathy — we will still be divided about whether it should have the rights of free speech and equal protection of the laws, whether it should have the right to lobby, to give campaign contributions or even, one day, to vote.

Third, the political fight about corporate personality and constitutional rights will immediately be drawn into the debate over rights for artificial intelligence.

Even if we give legal personality to technologically created artificial entities — and I will focus mainly on AI — we would have to face the same threshold questions that we do with business corporations.

Should we allow them to limit their liability, capping the possible losses that their creators might face in their personal capacity? What would be the threshold for legal recognition? When should we attribute the actions of the AI to the AI alone and when would we be required to “pierce the veil” and pin responsibility on those who originally created it?

Would AI legal personality be built around an assumption of profit-making enterprise, or could it be devoted to multiple ends: charitable, scientific or political, for example? If the AI could hold property in its own right, could it then pursue its own idiosyncratic goals or hobbies with that property? Would the artificial intelligence have only the rights and duties necessary to fulfill economic goals, such as the right to sue to enforce contracts or liability for its torts?

Alternatively, would it have a much broader set of political rights — freedom of speech and of movement, for example, together with rights of self-determination that would allow it to reject the goal for which it had been created and to pursue other projects? Rights to “equal protection of the law”? It turns out that the history of corporate personhood offers clues to possible answers. Unfortunately, they are not always reassuring.

This extract is from The Line: AI and the Future of Personhood by James Boyle, published on October 22, 2024. Reproduced with permission from the MIT Press. The Line is available under a Creative Commons license, so you can read the digital version for free.
