UPDATED BY
Brennan Whitfield | Aug 23, 2023

At the moment, works created solely by artificial intelligence — even if produced from a text prompt written by a human — are not protected by copyright.

When it comes to training AI models, however, the use of copyrighted materials has so far been treated as fair game. That’s because the fair use doctrine permits the use of copyrighted material under certain conditions without the permission of the owner. But pending lawsuits could change this.

Generative AI has significantly altered the way we live, work and create in just a few months. As a result, the deluge of AI-generated text, images and music — and the process used to create them — has prompted a series of complicated legal questions. And they are challenging our understanding of ownership, fairness and the very nature of creativity itself.


Can AI Art Be Copyrighted?

It has long been the position of the U.S. Copyright Office that there is no copyright protection for works created by non-humans, including machines. Therefore, the product of a generative AI model cannot be copyrighted.

The root of this issue lies in the way generative AI systems are trained. Like most other machine learning models, they work by identifying and replicating patterns in data. So, in order to generate an output like a written sentence or a picture, a model must first learn from the real work of actual humans.

If an AI image generator produces art that resembles the work of Georgia O’Keeffe, for example, that means it had to be trained using the actual art of Georgia O’Keeffe. Similarly, for an AI content generator to write in the style of Toni Morrison, it has to be trained with words written by Toni Morrison.

Legally, these AI systems — including image generators, AI music generators and chatbots like ChatGPT and LaMDA — cannot be considered the author of the material they produce. Their outputs are simply an amalgamation of human-made work, much of which has been scraped from the internet and is copyright protected in one way or another.

So, how do we reconcile the rapidly evolving artificial intelligence industry with the knotty particulars of U.S. copyright law? That is something creatives, companies, courts and the United States government are trying to figure out.


The Lines Get Blurry When Humans and AI Collaborate

Creative work produced through collaboration between a human and a machine, as is often the case with AI-generated creations, is a more complicated matter.

“If a machine and a human work together, but you can separate what each of them has done, then [copyright] will only focus on the human part,” Daniel Gervais, a professor at Vanderbilt Law School, told Built In. He mainly focuses on intellectual property law, and has written extensively on how it relates to artificial intelligence.

If the human and machine’s contributions are more intertwined, a work’s eligibility for copyright depends on how much control or influence the human author had on the machine’s outputs. “It really needs to be an authorial kind of contribution. In that case, the fact that you worked with a machine would not exclude copyright protection,” Gervais said.

This threshold was put to the test in September 2022, when the U.S. Copyright Office made history by granting the first known registration of a work produced with the help of text-to-image generator Midjourney — a graphic novel called Zarya of the Dawn. The 18-page narrative had all the trappings of a typical comic book: characters, dialogue and plenty of images. The images were generated using Midjourney, while the text was written by the book’s author, Kristina Kashtanova.

Just a few months later, the office reconsidered its decision and wound up partially canceling the work’s copyright registration, claiming in a letter to Kashtanova’s attorney that it had “non-human authorship” that had not been taken into account. The book’s text, as well as the “selection, coordination, and arrangement” of its “written and visual elements,” remained protected. The images themselves did not, though, because they were “not the product of human authorship,” but rather of text prompts that led Midjourney to generate unpredictable outputs based on its training data. The office also deemed whatever editing Kashtanova did to the images “too minor and imperceptible to supply the necessary creativity for copyright protection.”

Since then, the office has released a more sweeping policy change to address all AI-human creative collaborations moving forward — a response to what it sees as new trends in registration activity. The document essentially doubles down on the stance it took with Zarya of the Dawn, reiterating that the term “author” does not extend to non-humans, including machines. It also states that if a human simply types in a prompt and a machine generates complex written, visual or musical works in response, the “traditional elements of authorship” have been executed by the AI, a non-human, and the work is therefore not protected by copyright.

Federal courts have also affirmed the U.S. Copyright Office’s position that AI-created artwork cannot be copyrighted. In August 2023, a judge in the U.S. District Court for the District of Columbia sided with the agency against computer scientist Stephen Thaler, who was seeking copyright protection for an image created by AI software. At the time, Thaler’s attorney told Bloomberg Law that they intended to appeal the case.

 

Lawsuits Surge in the Wake of Generative AI

Some creators and companies believe their content has been stolen by generative AI companies, and are now seeking to strip these companies of the protective shield of fair use in a series of pending lawsuits.

One such company is Getty Images, which is suing Stability AI, the company behind Stable Diffusion, for allegedly copying and processing millions of copyrighted images, as well as their associated metadata owned by Getty Images, without getting permission or providing compensation. TikTok recently settled a lawsuit with voice actress Bev Standing, who claimed the company used her voice without permission for its text-to-speech feature.

Meanwhile, artists Sarah Andersen, Kelly McKernan and Karla Ortiz have filed a class-action copyright infringement lawsuit against Stability AI and Midjourney, both of which use Stable Diffusion to generate their images. The suit claims that the artists’ work was wrongfully used to train Stable Diffusion, and that images generated in their styles directly compete with their own work — an important point in the matter of fair use.

“Until now, when a purchaser seeks a new image ‘in the style’ of a given artist, they must pay to commission or license an original image from that artist. Now, those purchasers can use the artist’s works contained in Stable Diffusion along with the artist’s name to generate new works in the artist’s style without compensating the artist at all,” the complaint reads. “The harm to artists is not hypothetical — works generated by AI image products ‘in the style’ of a particular artist are already sold on the internet, siphoning commissions from the artists themselves.”

The U.S. Copyright Office’s stance on excluding machines from being considered authors could throw a wrench in the Stable Diffusion lawsuit and many others, according to Rob Heverly, an associate professor at Albany Law School who specializes in the intersection of technology and law.

“In order for there to be infringement, there has to be an author. So, if there isn’t an author, I don’t know that there can be infringement.”

“In order for there to be infringement, there has to be an author. So, if there isn’t an author, I don’t know that there can be infringement,” Heverly told Built In. “If we’re not going to hold the technology maker liable for the technology itself, then the creator of the output is the AI. But we’ve already said they’re not an author. So if they’re not an author then they can’t create an infringing work.”

Yet, amid all these lawsuits against AI companies, the scope of fair use in generative AI may hinge on a Supreme Court decision that has nothing to do with artificial intelligence at all, but rather with a 1981 photograph of rock musician Prince taken by Lynn Goldsmith and a silkscreen portrait Andy Warhol later based on it. The question before the court was whether Warhol’s work was transformative enough to be considered a new piece of art — separate from the original Prince portrait, and not in direct competition with it.

“My guess is that the Supreme Court will say this is not a fair use, precisely because of the competition concern,” Gervais said. “If that’s true, it may change the way the scraping question will come up,” he added, potentially shaping the outcome of the other pending cases. In May 2023, the court did rule against the Warhol Foundation, finding that its licensing of the image weighed against fair use precisely because it competed with Goldsmith’s original.

Competition is also at the heart of internal debates at The New York Times over a potential lawsuit against OpenAI, according to reporting by NPR. NPR’s sources said the Times is concerned that generative AI tools will repurpose its reporting and display it to readers who would otherwise visit its site. If courts find that OpenAI illegally used Times articles to train its models, OpenAI could be forced to destroy its LLM dataset and rebuild it from scratch.


Creators and Companies Alike Take Action

The sheer number of lawsuits pertaining to this issue is a uniquely “U.S. phenomenon,” as Gervais put it, simply by virtue of how the country’s legal system works. The United States operates under a common law system, meaning legal precedents are often set first by judges in courts. So, while these and other pending lawsuits continue to mount, the fair use doctrine’s place in the ongoing saga of the artificial intelligence industry is still very much up in the air.

Still, creators are worried about their work or style being used to train generators without permission or compensation.

“Artists are literally being replaced by models that have been trained on their own work.”

“The large majority of independent artists make their living through commissioned works. And it is absolutely essential for them to keep posting samples of their art,” Ben Zhao, a computer science professor at the University of Chicago, told Built In. But, the websites they post their work on are being scraped by AI models in order to learn and then mimic that particular style. “Artists are literally being replaced by models that have been trained on their own work.”

To help, Zhao and his team designed a tool called Glaze, which aims to prevent AI models from learning a particular artist’s style. If an artist wants to put a creation online without the threat of an image generator copying their style, they can run it through Glaze first and choose an art style different from their own. The software then makes small mathematical changes to the work at the pixel level so that it looks different to a computer. To the human eye, the Glazed image looks no different from the original, but an AI model will read it as something else entirely, rendering it ineffective as training data.
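Glaze’s actual method is more sophisticated (it computes targeted perturbations against learned style features), but the basic shape of the idea, a small, bounded change to every pixel that stays below a perceptual budget, can be sketched in a few lines. The `cloak` function below is purely illustrative, not Glaze’s real API:

```python
import random

def cloak(pixels, budget=4, seed=0):
    """Shift each 8-bit pixel value by at most `budget` levels.

    A change of 4 out of 255 (about 1.5 percent) is imperceptible to
    humans, but it alters the numbers a model trains on. Illustrative
    only: Glaze computes optimized perturbations, not random ones.
    """
    rng = random.Random(seed)  # fixed seed so the result is reproducible
    return [max(0, min(255, v + rng.randint(-budget, budget))) for v in pixels]

original = [120, 121, 119, 200, 201, 199]  # toy grayscale pixel values
protected = cloak(original)

# Every value stays within `budget` of the original and within [0, 255],
# so the visual change is bounded while the raw data differs.
assert all(abs(a - b) <= 4 for a, b in zip(original, protected))
```

A random shift like this would not by itself fool a modern model; Glaze instead steers the perturbation so the image’s style features drift toward a decoy style, which is why the tool asks the artist to pick a different style first.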

Elsewhere, companies are tackling the problem at the platform level. Getty Images has placed an all-out ban on AI-generated content, citing the potential legal risk it carries. Shutterstock, another stock imagery site that was “critical” to the training of OpenAI’s DALL-E, according to CEO Sam Altman, has gone so far as to pay content creators whose work is used in the development of generative AI models.

And Shutterstock isn’t alone. Generative AI startup Bria trains its models exclusively on what it calls “responsibly sourced” data sets, and it pays royalties to artists and stock image providers when their creations have been used to generate an image. “We pay back a royalty according to the output,” co-founder and CEO Yair Adato explained. “So if somebody generates a specific art in the style of the artist, then the artist will have the right to say how much money he wants on this synthetic creation. And then we will split the revenue.”

Bria counts Getty Images as one of its biggest backers, and it recently formed a partnership with Nvidia AI Foundations.

“People understand that something is changing and they need to approach it differently,” Adato told Built In. “We need to find a way for the data and the technology to work together.”


The Future of AI Copyright

If the use of creators’ work in generative AI models continues to go unchecked, many experts in this space believe it could spell big trouble — not only for the human creators themselves, but for the technology too.

“AI models require human beings to keep feeding it for these AI models to get better, so unless there is cooperation, you can only do so much to cannibalize your own data source,” Zhao said. “When these AI models start to hurt the very people who generate the data that it feeds on — the artists — it’s destroying its own future. So really, when you think about it, it is in the best interest of AI models and model creators to help preserve these industries. So that there is a sustainable cycle of creativity and improvement for the models.”

“When these AI models start to hurt the very people who generate the data that it feeds on — the artists — it’s destroying its own future.”

In the U.S., much of this preservation will fall to the courts, where several creators and companies are duking it out right now. Looking ahead, the extent to which U.S. courts protect human-made inputs to generative AI models may come to echo approaches already taken globally, particularly in other Western nations.

The United Kingdom — another leader in AI innovation — is one of only a handful of countries to offer copyright protection for works generated solely by a computer. The European Union, which has a much more preemptive approach to legislation than the U.S., is in the process of drafting a sweeping AI Act that will address a lot of the concerns with generative AI. And it already has a legislative framework for text and data mining that allows only nonprofits and universities to freely scrape the internet without consent — not companies.

If it is ultimately determined that AI companies have infringed on certain creators’ copyrighted work, it could mean a lot more lawsuits in the coming years — and a potentially expensive penalty for the companies at fault.

“One thing you have to know about copyright law is, for infringement of one thing only — it could be a text, an image, a song — you can ask the court for $150,000. For one work,” Gervais said. “So imagine the people who are scraping millions and millions of works.”

 

Frequently Asked Questions

Can AI content be copyrighted?

No — AI content and any works created solely by AI cannot be copyrighted in the United States.

Does generative AI violate copyright laws?

It depends — generative AI may violate copyright laws when the program has access to a copyright owner's works and is generating outputs that are "substantially similar" to the copyright owner's existing works, according to the Congressional Research Service. However, there is no federal legal consensus for determining substantial similarity.

Training generative AI models on copyrighted materials may qualify as fair use under certain conditions, though pending lawsuits are testing the limits of the doctrine.

Can AI be sued for copyright?

The AI systems themselves cannot be sued, but the companies that develop and are responsible for them can be sued for copyright infringement. Several AI companies are already facing lawsuits over allegedly using copyrighted works to train AI models or generate AI content.
