An AI-Generated Image Might Be Worth More Than A Thousand Words

March 13, 2026

When a hyper-realistic deepfake video featuring Brad Pitt and Tom Cruise throwing punches over Jeffrey Epstein surfaced weeks ago, many bemoaned the end of Hollywood movie-making as we know it. A little-known ByteDance-backed AI model called Seedance 2.0 became more famous than Tilly Norwood overnight, striking fear across the creative community and prompting cease-and-desist letters from at least four major movie studios.

Welcome to the next phase of generative AI litigation. 

Generative AI Litigation 2.0 Has Arrived

We saw the preview, when three major movie studios handed Midjourney a 110-page complaint alleging direct and indirect copyright infringement by Midjourney’s Image Service and upcoming Video Service. We saw the prequel, when fair use won, at least when it came to copying books to train the text-to-text generative AI tools that burst on the scene back in 2022. But in this next phase of generative AI technology, the digital replicas, deepfakes, and synthetic media made possible by today’s image, video, audio, and increasingly multimodal versions implicate right of publicity, copyrightability, and fair use questions not at issue in Bartz v. Anthropic or Kadrey v. Meta.

The AI-generated humans, characters, and synthetic media from Seedance, Midjourney, and Tilly Norwood raise intellectual property (IP) issues that simply do not arise when the inputs and outputs are limited to words and punctuation marks. In the wake of these new AI image and video generators, three distinct AI personas are emerging, each following a different IP path:

- The Pitt-Cruise deepfake—digital replicas of real humans, which implicate right of publicity law.

- The Midjourney images—digital replicas of fictional characters, which implicate copyright law.

- Tilly Norwood—synthetic media of digital humans, which may implicate neither.

The first wave of generative AI litigation was about words, text, and training data. The next wave will be about faces, characters, and identity.

Human Likeness Is Not Copyrightable

Integral to the Pitt-Cruise deepfake, and its ability to garner more than 15 million views, is the identity and recognizability of the well-known actors in the video. Yet neither Brad nor Tom has any copyright basis to object to the AI-generated video itself.

To the extent any copyright exists in the AI-generated images, video, and audio comprising a deepfake, those rights belong not to the humans depicted in the deepfake, but to the human who prompted it into being and posted it for all to see. Moreover, while copyright might attach to the video itself as a copyrightable audiovisual work, copyright protects “original works of authorship”—not the humans depicted in them. Copyright law does not recognize a human’s name, face, voice, or likeness as a copyrightable work.

Under U.S. copyright law, only “original works of authorship” that are “fixed in any tangible medium of expression,” are “independently created by a human author,” and possess “at least some minimal degree of creativity” qualify as copyrightable subject matter. In contrast, “identity is not fixed in a tangible medium of expression” and “[a] person's likeness—her persona—is not authored and it is not fixed.” Accordingly, a “person’s name or likeness is not a work of authorship” and therefore falls outside the scope of copyright protection.

That does not leave Brad or Tom without any recourse. While actors rarely own the copyright to the tabloid photographs or blockbuster films in which they appear, they do possess right of publicity rights, not to the photographs or videos themselves, but to their likeness as captured in those works. “The right of publicity is an intellectual property right that protects against the misappropriation of a person’s name, likeness, or other indicia of personal identity—such as nickname, pseudonym, voice, signature, likeness, or photograph—for commercial benefit.” 

Courts—and Hollywood—have long recognized this distinction between the copyright rights to the audiovisual work and the right of publicity rights to the likenesses embedded in that work. As one court put it:  

“There is no ‘work of authorship’ at issue in [a] right of publicity claim…. The fact that an image of the person might be fixed in a copyrightable photograph does not change this.... The fact that the photograph itself could be copyrighted, and that defendants owned the copyright to the photograph that was used, is irrelevant to the [right of publicity] claim.... The defendants did not have her consent to continue to use the photograph ....”

For AI image and video generators like Seedance, the uncopyrightability of human likeness becomes particularly meaningful in the context of AI-generated videos like the Pitt-Cruise deepfake. While Seedance need not worry about any copyright claims from famous people like Brad or Tom, it faces newfound exposure under right of publicity law, risks that did not arise when generative AI tools were limited to text. 

Moreover, any fair use defenses that allowed AI text generators to copy books to train their LLMs have no bearing in the context of AI-generated deepfakes featuring human likenesses. In the face of right of publicity claims, AI image generators are left with less-than-convincing arguments—that the user, not the AI company, was the bad actor, or that the deepfake constituted protected free speech or was somehow not commercial in nature.

Characters Are Different

The Pitt-Cruise deepfake highlights the limits of copyright law when it comes to human identity. But the Midjourney images present a different picture. Unlike human likenesses, fictional characters occupy a more storied place under copyright law. Characters can, in and of themselves, qualify as copyrighted works, transcending the scripts, dialogue, photography, recordings, and performances that depict them. 

Since the radio days of the so-called Sam Spade case, courts have recognized the distinct copyrightability of characters where “the character really constitutes the story being told, but [not] if the character is only the chessman in the game of telling the story.” In the more recent so-called Batmobile case, the Ninth Circuit articulated the now widely adopted “three-part test for determining whether a character in a comic book, television program, or motion picture is entitled to copyright protection.” In order to be protectable under copyright law, the character must (1) possess “physical as well as conceptual qualities,” (2) be “‘sufficiently delineated’ to be recognizable as the same character whenever it appears” and “display consistent, identifiable character traits and attributes,” and (3) be “‘especially distinctive’” and “‘contain some unique elements of expression,’” not “a stock character such as a magician in standard magician garb.”

Iconic characters like Yoda and Darth Vader would easily meet these standards and qualify as copyrightable characters. In bringing both direct and secondary (i.e., vicarious and contributory) copyright infringement claims, the plaintiffs point not only to Midjourney’s use of their copyrighted characters to train its Image Service but also to its other acts of infringement—its distribution of infringing outputs in response to user prompts, its refusal to institute copyright protection measures used by other AI image generators, and its use of user-generated outputs in the marketing and promotion of its Image Service.

The fair use arguments that protected the training activity behind text-to-text versions like Claude and Llama might not hold when it comes to Midjourney’s use of copyrighted characters to train its models, let alone to how it “directly reproduces, publicly displays, and distributes reproductions and derivative works of [copyrighted] content” and “directly produces image outputs that infringe on [the plaintiffs’] copyrighted characters.” We will have to wait and see whether fair use continues to win out when the infringement involves the AI-enabled image generation of copyrightable characters.

Outputs and Inputs Measure Differently on the Fair Use Scale

If the Midjourney lawsuit is any indication, in this next wave of generative AI, the fair use battleground is shifting—from the training inputs used by AI companies to the outputs generated by user prompts. In the first wave of generative AI litigation, the copyright claims focused on the use of copyrighted works by AI companies in the context of AI training. Key to those earlier fair use rulings was the finding that “training LLMs did not result in any exact copies nor even infringing knockoffs of their works being provided to the public.”

For the Bartz court, it mattered little to the fair use analysis that the LLM’s “mapping of contingent relationships was so complete” that it “indeed simply ‘memorized’ the work it trained upon almost verbatim” and could “recite works it had trained upon.” Rather, the important question was whether “any LLM outputs infringing upon their works ever reached users of the public-facing Claude service.” That “Claude created no exact copy, nor any substantial knock-off[,] [n]othing traceable to Authors’ works” was key. The court underscored the point repeatedly: 

“To repeat and be clear: Authors do not allege that any LLM output provided to users infringed upon Authors’ works. Our record shows the opposite…. Here, if the outputs seen by users had been infringing, Authors would have a different case. And, if the outputs were ever to become infringing, Authors could bring such a case. But that is not this case. Instead, Authors challenge only the inputs, not the outputs, of these LLMs.” (citations omitted)

Similarly, the Kadrey court reinforced this input-output distinction in the context of the plaintiffs’ regurgitation argument, “that the model will regurgitate their works (or outputs that are substantially similar), thereby allowing users to access those works or substitutes for them for free via the model.” Meta countered that its “mitigations” (the post-training of models to prevent them from “memorizing” and outputting copyrighted material) successfully lowered Llama’s regurgitation rate to a mere 50 words and punctuation marks 60% of the time, even with “adversarial prompting” (the testing of models designed to get them to regurgitate copyrighted material from their training data). The court found that regurgitation rate not to be a “meaningful portion” when it came to the copying of books.

In this next phase of generative AI litigation, the very ability of Midjourney’s Image Service to so easily produce near-perfect replicas of so many of our most cherished movie characters could ultimately be the feature that topples fair use protection for the AI-generated images and video of the future. Even as generative AI technology accelerates beyond simple text to lifelike images and videos, courts may be less wowed by how “transformative” these technologies are and more troubled by the “market substitution” posed by how well and easily they regurgitate so many copyrightable characters.

What to Look Out for Next

With Seedance, Midjourney, and Tilly Norwood, the future of generative AI technology promises to bring a lot more than an endless torrent of words. We now have at least three distinct AI personas to watch, each moving down a different legal trajectory. In the Pitt-Cruise deepfake, we see user-generated, AI-generated replicas of real humans, which will turn more to right of publicity law than copyright law. In the Midjourney images and video, we see user-generated, AI-generated replicas of fictional characters, which will rise and fall under copyright law. 

With Tilly Norwood, the uncanny valley—and broader workforce displacement concerns—that make us so uneasy with synthetic media might be the only forces holding “her” back, as her appearance neither invokes nor infringes any preexisting copyright or right of publicity rights. Ironically, the very human creativity that courts have recognized when extending copyright protection to fictional characters could pave a path for synthetic media like Tilly Norwood to move beyond the general uncopyrightability of AI-generated images.  

When it comes to AI-generated images, it turns out looks matter—and character counts.
