Human-centric concepts of copyright
Notwithstanding national differences, the development of cross-border trade prompted a number of countries during the 19th century to create a minimum shared framework of reference for copyright law across the globe. The first iteration of this was the 1886 Berne Convention, which still applies today. Under this framework, it is possible to say that in copyright terms, the existence of a “work” requires the existence of the following concepts:
- An expression: i.e., any “production in the literary, scientific and artistic domain” (per Article 2(1) of the Berne Convention);
- An author: “protection shall operate for the benefit of the author and his successors in title” (per Article 2(6) of the Berne Convention). This requirement for a link between an expression and a natural person is therefore shared by all the signatories of the Berne Convention. By way of example, in the U.S., the registration of a work with the Copyright Office is only authorized if it has been created by a human. In Australia, the Federal Court refused protection to a database automatically generated by an AI. More recently, in the infamous case of the “monkey selfie,” in which the relevant camera equipment was set up such that a monkey rather than a human triggered the photograph, the court found that animals have no legal standing to hold copyright claims. Creation thus seems to be the prerogative of humans, the fruit of their imagination made art; and
- Originality (per Article 2(3) of the Berne Convention): referred to by the CJEU as “the author’s own intellectual creation,” originality is present when authors can exercise free and creative choices and put their personal stamp on the work. Copyright protects the creative work of a human being; the work must therefore be traceable to its author.
In the simplest terms, international law currently appears to contemplate only the notion of a copyright work created by a human creator. Where an AI system is truly autonomous, and the works it creates are devoid of human involvement or creative input, most conceptions of a copyright work would break down and the resulting work would likely be deemed public domain.
The situation in which a human creates a work with the help or assistance of AI is somewhat different and raises the possibility that the human controlling the AI algorithm may be deemed the author of the work. In practice, this appears to be a factual matter resting on the degree of human input involved:
- Where the human input remains creative, i.e. the AI is a mere “tool,” the consensus appears to be that copyright protection is enjoyed by the creator using that tool;
- However, where the human input is more limited, it appears that in most jurisdictions the resulting work would not be deemed protectable by copyright.
How much input is enough is a question courts will likely be struggling with for years to come.
The Human Artistry campaign
Should AI-generated works be protected by copyright? When considering whether to afford copyright protection to a machine, the traditional justifications for copyright protection appear to break down. On one hand, the Anglo-Saxon notion of copyright as an incentive to creation appears to have little meaning in the context of AI – an AI system does not seek protection of its personal expression nor financial reward for its work and will generate content regardless of its copyright protection. On the other hand, the French grounding of copyright protection in natural rights also appears to break down – AI systems are still far from being considered individuals with their own personalities. There may also be a more fundamental reason to distinguish between human and AI-generated works. Some argue that copyright should promote and protect human creativity, not machine creativity. According to this view, works created by humans should be given protection but those generated by machines – and potentially competing with human-created works – should not.
At SXSW 2023, a broad coalition announced the launch of the Human Artistry Campaign to ensure artificial intelligence technologies are developed and used in ways that support human culture and artistry – and not ways that replace or erode it. The campaign has grown exponentially and now includes hundreds of members across the globe, spanning journalism, photography, and voice acting, as well as major global organizations representing songwriters, composers, publishers, and independent music.
In the United States
The U.S. Copyright Act protects “original works of authorship fixed in any tangible medium of expression (…).” While neither the Act nor the U.S. Constitution expressly addresses the requirement of human authorship, technological advancements throughout the years have prompted discussions and case law concerning whether the use of tools in the creative process should limit the extent of a work’s eligibility for copyright protection. Well before the Copyright Act of 1976 was enacted, in 1884, the U.S. Supreme Court settled a debate around machine-generated works by extending copyright protection to photographs, recognizing the photographer as their author.1
Recently, the U.S. Copyright Office, the federal agency charged with administering the nation’s copyright laws, including the registration of copyright-protected works, relied on the definition of copyright contained in this same decision (the exclusive right of a man to the production of his own genius or intellect) in deciding that a work created with the assistance of AI should not be eligible for registration.
In a letter dated Feb. 21, 2023, the Office adopted a strict interpretation of the human authorship requirement, refusing to register images contained in the comic book “Zarya of the Dawn,” which were generated by the generative AI tool Midjourney, on the basis that the user, Kristina Kashtanova, lacked sufficient creative control to be deemed the author of the images. Kashtanova argued that she was the author of the work as she had “designed” the images: she guided the AI through prompts, chose the visual structure of each image, and selected the poses, points of view, and juxtaposition of the various visual elements within each picture consistent with her creative vision. However, the Copyright Office argued that, due to the inability of a Midjourney user to predict the image that will be generated, the user lacks sufficient control over the output to be an author of the work. The Office rejected the position that Midjourney was a mere tool used by Kashtanova to achieve her desired result and issued a new registration covering only the comic book’s text and arrangement, excluding the artwork from the scope of copyright protection. In the absence of judicial rulings from U.S. courts concerning the copyright eligibility of AI-generated works, the U.S. Copyright Office’s application of the human authorship criterion to such works currently provides the most recent guidance on the issue. Although the Office does not hold legislative power to establish copyright law, courts frequently look to its expertise and directives for guidance.
The Office’s current official policy, published on March 10, 2023, is that it will register a work only if the work’s traditional elements of authorship were authored by a human and not by a machine. The Office distinguishes between works autonomously generated by AI, which are not protectable by copyright, and works created with the assistance of AI, for which a case-by-case analysis is necessary to determine whether the expressive elements are the product of a human or of a machine. In instances where works contain AI-generated material as well as the results of human authorship, the Office has stated that copyright will only protect the results of human authorship, which will be considered independent of the copyright status of the AI-generated material.
Importantly, the Office now requires copyright registration applicants to disclose the inclusion of AI-generated material in a work submitted for registration, provide an explanation of the human author’s contributions to the work, and expressly exclude AI-generated content that is more than de minimis. It has also made clear that creators risk the cancellation of their registration where disclosure is not properly made.
While the Office’s guidance does not expressly ban registration of works created with the assistance of AI, the application of the human authorship requirement in the Midjourney letter places serious hurdles on copyrightability. The Office’s focus on predictability has been criticized because it imposes a requirement for the copyrightability of AI-generated works that does not exist for human-generated works. Can a photographer predict the movement of the subject of a photograph? Should Jackson Pollock’s paintings be ineligible for copyright protection because he could not predict where the paint would drip onto the canvas? And what of the use of arpeggiators to play varied and random sequences of musical notes in a recording? If human creations can incorporate randomness to a certain extent, some argue that AI creations should be able to do so as well without impairing their copyright protection.
Furthermore, the impending ubiquity of AI tools in creative works could soon render full disclosure of their use unfeasible. The near future promises a seamless integration of AI into creative technology workflows, including industry-standard software from leading companies such as Adobe. Will a screenwriter utilizing Google’s Bard or Bing’s ChatGPT integration need to specify the instances where these tools aided their research or ideation phases? Similarly, will music producers be required to declare each plug-in or Digital Audio Workstation (DAW) employing any level of automation? These questions underscore the complexities in determining the extent of disclosure necessary for AI contributions. If a work has been autonomously generated by AI without any human involvement, the Copyright Office leaves no room for doubt: such works “lack the human authorship necessary to support a copyright claim.” This strict position is currently being challenged in Washington, D.C., where software engineer Stephen Thaler filed a complaint against the Copyright Office and its register, Shira Perlmutter, on the basis that they refused to register “A Recent Entrance to Paradise,” an artwork autonomously generated by AI. Thaler’s copyright application listed himself as the copyright owner and the “Creativity Machine,” the name of the AI engine he created and owns, as the author.
The Copyright Office denied Thaler’s application and highlighted that, for the Copyright Office to accept the registration, Thaler must either “provide evidence that the work is the product of human authorship or convince the Office to depart from a century of copyright jurisprudence.” Taking the Office at its word, Thaler sued, arguing not only that the Copyright Act does not limit copyright protection to works made by humans, but also that Thaler, as the owner of the AI program, could own the work under the work-for-hire doctrine, which grants companies (or “non-humans”) ownership of copyrights in works created by artists they hire. The Office maintains that its refusal to register works generated by AI is consistent with current law.
The unavailability of copyright protection for AI-generated creative works would have profound implications for the development of such works. A work not protected by copyright falls within the public domain, meaning it belongs to the public and can be used by anyone, without the need to obtain permission from the creator of the work and without any compensation obligation. The value of creative works derives from the exclusive rights copyright confers on the author, and the lack of protection could disincentivize investment in the AI space. On the other hand, this position protects continued investment and creativity by human creators, since only the fruits of human creativity would enjoy the exclusive rights necessary to monetize the resulting creations.
Who owns the copyright in an AI-generated work?
The human controlling the AI system is generally the person who will be considered the author of the resulting work. This appears to be presumed where the involvement of the human is sufficiently “creative” to give rise to copyright protection. In the event of a challenge, however, the human may need to prove the extent of his/her involvement. Whether or not an AI-generated work has the required quality of originality is likely to be challenged more frequently than in the case of works created without AI.
In the U.S., the author of a work is the default owner of the copyright in such work unless she has assigned the copyright to a third party or the work was created as a “work made for hire.” Notwithstanding the Copyright Office’s position on the issue, which could evolve, if the AI-generated output is eligible for copyright protection, then as between the software and the user, the user is more likely to be the copyright owner of the resulting work. Others may argue that the creator and/or owner of the AI program at the origin of the work should be the author, particularly if creative choices were made in the coding and training of the AI program. However, absent contractual provisions such as terms of use assigning outputs to the AI company, U.S. copyright law is unlikely to recognize the owner of the AI engine as the author and copyright owner of the work.
Can the AI-generated output infringe on the copyright of another work?
United States
Certain rights holders, such as the three named plaintiffs in the class action filed against DeviantArt, Midjourney and Stability AI, argue that if an original work was included in the training set, then the output is necessarily a derivative work which infringes on the copyright in such original work.3 This approach, which asks the court to bypass copyright precedents and hold AI companies liable regardless of whether the new work incorporates any elements from the original work, appears unlikely to succeed. Indeed, “to constitute a derivative work, ‘the infringing work must incorporate in some form a portion of the copyrighted work … [and] must be substantially similar to the copyrighted work.’”4 On the other end of the spectrum, certain AI commentators assert that a new work will never infringe on an underlying work because no output will ever replicate any work contained in the training set – a claim that cannot be taken at face value.
In the absence of any copyright infringement precedent in the context of AI-generated output, we must assume that a court would apply a standard copyright infringement analysis to determine whether outputs infringe on underlying works, comparing the AI-generated work to the underlying work to determine whether the two are substantially similar. We can imagine that AI-generated works will not be exempt from the courts’ disparate application of the substantial similarity test for purposes of copyright infringement, based on the facts of the case and the characteristics of the two works. While this analysis is highly fact-specific, the following principles are most likely to be addressed in copyright infringement proceedings with respect to AI-generated outputs:
(i) Only substantial similarity in protectable expression may constitute prohibited copying, so courts should distinguish between the protected and unprotected material in a plaintiff’s work.5 Unprotectable elements include elements of a genre or style, which should not be taken into consideration in a copyright infringement analysis. An artwork created in the “style of” a visual artist, or a sound recording in the “genre” of a recording artist, would not necessarily be deemed an infringing derivative work, even if the name of such artist was used in a text prompt.
(ii) Even if an output incorporates protectable elements from an underlying work, it will not necessarily be deemed infringing if the use of the underlying work in the new work is de minimis, meaning insubstantial or unrecognizable.
(iii) If the two works are substantially similar, the defendant can raise the fair use defense to demonstrate that the output is non-infringing. At the output stage, the analysis does not focus on copies being made for a functional purpose, but on the new work generated by the AI program, which is much more likely to serve the same purpose of creative expression or entertainment as the underlying work (first factor). Further, the new work would be directly competing with the underlying work, particularly if a text-to-image prompt is used to obtain a work in the “style” of an artist (fourth and most important factor).
At the output stage, most discussions address copyrightability and copyright infringement as two separate issues, which can lead to inconsistencies due to the complex framework of copyright laws. If a work generated by AI is not protectable by copyright, it is agreed that it would fall into the public domain. If the public domain work is substantially similar to an existing copyrighted work, can it be deemed an infringing derivative work and, if so, would that bring the AI-generated work back into the world of copyrighted works? Would the owner of the underlying work own the AI-generated work? As exemplified by this logical loop, the inevitable question of whether the existing copyright framework is flexible enough to offer coherent answers remains open.
The development of a legal and regulatory framework with respect to AI-generated works must also address who should be liable if the output infringes on an underlying copyright, and whether any protections apply. While AI platforms’ terms of use could pass liability onto the end user, doing so would not solve the underlying issue and would significantly undermine the public’s confidence in AI platforms. The level of control exercised by the end user over the final product could be relevant in the analysis, such that the end user could potentially be liable if the infringing work is the result of the end user’s vision, while the AI engine could potentially be liable if the infringing work is a result of a random process with marginal contributions from the end user. In cases where the end user is liable due to such user’s extensive involvement in the creation of the infringing work, the AI company could be exposed to liability under the doctrine of vicarious infringement, which applies to any person who has the right and ability to supervise the infringing conduct and has a direct financial interest in the infringing activity. As an underlying direct infringement is necessary for a court to find a defendant liable for vicarious copyright infringement,6 the AI company (or the defendant of a vicarious copyright infringement claim) would be exempt from liability if no direct liability of the end user is found, including due to the validity of a fair use defense raised by the end user.
It appears that existing U.S. laws do not adequately address some of the critical issues and logical conflicts that arise in the context of AI-generated creative works. Courts will therefore struggle to apply existing laws to these new conundrums, and we may well need new legislation that thoughtfully considers and addresses these issues in the quickly evolving AI ecosystem.
Trademark and generative AI tools
We expect internal marketing departments to increasingly rely on generative AI to prepare creative content, which will yield content that could be protected by trademark law in addition to copyright law.
For example, a generative AI application might be asked to produce a slate of potential new product names, a fresh look for a webpage, a new slogan for an ad campaign, or a short audio signature or jingle to be used when consumers interact with a new product or game. It’s worth remembering that any of these might be protected by trademark law because they could serve as a source indicator for consumers. Trademarks aren’t just the company name and product name; they are also slogans, sound signatures (think, the MGM lion’s roar), packaging designs, and more. When AI is used to generate these signatures, trademark clearance will be even more critical.
Where before, your internal marketing team might intuitively recognize a slogan or sound as already trademarked and steer clear of such arrangements, a trademark generated by AI might be just different enough not to set off any alarm bells during human review. Models trained on trademarked content, however, could generate outputs that infringe existing trademark rights. Trademark clearance, which is already our recommended approach for all new brand indicia, will be especially critical for AI-generated or AI-assisted content. A robust clearance process will provide reassurance that whatever the output of an AI tool looks, reads, or sounds like, that output will be compared back to the trademark register to identify possible conflicting marks before they become a problem in the marketplace. Trademark clearance provides a risk assessment of using the newly generated source indicator so you can move your brand forward with a better understanding of the legal risks.
We also want to remind our clients that trademark issues can come up inside copyrightable pieces of entertainment content. Should this happen to you, we encourage you to reach out to us to evaluate your use and assess whether it qualifies as fair use. It’s worth remembering the major Ninth Circuit decision in ESS Entertainment, where the Grand Theft Auto video game depicted a satirized version of the Play Pen club, and the club sued the game maker for trademark infringement. ESS Entertainment 2000 Inc. v. Rock Star Videos, Inc., 547 F.3d 1095 (9th Cir. 2008).
Issues like those in ESS Entertainment are more likely to come up in the context of AI-generated or AI-assisted art, where each element of a video game, movie, or commercial might not get the thoughtful treatment it would otherwise receive if a human were responsible for adding every aspect of the design.
In ESS Entertainment, the court found the use was fair and therefore not infringing, but we highly recommend an outside evaluation before publishing your content to make sure you aren’t putting your business at risk when using generative AI tools to prepare or inspire it.
Patents and AI
Using generative AI to develop products or inventions for patenting presents both opportunities and risks on an unsettled legal landscape. Some argue that generative AI promises to accelerate the development of inventions that benefit society such as life-saving medicines, and that AI should be recognized as an inventor on patents for such inventions. Others say that because AI is not a human, it cannot be an inventor under the patent statutes of most countries. Still others observe that using generative AI to develop products creates the risk of liability for patent infringement because the data used to train generative AI models may include patents or patented functionality.
On the issue of whether AI can be a named patent inventor, the majority of countries that have considered the issue have found that it cannot. Most recently, the United States Supreme Court refused to consider the issue in denying a petition for certiorari of the decision Thaler v. Vidal, 43 F.4th 1207, 1210 (Fed. Cir. 2022). In that decision, the Federal Circuit – the U.S. appellate court that decides issues of patent law – affirmed a lower court’s ruling upholding the United States Patent Office’s decision to deny petitions to name an AI system called Device for Autonomous Bootstrapping of Unified Sentience (DABUS) as a patent inventor. Based on U.S. Supreme Court precedent and language in the U.S. Patent Act, the Federal Circuit affirmed the holding that an inventor must be a natural person. Id. at 1211. However, the court left open the possibility that AI could contribute to a patented invention, stating that it was not addressing “the question of whether inventions made by human beings with the assistance of AI are eligible for patent protection.” Id. at 1213.
The U.S. Patent and Trademark Office (USPTO) held two listening sessions in April and May 2023 on the current state of AI technologies and related inventorship issues. The USPTO asked for input on 11 questions related to AI and patents, including whether U.S. patent law should be changed so that AI systems are eligible to be listed as an inventor and whether the USPTO should require applicants to provide an explanation of contributions AI systems made to inventions claimed in patent applications. Speakers at the sessions largely agreed that AI cannot be a named patent inventor under the U.S. patent laws as currently written. But they disagreed on whether patent applicants should be required to disclose the contributions of AI to an invention that is the subject of a patent application. Policy efforts in this area are ongoing.
As to the risk of patent infringement claims arising from the use of generative AI to develop products, this, too, is unclear. The data used to train generative AI systems undoubtedly includes patents and content that describes patented functionality. But tracing the output of generative AI back to patents included in the training data seems highly difficult, if not impossible.
Perhaps the issue most everyone can agree on is that current patent laws are not equipped to deal with AI as an inventor of patented inventions. It remains to be seen whether future legislation will achieve clarity on this issue.
Trade secrets and AI
Given the issues with the patentability of AI output, should inventors turn to trade secret protection? Trade secret protection is often used to safeguard unique intellectual property and can be obtained without application or registration. In the context of AI, trade secret protection could include protecting output, data sets, unique algorithms and machine learning techniques.
Is AI protectable as a trade secret?
The U.S. Uniform Trade Secrets Act defines a trade secret as: “a formula, pattern, compilation, program, device, method, technique, or process, that: (i) derives independent economic value, actual or potential, from not being generally known to, and not being readily ascertainable by proper means by, other persons who can obtain economic value from its disclosure or use, and (ii) is the subject of efforts that are reasonable under the circumstances to maintain its secrecy.”7 Trade secret owners can file suit in U.S. federal court for damages if their trade secrets have been misappropriated under the Defend Trade Secrets Act of 2016.8 In the U.S., it is well established that trade secrets are property rights.9
The EU has issued a Council Directive with similar standards to the U.S. with regard to the definition of what constitutes a trade secret.10 However, the Directive generally does not regard trade secrets as property, and most EU states do not classify trade secrets as property or intellectual property.11
A number of issues need to be considered when applying trade secret protection to AI, including:
- Need for secrecy: Trade secrets, by definition, require maintaining secrecy. However, it can be challenging to maintain the secrecy of AI output or systems, especially in collaborative environments or open-source culture where the sharing of information and techniques is common.
- Reverse engineering: A significant drawback of trade secret protection is that it does not protect against reverse engineering or independent development. Competitors may legally reverse-engineer an AI system’s results, or the system itself, without incurring liability.
- Difficult to enforce: It can be challenging to demonstrate that a trade secret has been stolen or misappropriated. For AI companies, this could require proving that a competitor had direct access to their proprietary information, which is often difficult. Outside of the U.S. and EU, many jurisdictions have weak trade secret laws and/or enforcement practices.
- Employee leakage: In a tech-driven field like AI, where talent is in high demand, employees often move from one company to another. These employees may inadvertently or intentionally carry over knowledge or techniques that could be considered trade secrets, which is a risk for companies seeking to protect their intellectual property in this way.
Trade secret best practices
While trade secret protection for AI may be challenging, much of the industry uses trade secret protection and employs a “zero-trust approach.”12 For example, the algorithms of Google, Facebook and Yahoo! – the “secret sauce,” so to speak, of their AI systems – are protected as trade secrets. Some of the output from those systems, which is exploited for their own commercial use, is also kept secret.
Trade secret protection begins with traditional techniques such as limiting access and requiring employees and independent contractors working with AI to sign confidentiality and work-for-hire agreements. IBM, KDDI Research and the National Institute of Informatics have each introduced methods of watermarking deep learning models to help protect algorithms by identifying the owner of the intellectual property.
Yet while trade secret protection for intellectual property that is used by companies internally may be very useful, overall trade secret protection is imperfect, in particular for AI-generated content. Trade secrets protect against misappropriation and unlawful use. Trade secret law was not intended to be used as an instrument to protect intellectual property that is “let out into the wild.” Trade secret protection will only be effective when access to AI outputs and systems is restricted, which means that it may not be helpful where the desired outcome is the commercial exploitation of AI-generated content.
Rights of publicity
Rights of publicity safeguard a celebrity’s name, image, likeness, voice, and other unique personal attributes from unauthorized commercial use. However, the legal landscape is diverse and often complicated, with significant differences in treatment across jurisdictions.
United Kingdom
In the UK, the closest equivalent to the right of publicity is a legal concept known as passing off. Initially developed by courts to prevent individuals from falsely claiming that they are selling goods belonging to someone else, passing off has evolved to protect celebrities’ images or names from unauthorized use in commercial contexts.
In most passing off cases, the claimant must satisfy a three-part test called the “classical trinity” test. This test requires the claimant to demonstrate the following:
- They have a reputation or goodwill associated with their name or image.
- There has been a misrepresentation to the public, leading them to believe that the goods or services being offered are associated with the claimant.
- The claimant has suffered some form of harm or damage.
Recent passing off cases involving celebrities have predominantly focused on false endorsement claims. Passing off can potentially assist a celebrity in challenging false product or brand endorsement through the unauthorized use of an AI-generated emulation of their likeness. However, when AI is used to create a synthetic performance that resembles an artist’s voice or likeness, the situation becomes more complex.
Previous passing off cases involving false attribution by authors are particularly relevant to these uses.
In the case of Sim v Heinz, a court dismissed an actor’s request for an injunction to prevent a food advertisement from using an imitation of his voice. However, the judge acknowledged the concern surrounding the use of someone’s voice without consent and highlighted that allowing such actions solely for commercial gain would be a significant flaw in the law.
Proving passing off is notoriously challenging. The artist would need to demonstrate sufficient reputation or goodwill associated with their voice or likeness, which is typically limited to highly famous artists, and that a substantial portion of those accessing the AI-generated content would be deceived into believing it is authentic. This task becomes more difficult when the content explicitly states that it is not the work of a specific artist but an AI performance. Unless the law adapts to these evolving technological developments, relying on passing off to object to the synthesized use of an artist’s voice or likeness will remain an uphill battle.
Since passing off is unlikely to assist the majority of performers who are not widely known to the public, these individuals are potentially exposed to having their image, voice, or likeness used commercially without authorization. For instance, a successful DJ could use a synthesized voice trained on a talented but unknown singer to produce a new track.
United States
In the U.S., there is no federal right of publicity; instead, a patchwork of state statutes and common law governs. Prior to 1988, vocal imitation was not considered an infringement of a celebrity’s rights of publicity. That year, however, in a landmark case, the Court of Appeals for the Ninth Circuit held that Ford Motor Co. misappropriated singer Bette Midler’s distinctive voice when it hired one of her former backup singers to imitate her performance of a song for use in a TV commercial. The court rejected Midler’s claim under California’s rights of publicity statute, California Civil Code §3344, holding that the statute protects only against the misappropriation of one’s actual voice (as opposed to an imitation), but it allowed Midler to maintain a claim under common law. Four years later, in Waits v. Frito-Lay, Inc., the Ninth Circuit confirmed that “when voice is a sufficient indicia of a celebrity’s identity, the right of publicity protects against its imitation for commercial purposes without the celebrity’s consent,” and clarified the common law rule that, for a voice to be misappropriated, it must be (1) distinctive, (2) widely known, and (3) deliberately imitated for commercial use.
While these rulings may have established a legal framework to combat AI-powered sound-alikes, significant questions remain. For instance, could artists recover attorney’s fees under California Civil Code §3344 in cases where an AI was trained on their recordings, or would they be relegated to pursuing common law claims, which do not afford the opportunity to recover attorney’s fees?
The legal landscape is further complicated by the variation in rights coverage across different states. For example, the First Circuit and New York courts initially rejected extending New York’s statutory right of publicity to cover sound-alikes. However, “voice” has since been included in New York’s private cause of action for a violation of the right of publicity, although it was not added to the criminal arm of the statute.
Post-mortem rights of publicity also present a unique challenge. These rights differ significantly from those of living individuals in terms of range, duration, and accessibility. Depending on an artist’s domicile at the time of death, there may be no post-mortem rights of publicity, leaving the estates of deceased artists without the authority to prevent AI-generated imitation of the artist’s voice in a commercial context.
Lanham Act / Unfair Competition
Another angle to consider is the Lanham Act, a U.S. federal law often applied in connection with trademarks. The Act’s primary aim is to protect against unfair competition among commercial parties, with Section 43(a) prohibiting the use of any symbol or device that could deceive consumers about the association, sponsorship, or approval of goods or services by another person. The Act’s applicability to AI sound-alikes is contingent on whether the imitation is likely to mislead consumers about the original artist's association with the new work. If the AI-generated voice causes confusion, the Act could potentially be used to protect artists’ rights. However, liability could be avoided if AI sound-alike artists explicitly disclaim in their recordings, titles, or marketing materials that the tracks are not by the artist whose voice they’ve replicated.
Successful claims under the Lanham Act could lead to remedies including injunctions, actual damages, defendant’s profits attributable to the violation, costs of the action, and in exceptional cases, recovery of attorney’s fees.
Defenses and other issues
However, most voice misappropriation cases involve sound-alikes in a purely commercial context, such as to sell products. It remains unclear whether courts will extend rights of publicity and Lanham Act claims to the use of sound-alikes in original music, which as a form of creative expression, would receive stronger First Amendment protections than pure commercial speech.
Additionally, some legal scholars have suggested that the Copyright Act should preempt rights of publicity and Lanham Act claims altogether when the allegedly infringing material is expressly authorized under the U.S. Copyright Act, Section 114 of which explicitly permits “sound-alike” recordings. While some courts have adopted this viewpoint, the Ninth Circuit has thus far avoided it.
- Burrow-Giles Lithographic Co. v. Sarony, 111 U.S. 53 (1884).
- U.S. Copyright Office, Statement of Policy (March 16, 2023).
- Andersen et al. v. Stability AI Ltd.
- Litchfield v. Spielberg, 736 F.2d 1352, 1357 (9th Cir. 1984).
- Swirsky v. Carey, 376 F.3d 841 (9th Cir. 2004).
- Metro-Goldwyn-Mayer Studios, Inc. v. Grokster, Ltd., 545 U.S. 913, 930 (2005).
- Uniform Trade Secrets Act (1985), Section 1.
- Defend Trade Secrets Act of 2016, Pub. L. No. 114-153, 130 Stat. 376 (2016).
- Ruckelshaus v. Monsanto Co., 467 U.S. 986, 1003–04 (1984).
- Directive 2016/943 of the European Parliament and of the Council of 8 June 2016 on the Protection of Undisclosed Know-how and Business Information (Trade Secrets) Against their Unlawful Acquisition, Use and Disclosure, OJ L 157, 1–18.
- Katarina Foss-Solbrekk, Three routes to protecting AI systems and their algorithms under IP law: The good, the bad and the ugly, Journal of Intellectual Property Law & Practice, Volume 16, Issue 3, March 2021, Page 257, academic.oup.com.
- Stacy Collett, How to Protect Algorithms as Intellectual Property, CSO (July 13, 2020), www.csoonline.com.