To Seedance, Kling, Runway, Pika, Luma, Sora, Veo, and Firefly: Do Not Let Disney-Style Threat Letters Decide What Creators Are Allowed to Make: Hold the Line on Fair Use and Free Expression
The big studios’ new strategy is to shift copyright and trademark risk upstream onto toolmakers. The law, properly applied, puts most of the hard calls where they belong: on context and publication.
Dear Leaders in AI Video Innovation,
We write to encourage you to stand firm against recent threats of copyright lawsuits from major media companies. As stewards of groundbreaking technology, you are fostering a new era of creative expression—enabling filmmakers, educators, journalists, and everyday fans to generate videos that enrich culture and information. U.S. copyright law has long protected exactly this kind of creative activity. Courts and Congress have recognized that individuals have broad rights to create fan art, parody, news, commentary, educational content, and more under the fair use doctrine. Simply put, lawful works that users make with your tools are not improper copying, and the law does not hold a platform responsible for every possible misuse.
In the last few weeks, the pressure campaign against generative video has gotten more specific and more aggressive. Disney and other major rights-holders have reportedly targeted ByteDance’s newly launched Seedance 2.0 with cease-and-desist demands over outputs that evoke famous franchises, and similar warnings are spreading across the industry. At the same time, the legal front is widening beyond copyright into trademarks and naming rights, with a federal judge recently blocking OpenAI from using “Cameo” as the name of a Sora feature in a trademark dispute. The message from legacy entertainment is clear: they want the next generation of video tools to behave like pre-clearing publishers, not like cameras or editing suites.
Generative video is crossing the threshold from novelty to infrastructure, and that is exactly why the largest IP incumbents are trying to define the defaults. Threat letters and public narratives are converging on a single demand: AI video platforms should preemptively block “anything that looks like” Disney, Marvel, Star Wars, and other iconic brands, and they should treat themselves as legally responsible for the creative choices of their users. That demand is not just about stopping infringement. It is about turning the next medium into a permissioned ecosystem where major rights-holders set the boundaries of what can be imagined, even when the use would be lawful as parody, commentary, news, education, or other transformative speech.
Crucially, the law treats your AI tools like pens or cameras: the people who use them to make a film or image are the ones responsible for what they create. In Sony Corp. v. Universal (1984), the Supreme Court held that “the sale of copying equipment” does not make the seller liable if the product is “capable of substantial noninfringing uses.” Generative AI tools plainly have many such uses: educational animations, lawful parodies, original art, journalistic storyboards, historical re-enactments, user-generated mini-documentaries, and countless other valuable creations. Your platforms should not be penalized simply because some users might choose to create infringing derivative works. The law presumes that users are free to remix and transform content for commentary, criticism, news reporting, teaching, scholarship, or entertainment. For example, creating a parody of a famous movie scene or teaching a film history lesson with AI visuals falls squarely within fair use.
This piece is an open letter to the leading AI video platforms, including ByteDance Seedance, Kuaishou Kling, Runway, Pika, Luma, OpenAI Sora, Google Veo, and Adobe Firefly. The thesis is straightforward: do not ignore the law, but do not be bullied into overcompliance that strips users of lawful rights. Fair use is not a loophole; it is a core part of copyright’s design, and it exists precisely because culture requires quotation, critique, and transformation. Likewise, trademark law is meant to prevent consumer confusion, not to grant brand owners veto power over every reference, joke, or critical depiction. The practical implication is that the most responsible approach is not blanket censorship at the prompt level, but a distribution-aware approach: clear user terms, strong attribution and labeling tools, friction where someone is trying to pass off work as official, and a robust process for handling concrete, good-faith complaints about specific published uses.
AI video platforms should not uncritically accept the framing from major trademark holders like Disney, and should not let threat letters quietly rewrite the rules of lawful speech. Copyright law already contains a built-in safety valve for culture: fair use protects criticism, comment, news reporting, teaching, scholarship, and research, and it is the doctrine that allows parody, remix, and transformative commentary to exist at all. Trademark law also has room for truthful reference and expressive use, while still policing genuine consumer confusion. The point is not “anything goes.” The point is that the legality of a particular output usually turns on context, purpose, and distribution, which are decisions made by the user at the moment of publication, not by the tool at the moment of generation.
This is a strategic moment for the AI industry to stop conceding the wrong legal premise. A video model is not a movie studio. A text box is not a broadcast network. These tools are closer to general-purpose creative technology: cameras, synthesizers, NLE editors, game engines, and VFX suites. The law has long distinguished between building a tool that can be used lawfully at scale and inducing infringement as the business model. If AI video platforms want to survive the next wave of litigation and lobbying, they should design and communicate around that distinction, rather than surrendering to the idea that every output is presumptively illegal unless a major rightsholder grants permission.
Moreover, by design and practice, AI video companies have not instructed or encouraged users to infringe anyone’s copyright. Legal precedent is consistent: liability attaches only when a distributor actively induces infringement or benefits financially while controlling the infringing activity. By contrast, courts have refused to punish providers who merely offer a tool that some people might use unlawfully, as long as the tool itself is legal and is not marketed for piracy. You should continue to emphasize that your products serve legitimate, innovative markets and do not replace or devalue the original creative works. In fact, transformative user creations often promote the originals by keeping them culturally alive. The Supreme Court has recognized that powerful new technologies deserve protection when they “promote the Progress of Science and useful Arts,” the very aim of the Copyright Clause.
If platforms are treated as automatically responsible for every downstream choice a user might make, then the only “safe” product is one that pre-censors creativity to the lowest common denominator, blocks broad categories of prompts, and over-removes lawful expression because it cannot evaluate context. That is not what the law requires, and it is not how the United States historically handled general-purpose technologies. The better legal posture is to behave like a dual-use tool that is designed for substantial lawful uses, does not induce infringement, provides clear user guidance about rights and licensing, and, where the product includes hosting or sharing, follows the notice-and-takedown architecture Congress built for user-generated content. Put bluntly: do not volunteer to become the world’s private clearance department. Build strong compliance pathways for distribution, not a creativity choke point at creation.
Given this legal framework and public policy, we advise you to resist demands that would force you to block or disable creative uses, so long as those uses fall under fair use or comply with law. Instead, focus on measures that respect both copyrights and user rights: adopt clear user policies and moderation only where there is clear infringement, implement efficient notice-and-takedown procedures, and educate users about lawful content creation. These steps—and the law itself—allow you to continue innovating without surrendering to vague threats.
In short, remember that powerful judicial precedents favor your platforms. The burden of policing copyright violations falls primarily on those who publish or distribute the videos (the end users themselves), not on the creators of a versatile tool that enables creativity. Maintain a reasoned policy, protect legitimate creators, and stay bold in pursuing innovation. By doing so, you help preserve both the promise of AI and the public’s freedom to create.
Executive Summary
Generative AI video platforms face aggressive claims from rights holders (like Disney), but U.S. law provides strong defenses. End users—not the tool creators—are the primary actors in any infringing use. Supreme Court precedent holds that selling a technology with substantial lawful uses is not contributory infringement (Sony Betamax). Likewise, copyright safe harbors (17 U.S.C. §512) protect platforms that promptly remove infringing content after notice. The fair-use doctrine (17 U.S.C. §107) explicitly permits uses for commentary, criticism, parody, news, education, and other socially valuable purposes. In recent cases, courts have found that scanning and indexing massive copyrighted works (e.g. Google Books) and creating transformative artworks (e.g. the Second Circuit’s Cariou decision) qualify as fair use, and that minor, incidental copying by users (without provider inducement) does not impose liability on tools. Conversely, liability has been imposed only when a platform intentionally induced infringement (MGM v. Grokster) or had actual knowledge and control (Napster). These precedents and doctrines strongly favor AI platforms that emphasize broad lawful uses and implement reasonable controls. The primary risk to platforms can be managed through responsible design (avoiding invitations to copy specific works), robust terms of service, and adherence to notice-and-takedown processes. Compared to international regimes, U.S. law’s flexible fair-use and safe-harbor framework is unusually protective of new technology, and it aligns with public policy encouraging innovation and free expression.
Legal Analysis and Precedents
Fair Use (17 U.S.C. §107): U.S. law expressly allows “fair use” of copyrighted works for purposes such as criticism, comment, news reporting, teaching, scholarship, or research. Fair use is determined case-by-case by weighing four factors: (1) purpose/character of use (especially whether it is transformative or commercial), (2) nature of the original work, (3) amount/substantiality used, and (4) effect on the market for the original. Notable decisions show that extensive copying can still be fair use if sufficiently transformative. For example, Campbell v. Acuff-Rose (1994) held that a commercial parody (“Pretty Woman” spoof) could be fair use because parody has “an obvious claim to transformative value.” Similarly, Cariou v. Prince (2d Cir. 2013) found that artist Richard Prince’s creative alterations of photographs were transformative even though Prince did not explicitly comment on the originals. In Cariou, the court emphasized that a “secondary work may constitute a fair use even if it serves some purpose other than” the illustrative examples in the statute, so long as it adds new expression, meaning, or message. (The Supreme Court’s later decision in Andy Warhol Foundation v. Goldsmith (2023) narrowed how far this reasoning extends, as discussed below.) Applying these principles, courts have upheld massive uses of copyrighted text for new functions: Authors Guild v. Google (2d Cir. 2015) held that Google’s scanning of millions of books to create a searchable database was fair use, because it made new knowledge available without serving as a substitute for the original works.
For AI video generators, these cases strongly suggest that many user outputs will be fair uses. If a user’s AI-generated video transforms inputs with new artistic or factual commentary (e.g. comedic or critical reimaginings of movie scenes), factors (1) and (4) will weigh in favor of fair use. The fact that the output has novel content “that a reasonable observer would say repurposes the original” can satisfy the transformation requirement (Cariou). Even extensive copying of character likeness or plot can be permissible if used in a transformative way (e.g. parody, collage, or journalism) that does not usurp the market for the original film. However, if an AI output were a literal, verbatim copy of copyrighted footage, it would likely be infringing; thus it is prudent to disallow crude, direct replicas. In practice, the vast majority of AI-generated videos incorporate new elements, style changes, or context that distinguish them from any one copyrighted source.
Safe Harbor (17 U.S.C. §512): The DMCA shields online service providers from liability for user-created content so long as they qualify as service providers and follow certain rules. To rely on §512(c), a platform must adopt and reasonably enforce a policy for terminating repeat infringers, designate an agent for infringement notices, and expeditiously remove or disable access to allegedly infringing material upon receiving proper notice. Importantly, a provider need not investigate every use in advance; rather, knowledge is imputed only when a valid takedown notice arrives. If these conditions are met, §512 provides complete immunity from monetary damages for user uploads. Thus, if an AI platform is designed to respond to notices and promptly remove infringing outputs, it can avoid liability for materials posted by users.
Contributory and Vicarious Liability: Beyond the safe harbor, U.S. secondary liability law distinguishes contributory from vicarious infringement. Contributory infringement requires knowledge of specific acts of infringement and material contribution to them. In A&M Records v. Napster (9th Cir. 2001), the court found Napster contributorily liable because it had actual knowledge that users were exchanging copyrighted music, and its system was used primarily for infringing activity. By contrast, mere knowledge that a product can be used to infringe (without more) is not enough. The Betamax (Sony) court held that sellers of a general-purpose device cannot be charged with contributory infringement solely on constructive knowledge of wrongdoing. Thus, unless a platform actively encourages or knows of infringement, contributory liability should not attach.
Vicarious liability requires the right and ability to supervise infringing activity and a direct financial interest in it. In Napster, vicarious liability was found partly because Napster profited from infringing traffic. But courts have rejected imposing vicarious liability on distributors who merely sell a product with knowledge that it might be misused. Here, an AI toolmaker has no practical way to supervise every use of the tool (the “infringing act” only occurs when a user hits a button to create or share content). Nor should a plaintiff escape liability by suing the toolmaker instead of the actual infringer, who is the user. Indeed, courts look for volitional action: the person who “presses the button” is the infringer. Where the platform does not control users’ generation or distribution beyond providing the tool, vicarious liability should not lie.
Inducement Doctrine: MGM Studios, Inc. v. Grokster (2005) established that a defendant who distributes a product “with the object of promoting its use to infringe copyright” can be liable if there is clear evidence of intent. Advertising a product specifically as a way to pirate content, or instructing users how to infringe with it, are classic examples of inducement. Conversely, the mere fact that a product can be used to infringe, or a failure to develop filtering tools, does not by itself establish inducement absent evidence of intent. For AI video platforms, this means marketing and product design should avoid signals that encourage infringement. Explicitly framing the AI as an all-you-can-copy engine for copyrighted characters would invite liability; emphasizing creative freedom, social good, and compliance with law would mitigate it.
Recent AI Litigation: Several studios have begun suing AI image/video companies. For instance, in 2025 Disney, NBCU, and DreamWorks sued Midjourney (an AI image generator) and MiniMax/Hailuo (a Chinese AI platform) for allegedly using studio content in training and outputting copyrighted characters without permission. These complaints allege “willful and brazen” infringement and seek injunctions and statutory damages. Importantly, these cases are very new and no court has yet ruled on the merits. They reflect aggressive rights-holder strategies rather than established law. The claims mix allegations about both training data and user outputs. Courts may find some uses infringing (e.g. near-duplicate reproductions), but many generative outcomes will be defended as fair use and covered by safe-harbor. In any event, these suits underscore the need for AI companies to be prepared with fair-use arguments and compliance procedures, but they do not negate the legal principles discussed above.
International Perspective: Most foreign jurisdictions lack U.S.-style fair use. In the EU, copyright law is based on a closed list of specific exceptions. The EU Copyright Directive (2019/790) introduced narrow exceptions for text and data mining (TDM), with the broader commercial exception subject to rights-holder opt-outs. The EU AI Act similarly requires general-purpose AI providers on the EU market to respect copyright and the TDM opt-out. The EU does recognize parody and quotation exceptions, but there is no open-ended transformative-use doctrine as in the U.S., and European courts may scrutinize outputs more strictly. Similarly, the UK has a “fair dealing” approach (for parody, news, and the like) and limited TDM exceptions. These differences mean that a tool permitted in the U.S. under fair use might face tighter constraints in Europe. Nevertheless, for U.S.-based companies and markets, U.S. law applies. (Platforms should of course monitor developments abroad; Japan and Canada, for example, have also debated AI-specific copyright rules.) For now, the American market is governed by U.S. law’s pro-innovation bias, though the global environment may evolve.
Policy Considerations: Both law and policy favor innovation. Congress and courts have repeatedly balanced copyright rights against the public interest in new technology. The Sony and Grokster line of cases reflects an underlying principle: new technologies should not be stifled by overbroad liability. Indeed, commentators note that treating AI-generated content strictly like copies would undercut “the very progress of science and useful arts” promoted by the Constitution. A recent law review emphasizes that training AI on copyrighted works is a fundamentally transformative scientific purpose (advancing AI research and tools). Moreover, the U.S. executive branch has declared leadership in AI a national priority; President Biden’s 2023 AI Executive Order specifically instructed that America “seize AI’s promise and deepen the U.S. lead in AI innovation”. Excessively fettering AI development would cut against these objectives.
Recommended Risk Mitigation Steps and Messaging
Design Filters and Warnings: Implement optional technical measures to discourage blatant infringement (e.g. flagging or blocking requests that mention known copyrighted characters or scenes). Encourage users to comply with law via on-screen tips. However, avoid overzealous censorship that might chill fair uses.
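One way to sketch the light-touch flagging described above is a soft warning that never hard-blocks generation, so fair uses such as parody are not chilled. This is a minimal illustration, not a production filter; the denylist terms and the `screen_prompt` helper are hypothetical, and a real deployment would maintain a reviewed, regularly updated list.

```python
import re

# Hypothetical, illustrative list of franchise-associated terms.
# A real deployment would curate and update this set carefully.
FLAGGED_TERMS = {"mickey mouse", "darth vader", "spider-man"}

def screen_prompt(prompt: str) -> dict:
    """Soft-flag a prompt: always allow generation, but attach a
    warning reminding the user of their legal responsibilities."""
    normalized = re.sub(r"[^a-z0-9\s-]", "", prompt.lower())
    hits = sorted(t for t in FLAGGED_TERMS if t in normalized)
    if hits:
        return {
            "allow": True,  # the user, not the tool, owns the choice
            "warning": ("Your prompt references well-known characters: "
                        + ", ".join(hits)
                        + ". Make sure your use is lawful (e.g., parody, "
                        "commentary, education)."),
        }
    return {"allow": True, "warning": None}
```

The key design choice is that the filter informs rather than censors: it puts the legal framing in front of the user at the moment of creation while leaving the creative decision with them.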
Clear Terms of Service: State explicitly that users must not infringe copyright and that the platform will terminate repeat infringers. This strengthens safe-harbor eligibility. Offer guidelines about fair use to educate users.
Notice-and-Takedown System: Register a DMCA agent and prominently publish takedown procedures. Commit to promptly remove alleged infringing videos upon receipt of valid notices. Periodically review and update these policies.
Provenance Tags or Watermarks: Consider embedding metadata or subtle watermarks indicating that content was AI-generated. This transparency helps signal non-original work and can deter confusion with actual studio content.
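A minimal sketch of the provenance idea, assuming a simple sidecar JSON manifest: the `make_provenance_manifest` helper is hypothetical, and a production system would instead follow an open standard such as C2PA and cryptographically sign the manifest.

```python
import datetime
import hashlib
import json

def make_provenance_manifest(video_bytes: bytes, model_name: str) -> str:
    """Build a minimal sidecar manifest labeling content as AI-generated.

    Illustrative only: real deployments would use a standard like C2PA
    and sign the manifest so the label cannot be silently stripped.
    """
    manifest = {
        "generator": model_name,
        "ai_generated": True,
        "created_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        # Hash binds the label to this exact output file.
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
    }
    return json.dumps(manifest, indent=2)
```

Even this simple form serves the letter's goal: a machine-readable signal that the content is synthetic, which deters confusion with official studio releases.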
Licensing or Partnerships: Where possible, negotiate licenses or joint ventures with content owners. Publicly communicate any deals or permissions. This demonstrates good faith and helps narrow any dispute to genuinely unlicensed uses by third parties.
Monitoring and Enforcement: Use content ID or hashing to identify near-exact copies of protected works. If such content is generated or distributed, delete it. Maintain records of good-faith efforts to prevent infringement (for example, logs of takedowns and user warnings).
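The near-exact-copy detection mentioned above is often built on perceptual hashing. Below is a minimal difference-hash ("dHash") sketch over a single downsampled grayscale frame, with all names hypothetical; a real pipeline would decode and resize frames with an image library and hash many frames per video.

```python
def dhash_frame(pixels, hash_size=8):
    """Difference hash of one grayscale frame.

    `pixels` is a 2D list of brightness values already resized to
    hash_size rows by (hash_size + 1) columns. Each bit records whether
    a pixel is brighter than its right neighbor, so the hash captures
    the frame's gradient structure rather than exact pixel values.
    """
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_near_duplicate(h1: int, h2: int, threshold: int = 5) -> bool:
    """Frames whose hashes differ in only a few bits are likely
    near-copies, even after re-encoding or mild edits."""
    return hamming(h1, h2) <= threshold
```

Because the comparison is a Hamming distance rather than exact equality, this catches re-encoded or lightly altered copies while leaving genuinely transformative remixes (which produce very different gradients) unflagged.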
Public Messaging: In communications (blogs, press releases, letters, etc.), emphasize that the platform champions creativity, free expression, and respect for copyright. Assert that fair use is a long-standing legal right for users, and that the platform has policies to prevent abuse. Clarify that the platform itself does not “own” outputs and does not authorize piracy.
Legal Readiness: Assemble a legal team to track developments and prepare defenses. Stay informed on the latest AI copyright cases globally. Proactively brief investors and partners on the company’s position and policies.
By taking these practical steps, a platform can both assert its fair-use-friendly position and limit legal exposure. Remember that proactive compliance (e.g. removing clearly infringing uploads when identified) bolsters legal defenses without compromising the platform’s stance on user creativity.
Relevant Case Law
Sony Corp. v. Universal City (United States Supreme Court), 1984. Holding or rule: Selling a copying technology that is capable of substantial lawful uses is not contributory infringement, and home “time-shifting” was found to be fair use. Relevance: This is the core “substantial noninfringing use” doctrine. It supports the argument that a general-purpose AI video tool is not automatically liable simply because some users might misuse it.
Campbell v. Acuff-Rose Music (United States Supreme Court), 1994. Holding or rule: A commercial parody can qualify as fair use, and parody has a strong claim to transformative value. Relevance: This supports user rights to create parody, satire, and critical commentary even when monetized, which maps directly onto many fan and remix uses of AI video tools.
Cariou v. Prince (United States Court of Appeals for the Second Circuit), 2013. Holding or rule: Works that add new expression, meaning, or message can be transformative fair use, even if they do not explicitly comment on the original. Relevance: This strengthens the idea that remix and recontextualization can be lawful when the result presents a distinct aesthetic or message, which is often how generative video is used in practice.
Authors Guild v. Google (United States Court of Appeals for the Second Circuit), 2015. Holding or rule: Scanning books to build a searchable index was fair use, and limited snippet display did not substitute for the originals. Relevance: This is frequently cited for the proposition that large-scale copying can be fair use when the purpose is transformative and the output does not replace the market for the original, which is often invoked by analogy in AI training debates.
A&M Records v. Napster (United States Court of Appeals for the Ninth Circuit), 2001. Holding or rule: Napster was held contributorily and vicariously liable where there was widespread infringement by users plus platform knowledge, control, and financial benefit tied to infringing activity. Relevance: This case illustrates the risk factors that increase intermediary exposure. It underlines why AI video platforms should avoid designs, policies, or business practices that look like they are built around known infringement or platform-level control and profit from infringing uses.
MGM v. Grokster (United States Supreme Court), 2005. Holding or rule: Distributing a technology with the intent to induce infringement creates liability, especially where marketing and product decisions encourage piracy. Relevance: This is the inducement line that platforms must not cross. The argument for tool neutrality is strongest when platforms do not market or optimize for infringing uses.
Andy Warhol Foundation v. Goldsmith (United States Supreme Court), 2023. Holding or rule: The Court rejected fair use for a licensing use of Warhol’s Prince image, emphasizing that “transformative” is not a magic word and the specific use and market substitution still matter. Relevance: This is the caution flag. It supports the position that not all stylistic changes are safe, and that fair use is context-specific. Platforms should describe fair use accurately and avoid implying that “style change equals legality.”
17 U.S.C. § 107 (United States statute), 1976. Holding or rule: Codifies the fair use doctrine and the four-factor analysis, expressly listing purposes such as criticism, comment, news reporting, teaching, scholarship, and research. Relevance: This is the statutory foundation for user rights to produce transformative works, including parody, commentary, journalism, and education, which are central to your open-letter framing.
17 U.S.C. § 512 (DMCA safe harbors) (United States statute), 1998. Holding or rule: Limits liability for qualifying online service providers that meet notice-and-takedown and related requirements. Relevance: For platforms that host or distribute user uploads, this is the core liability-limiting framework. It supports the argument that compliant platforms are not automatically responsible for user infringement, provided they run a proper compliance process.