The new year may bring pivotal developments in a series of copyright lawsuits that could shape the future of the AI business.
Lawsuits from authors, media outlets, visual artists, musicians and other copyright holders accuse OpenAI, Anthropic, Meta Platforms and other technology companies of using their work to train chatbots and other AI-based content generators without permission or payment.
Courts will likely begin hearing arguments next year on whether the defendants’ copying amounts to “fair use,” which may be the defining legal question of the AI copyright war.
Technology companies have argued that their AI systems make fair use of copyrighted material by studying it to learn how to create new, transformative content. Copyright holders counter that companies are illegally copying their works to generate competing content that threatens their livelihoods.
OpenAI, Meta, Silicon Valley investment firm Andreessen Horowitz and others warn that being forced to pay copyright holders for their content could cripple the booming US artificial intelligence industry. Some content owners, including Reddit, News Corp and the Financial Times, began voluntarily licensing their material to tech companies this year.
Reuters licensed its articles to Meta in October.
Other copyright holders, such as major record labels, The New York Times and several best-selling authors, have continued to press their claims or have filed new lawsuits in 2024.
AI companies could escape copyright liability in the US entirely if the courts agree with them on the issue of fair use. Judges hearing cases in different jurisdictions could reach conflicting conclusions on fair use and other issues, potentially requiring multiple rounds of appeals.
The ongoing dispute between Thomson Reuters and former legal research competitor Ross Intelligence could provide an early indication of how judges will treat fair use arguments.
Thomson Reuters – the parent company of Reuters News – alleged that Ross misused copyrighted material from its legal research platform Westlaw to build an artificial intelligence-powered legal search engine. Ross denied any wrongdoing, citing fair use.
US Circuit Judge Stefanos Bibas said last year that he could not resolve the fair use question before trial and would leave it to a jury. But Bibas canceled the scheduled trial and heard new fair use arguments in November, which could lead to a ruling in the case next year.
Another early fair use indicator could come in a dispute between music publishers and Anthropic over the use of their song lyrics to train the chatbot Claude. US District Judge Jacqueline Corley is weighing fair use as part of the publishers’ request for a preliminary injunction against the company. Corley heard oral arguments on the proposed injunction last month.
In November, US District Judge Colleen McMahon in New York dismissed a case brought by news outlets Raw Story and AlterNet against OpenAI, finding that they failed to show they were harmed by OpenAI’s alleged misconduct.
The outlets’ case differs from most of the others because it accused OpenAI of illegally removing copyright management information from their articles rather than of directly violating their copyrights. But other cases could also end without a fair use decision if judges conclude that copyright holders were not harmed by the use of their works in AI training.