Judge rules AI company Anthropic didn’t break copyright law but must face trial over pirated books
Jun 24, 2025, 10:31 AM | Updated: 1:42 PM
In a test case for the artificial intelligence industry, a federal judge has ruled that AI company Anthropic didn’t break the law by training its chatbot Claude on millions of copyrighted books.
But the company is still on the hook and could now go to trial over how it acquired those books by downloading them from online “shadow libraries” of pirated copies.
U.S. District Judge William Alsup of San Francisco said in a ruling filed late Monday that the AI system’s distilling of thousands of written works to produce its own passages of text qualified as “fair use” under U.S. copyright law because it was “quintessentially transformative.”
“Like any reader aspiring to be a writer, Anthropic’s (AI large language models) trained upon works not to race ahead and replicate or supplant them, but to turn a hard corner and create something different,” Alsup wrote.
But while dismissing the key copyright infringement claim made by the group of authors who sued the company last year, Alsup also said Anthropic must still go to trial over its alleged theft of their works.
“Anthropic had no entitlement to use pirated copies for its central library,” Alsup wrote.
A trio of writers, Andrea Bartz, Charles Graeber and Kirk Wallace Johnson, alleged in their lawsuit last summer that Anthropic committed “large-scale theft” by training its popular chatbot Claude on pirated copies of copyrighted books, and that the company “seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.”
As the case proceeded over the past year in San Francisco’s federal court, documents disclosed in the proceedings showed Anthropic’s internal concerns about the legality of its use of online repositories of pirated works. The company later shifted its approach and attempted to purchase copies of digitized books.
“That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft, but it may affect the extent of statutory damages,” Alsup wrote.
The ruling could set a precedent for similar lawsuits that have piled up against Anthropic competitor OpenAI, maker of ChatGPT, as well as against Meta Platforms, the parent company of Facebook and Instagram.
Anthropic, founded by ex-OpenAI leaders in 2021, has marketed itself as the more responsible and safety-focused developer of generative AI models that can compose emails, summarize documents and interact with people in a natural way.
But the lawsuit filed last year alleged that Anthropic’s actions “have made a mockery of its lofty goals” by tapping into repositories of pirated writings to build its AI product.