AI for Automation
2026-03-17 · Anthropic · AI Copyright · Fair Use · FSF · AI Training Data · Copyright Lawsuit · AI Ethics

Anthropic's $1.5 Billion Copyright Settlement — AI Training Ruled Legal, FSF Demands Open AI Models

Anthropic settled its copyright lawsuit for $1.5 billion. A U.S. court ruled AI training constitutes Fair Use, while the FSF demands AI models be made open. Here's what the AI copyright ruling means.


The Anthropic copyright lawsuit has reached its conclusion. Is it legally problematic for AI to scrape books from the internet for training? A U.S. court has answered: “AI training is legal (Fair Use).” Anthropic, the maker of Claude, settled with authors for $1.5 billion, while the Free Software Foundation (FSF) rejected the money and demanded that “AI models and training data be made available to everyone.”

Anthropic Copyright Lawsuit — Why the Court Ruled AI Training as Fair Use

The lawsuit, known as Bartz v. Anthropic, was a class action filed by fiction and nonfiction authors against Anthropic. The core issue was simple: did Anthropic infringe copyright by downloading books from Library Genesis (aka Libgen, a site that freely shares books online) and Pirate Library Mirror to use for AI training?

Anthropic AI copyright lawsuit — smartphone screen displaying the Anthropic logo, maker of Claude AI

In June 2025, a U.S. federal judge ruled in Anthropic's favor: “Using books for AI training constitutes Fair Use.” However, one issue remained unresolved — whether the act of downloading books from illegal sites was itself legal. Ultimately, both sides chose to settle for $1.5 billion rather than go to trial.

FSF Rejects $1.5 Billion Settlement — Why They Demand Open AI Models

The most striking aspect of this lawsuit was the stance of the Free Software Foundation (FSF). Founded by Richard Stallman, the FSF has upheld the philosophy that "all software should be free" for over 40 years.

Free Software Foundation (FSF) logo — nonprofit organization demanding disclosure of AI training data

It was confirmed that the FSF's book "Free as in Freedom: Richard Stallman's Crusade for Free Software" (by Sam Williams) was included in Anthropic's training data. This book was published under the GNU Free Documentation License (a license that allows anyone to freely use the work).

Most claimants would simply accept the settlement money, but the FSF refused the payment. Instead, it made this demand:

“The right thing to do is to protect freedom in computing. The complete training data, the model itself, the training hyperparameters, and the source code of AI models should be shared with all users.”
— Free Software Foundation (FSF), March 2026

In short: "Don't give us money — open up the AI." Currently, AI systems like ChatGPT and Claude disclose nothing about what data they were trained on or how they work internally. The FSF argues this is a contradiction: these companies take freely available internet knowledge while keeping the results proprietary. Understanding how AI models are trained and operate is key to grasping the heart of this debate.

AI Copyright Lawsuits — Anthropic and OpenAI Face Billions in Claims

Anthropic isn't the only one being sued. In January 2026, music publishers including Universal Music Group (UMG) filed an additional $3 billion lawsuit against Anthropic, claiming the company used 20,000 copyrighted songs for training without authorization.

Anthropic AI economic futures program illustration — AI copyright lawsuits and training data debate

Similar lawsuits are spreading across the industry. Encyclopaedia Britannica has already sued OpenAI over 100,000 alleged counts of copyright infringement. The total cost the AI industry may have to pay over training data is expected to run into the billions of dollars.

The significance of this AI copyright ruling in three points:

1. AI reading and learning from books is legal under Fair Use (per the U.S. court ruling)
2. However, downloading books from illegal sites remains a contested issue
3. If this ruling sets precedent, it could favor companies in other AI copyright lawsuits

Copyright Issues Creators and Professionals Need to Know in the AI Era

This case affects everyone — those who build AI, those who use AI, and the original creators whose content AI was trained on.

If you're a creator — a writer or artist — you should be aware that your work may have been used for AI training. You can check the Anthropic Copyright Settlement site to see if your works were included and whether you're eligible to file a claim.

If you use AI at work — the origin of AI training data is becoming an increasingly important issue. Especially in corporate environments adopting AI, the question "What data was this AI trained on?" could become central to assessing legal risk.

Whether the day will come when AI models and training data are fully disclosed, as the FSF demands, remains uncertain. But one thing is clear: In the AI era, the question ‘Whose knowledge was used to build this AI?’ will echo louder and more frequently.

If you want to learn more about AI and vibe coding, check out our Free Learning Guide.
