Anthropic's proposed $1.5 billion copyright settlement with major publishers and authors hit a significant roadblock when a federal judge delayed approval, citing concerns that lawyers are rushing the deal to secure substantial fee payouts.

The settlement, which would resolve class-action lawsuits from authors over the alleged unauthorized use of copyrighted books in training AI models, faced skepticism during a fairness hearing. The judge expressed reservations about the timeline and process, with attorneys reportedly seeking around $320 million in legal fees from the settlement pool.

The delay reflects broader tension in how such settlements get structured. The lawyers negotiating on behalf of the plaintiff class have financial incentives to conclude deals quickly, a dynamic that can conflict with ensuring affected parties receive adequate compensation. In this case, the proposed fee structure raised red flags for the court.

Anthropic, founded by former OpenAI researchers, has positioned itself as focused on responsible AI development. The settlement would require the company to establish mechanisms for compensating copyright holders and potentially implement new safeguards around training-data sourcing. However, the specifics of how funds would be distributed remain contentious.

The judge's delay suggests the court wants more transparency about whether the settlement truly protects authors and publishers, or if it primarily benefits the lawyers orchestrating it. Additional hearings will likely examine the fee structure, the fairness of compensation amounts, and whether Anthropic's obligations are substantive enough.

This case serves as a test of how copyright law will apply to generative AI. Other companies, including OpenAI and Google, face similar lawsuits. The outcome here will influence how the industry handles training-data acquisition and copyright compensation going forward. A settlement that appears one-sided risks setting a weak precedent for future disputes.

The delay suggests courts are taking copyright claims seriously rather than rubber-stamping industry-friendly resolutions.