Start-up behind Claude chatbot had agreed to pay $1.5bn to authors and publishers
A US federal judge in San Francisco has rejected a proposed $1.5 billion settlement between AI company Anthropic and authors whose work was allegedly used without permission to train its chatbot Claude.
Anthropic, the AI start-up behind the chatbot Claude, had agreed to a historic settlement of at least $1.5 billion in a lawsuit by authors and publishers who claimed their books were used without permission to train AI.
About 500,000 works were eligible; authors would receive $3,000 per work, plus interest.
Anthropic was also required to destroy all datasets containing pirated material. The company admits no liability, and the settlement grants it no licence for future use of the works without permission.
In June, a federal judge ruled that training on legally obtained books can fall under fair use. However, downloading and storing millions of pirated titles was found to be unlawful.
Judge Alsup, however, sharply criticised the plan, calling it "still far from complete" and raising concerns about ambiguities in the claims process and the risk that eligible claimants could be overlooked. If the parties cannot provide clarity on these points, a trial in December, with potentially billions of dollars in claims, remains on the table.
Business AM