I Had Doubts About Accepting Payment from Anthropic for My Books, But Now I’m Onboard

One billion dollars may not carry the weight it once did, but it certainly sharpens focus. That was my reaction when I learned that the AI firm Anthropic had agreed to pay at least $1.5 billion to compensate authors and publishers whose works were used to train an earlier version of its large language model, Claude. The agreement followed a judge’s summary judgment finding that the company had pirated those books. The proposed settlement, still under review by a cautious judge, would pay authors a minimum of $3,000 per book. I have eight books, and my wife has five. We’re talking bathroom-renovation money here!
While this settlement addresses issues of pirated works, it leaves unresolved the larger question of whether AI companies should be allowed to train their algorithms on copyrighted materials. Yet, it’s noteworthy that real financial stakes are now in play. Previously, discussions on AI copyright revolved around legal and ethical hypotheticals. With real dollars at stake, it’s high time we confront the core issue: If top-tier AI depends on book content, is it just for companies to generate trillion-dollar revenues without compensating authors?
Legal questions aside, I’ve long wrestled with this dilemma. But as the fight moves from courtroom arguments to financial settlements, my perspective has shifted: I believe I deserve to be compensated! Paying authors feels fundamentally right, despite powerful voices arguing otherwise (including President Donald Trump).
Disclaimer in Fine Print
Before I continue, I must lay out a significant disclaimer. As I mentioned, I am an author myself, and my financial interests are tied to the outcome of this debate. I also serve on the council of the Authors Guild, which actively advocates for authors’ rights and is currently suing OpenAI and Microsoft for using authors’ works in their training datasets. (Because I cover those companies as a journalist, I abstain from votes on litigation involving them.) To be clear, I’m expressing my personal views here.
Historically, I’ve been somewhat of an outlier on the council, genuinely conflicted about companies’ right to train their models on legitimately purchased books. The idea that humanity is creating an expansive repository of knowledge truly resonates with me. In a 2023 interview, the artist Grimes voiced excitement about being part of that endeavor: “Oh, sick, I might get to live forever!” That sentiment struck a chord with me as well; a key reason I love what I do is spreading my ideas.
However, integrating a book into a large language model produced by a major corporation is an entirely different scenario. Remember that books are arguably the richest resource an AI model can utilize. Their depth and coherence serve as unique guides to human thought. The breadth of subjects they encompass is immense, providing more reliability than social media and a deeper insight than news articles. I would assert that without books, large language models would be significantly less effective.
This leads to the argument that companies like OpenAI, Google, Meta, and Anthropic should pay substantially for book access. Recently, at a controversial White House tech dinner, CEOs highlighted monstrous sums they are supposedly investing in U.S.-based data centers to satisfy AI’s computational requirements. Apple pledged $600 billion, with Meta claiming it would match that. OpenAI is part of a $500 billion initiative called Stargate. In comparison, the $1.5 billion that Anthropic has agreed to distribute to authors and publishers as part of this infringement settlement seems modest.
The Issue of Unfair Use
That said, it’s possible that the law favors these companies. Copyright law includes a provision called “fair use,” which allows the unlicensed use of books and articles based on several criteria, one of which asks whether the use is “transformative”—meaning it builds on the original in a way that does not compete with it. The presiding judge in the Anthropic infringement case ruled that training on legally acquired books is protected by fair use. That call is a difficult one, since it rests on legal standards set down before the digital age, let alone the AI era.
Undoubtedly, a resolution that reflects current realities is needed. The AI Action Plan the White House released in July did not provide one. Trump, however, did weigh in on the matter, suggesting that authors shouldn’t be compensated because a fair payment system would be too difficult to establish. “You can’t expect to have a successful AI program when every single article, book, or anything else that you’ve read or studied, you’re supposed to pay for,” he said. “We appreciate that, but just can’t do it—because it’s not doable.” (An administration source told me this week that this view “sets the tone” for official policy.)
