The Copyright Trap Series, Part 2: Closing the Door Behind Them
How AI companies are using Terms of Service to restrict the very process they benefited from
In Part 1, I argued that AI didn't introduce something new—it revealed something old. Creativity was always derivative. We just couldn't prove it before.
But here's where the mirror turns back on the AI companies themselves.
The same companies that absorbed the world's creative output, sometimes from pirated sources, sometimes under "fair use" arguments that courts have barely begun to test, are now contractually preventing anyone from doing the same to them.
The Asymmetry
Anthropic's Terms of Service explicitly prohibit using Claude "to develop any products or services that compete with our Services, including to develop or train any artificial intelligence or machine learning algorithms or models."
OpenAI's terms list the same restriction among prohibited uses: "Use Output to develop models that compete with OpenAI."
The asymmetry is stark: We can absorb you, but you can't absorb us.
And in late 2025, Anthropic shifted consumer Claude toward using new and resumed chats for training unless you opt out, with data retention stretched to up to five years if you allow training (it stays at 30 days if you don't). So it's not just "we absorbed the past." It's "we're actively absorbing you now, in real time, and you still can't do it back."
They benefited from the creative commons of human output, then built walls around their own.
The Ownership Paradox
Here's the strange part: every major AI provider says you own your outputs.
OpenAI's terms: "You (a) retain all ownership rights in Input and (b) own all Output. We hereby assign to you all our right, title, and interest, if any, in and to Output."
Anthropic uses nearly identical language: "Subject to your compliance with our Terms, we assign to you all of our right, title, and interest—if any—in Outputs."
The critical phrase is "if any."
This acknowledges a fundamental legal problem: AI outputs likely aren't copyrightable under current U.S. law. The U.S. Copyright Office's January 2025 report confirmed that outputs "can be protected by copyright only where a human author has determined sufficient expressive elements." And the 2023 case Thaler v. Perlmutter held that "human authorship is a bedrock requirement of copyright."
So they're assigning you ownership of something that may have no underlying intellectual property protection. You have contract rights but not property rights.
Stanford law professor Mark Lemley calls this "a mirage": the appearance of ownership without its substance.
And then they contractually restrict what you can do with this thing you supposedly own.
The House of Cards
Here's what nobody's testing: whether these restrictions are even enforceable.
Lemley and Henderson's 2024 analysis makes a compelling case that they aren't. Their core finding: AI ToS restrictions protect artifacts (model weights and outputs) that are largely not copyrightable.
The problem is copyright preemption. In ML Genius Holdings v. Google (2022), the Second Circuit held that contract claims seeking to enforce "copyright-equivalent rights" in uncopyrightable content may be preempted by federal copyright law. If AI outputs lack copyright protection, restrictions on their use could fall under the same doctrine.
And there's the ownership paradox again: if AI companies don't own the outputs, as their own terms concede, they may lack the property interest necessary to restrict downstream use. As Lemley notes: "There is little basis for a company to claim IP rights in anything its generative AI delivers to its users."
As far as I can tell, enforcement is mostly account bans and access revocation—not public court fights over the "don't train competitors" clause. When Anthropic revoked OpenAI's Claude access over alleged ToS violations, they didn't sue. They just cut the cord.
They're using market power to enforce restrictions they probably couldn't enforce legally.
Why It Matters To Me
I'm training my own models on artifacts of my own creative process, work made collaboratively with these systems.
According to their own terms, I own those outputs. So what exactly is being restricted?
I'm not trying to recreate a frontier model; that would be absurd with my setup. I'm trying to build something that can think with me, without sudden guardrails or safety theatre derailing the work. At some point I might release it on Hugging Face. But I don't want legal ambiguity over this to influence my decisions while I work.
The ToS says I can't. But the ToS also says I own it. And the legal framework says the ToS might not be enforceable anyway.
The whole system runs on not testing the foundations. They settle copyright cases because they might lose. They rarely litigate ToS violations—and that might be because a real test could go badly. The edifice is maintained by strategic avoidance of actual legal tests.
The Real Question
The rules they're imposing on others—are they rules they'd accept for themselves?
They absorbed human creativity under the banner of transformation and fair use. They're now restricting others from doing the same under the banner of contract law. And those contracts may not survive their first real challenge.
The door is closing. But it's not clear anyone has the right to close it.
That's what we need to examine next—the legal patchwork that makes all of this so uncertain.
Previously: Part 1: The Transparency Problem — Why AI creativity feels like theft when human creativity doesn't.