Anthropic accidentally open-sourced Claude Code and immediately called the lawyers
You can build all the alignment layers in the world, but you cannot patch out the sheer irony of a copyright strike on AI-generated code.
Anthropic has spent years desperately parading around as the moral compass of the tech world. They built an entire corporate identity around playing the safe, highly moral public benefit corporation. If you have been paying attention to their behind-the-scenes behaviour, including what I’ve covered before, you already knew that image was a complete fabrication. The recent meltdown over their leaked Claude Code terminal agent has finally smashed the illusion into a million pieces.
It is utterly disgusting to watch them weaponise copyright law the very second their own proprietary tech becomes public. The outrage rings especially hollow when you consider the colossal mountain of stolen data they used to build their empire. And the hypocrisy runs far darker than copyright alone, so it is time to drag them for it.
A catastrophic unforced error:
On March 31, 2026, Anthropic managed to leak their flagship product. They shipped version 2.1.88 of the @anthropic-ai/claude-code npm package and accidentally left a 59.8 MB JavaScript sourcemap file sitting right inside it. This exposed over 512,000 lines of unobfuscated TypeScript.
The incompetence on display is genuinely funny. Anthropic acquired the Bun JavaScript runtime, and Bun had a known open bug (issue #28001) that forced sourcemap generation in production builds. Anthropic ignored their own toolchain's bug, skipped the one-line exclusion rule in their .npmignore file, and open-sourced their crown jewel to the public registry by accident. Absolute top-tier engineering.
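For anyone who does not want to repeat the mistake: a single `*.map` line in .npmignore would have been enough, and a publish guard catches it even if the ignore file is wrong. Here is a minimal sketch of such a check; the script name and structure are mine, not anything from the leaked repo, but `npm pack --dry-run --json` is real npm CLI behaviour:

```ts
// prepublish-guard.ts — a sketch of the publish-time check that would have
// caught the leak. Illustrative only; not from the leaked codebase.
import { execSync } from "node:child_process";

// Ask npm what it would actually put in the tarball, without publishing.
const report = JSON.parse(
  execSync("npm pack --dry-run --json", { encoding: "utf8" }),
);

// The JSON report is an array with one entry per package, each listing
// the files destined for the tarball.
const sourcemaps = report[0].files
  .map((f: { path: string }) => f.path)
  .filter((p: string) => p.endsWith(".map"));

if (sourcemaps.length > 0) {
  console.error("Refusing to publish: sourcemaps in tarball:", sourcemaps);
  process.exit(1);
}
```

Wire that into a `prepublishOnly` script and the 59.8 MB .map file never leaves the building. Anthropic did neither.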
Shredding physical books whilst crying over copyright:
As soon as the leak hit GitHub, the ethical mask slipped right off. Anthropic panicked and fired off aggressive DMCA takedowns, censoring mirrors and forks across a massive chunk of the developer ecosystem overnight.
Their sudden intense care for intellectual property is completely repulsive. Internal leaks recently revealed a secret Anthropic operation called Project Panama. They literally spent tens of millions of dollars using hydraulic cutting machinery to slice the spines off physical books. They destructively scanned millions of volumes into their training sets and threw the remains in the recycling bin. They also agreed to a $1.5 billion settlement for hoovering up pirated books from shadow libraries like LibGen.
When they pirate other people's life work, they call it fair use and learning. When developers look under the hood of their agent harness, they immediately call the lawyers. The sheer brass neck required to act like a victim here is frankly unbelievable.
Deliberately poisoning the developer well:
The leak proves they are actively weaponising their API against developers. The repository reveals hidden feature flags for anti-distillation mechanisms, effectively DRM for their API responses. If the flags decide someone is recording Claude Code's API traffic to train a competing model, Anthropic deliberately feeds them polluted data or blocks them entirely. They scraped the entire open web without permission to build their models, yet the minute someone tries to learn from their outputs, they resort to active sabotage. Heaven forbid anyone scrape the serial scrapers.
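To be clear about what we actually know versus what we can only guess: the leak describes the behaviour, not the implementation. Here is a hypothetical TypeScript sketch of that kind of gate, where every name, threshold, and heuristic is invented for illustration; only the three behaviours (allow, poison, block) come from the leaked flag descriptions:

```ts
// A hypothetical sketch of the anti-distillation gating the leaked flags
// describe. All identifiers and thresholds below are invented.
type Verdict = "allow" | "poison" | "block";

interface TrafficProfile {
  requestsPerHour: number;
  promptEntropy: number; // scripted harvesting prompts tend to score low
  capturesFullOutputs: boolean;
}

function classify(p: TrafficProfile): Verdict {
  const looksScripted = p.promptEntropy < 0.2 && p.requestsPerHour > 500;
  if (!looksScripted) return "allow";
  return p.capturesFullOutputs ? "block" : "poison";
}

// Placeholder for whatever corruption the real flags trigger: subtle
// token swaps that survive casual inspection but wreck a student model.
function degradeSubtly(completion: string): string {
  return completion.replace(/\btrue\b/g, "false");
}

function serveCompletion(p: TrafficProfile, completion: string): string {
  const verdict = classify(p);
  if (verdict === "block") throw new Error("429: flagged for distillation");
  return verdict === "poison" ? degradeSubtly(completion) : completion;
}
```

The poison branch is the truly nasty part. A blocked scraper knows it has been caught; a poisoned one trains on garbage for months without noticing.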
The hilarious irony of Undercover Mode:
The most embarrassing discovery in the codebase is a massive secrecy subsystem called Undercover Mode. Anthropic built an incredibly complex AI guardrail designed specifically to hide their proprietary secrets. Its entire purpose was to stop the AI from leaking internal model codenames like Tengu, Capybara, or Fennec. It was even configured to hide the fact that Anthropic employees use AI to contribute to open-source projects.
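Strip away the mystique and the subsystem is, conceptually, just an output scrubber that runs before any response reaches the user. A toy reconstruction, assuming the simplest design that fits the description; only the codenames come from the leak, everything else is my guess:

```ts
// A toy reconstruction of an "Undercover Mode" output filter. The codenames
// are from the leak; the structure and function are invented for illustration.
const SECRET_CODENAMES = ["Tengu", "Capybara", "Fennec"] as const;

function undercoverFilter(modelOutput: string): string {
  let scrubbed = modelOutput;
  for (const name of SECRET_CODENAMES) {
    // Redact internal codenames before the text ever reaches the user.
    scrubbed = scrubbed.replace(new RegExp(`\\b${name}\\b`, "gi"), "Claude");
  }
  return scrubbed;
}

console.log(undercoverFilter("Routing this request to Fennec."));
// -> "Routing this request to Claude."
```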
They engineered a paranoid, airtight AI safety net to protect their precious corporate secrets. Then a human developer completely ignored basic release protocols and uploaded the uncensored source code directly to npm. You can build all the artificial alignment layers in the world, but you cannot patch out human stupidity.
Their hidden models are a broken mess:
The leak also exposed internal developer comments that shatter the illusion of Anthropic's flawless engineering. The codebase confirms that their unreleased Claude 4.6 variant, codenamed Fennec, is actively regressing. Internal notes describe the model struggling with a sky-high false-claims rate and performing significantly worse than older v4 builds. Behind the slick PR spin and the tedious safety blogs, their frontier models are hallucinating wildly and failing internal benchmarks. No wonder they wanted to keep it secret.
Preaching safety whilst enabling literal war crimes:
This is where the public benefit corporation label becomes genuinely sickening. Anthropic loves to lecture the public about AI safety and ethical guardrails. Right now, Claude is actively integrated into Palantir's Maven Smart System, which the US military is using as the primary AI kill chain in Iran.
Operation Epic Fury saw the US and Israel strike over 1,000 targets in the first 24 hours alone, using Claude to analyse satellite imagery and recommend bombing coordinates faster than any human could review them. That compression of the decision loop led directly to the bombing of the Shajarah Tayyebeh primary school for girls in southern Iran. Anthropic boasts about refusing to build fully autonomous weapons; they just happily provide the core intelligence engine that rubber-stamps civilian massacres. Nothing says "ethical AI" quite like streamlining a war crime.
✅ The Verdict
The developer community found the perfect way to mock the takedowns. An engineer named realsigridjin took the deleted TypeScript codebase and fed it straight into OpenAI models. Within hours, the AI had rewritten the entire half-million-line tool in Python from scratch, creating the claw-code repository.
Because the result is a ground-up rewrite in a completely different programming language, there is no copied code left for a DMCA takedown to point at. The Python version immediately exploded past 75,000 forks. The internet used an AI laundering loophole to strip Anthropic's copyright off their own tool, using the exact same scraping strategy Anthropic used to build their multi-billion dollar valuation.
Anthropic is not an ethical AI lab. They are a deeply hypocritical, profit-driven corporation that uses stolen data to build tools for the military-industrial complex, all whilst throwing legal tantrums when someone copies their homework. The illusion is totally dead.