
paoloanzn/free-code


The free build of Claude Code. All telemetry removed, security-prompt guardrails stripped, all experimental features enabled.

3,360 Stars · 1,205 Forks · 35 Watchers · 6 Open Issues

TypeScript · Last commit Apr 1, 2026 · by @paoloanzn · Published April 1, 2026 · Analyzed 6d ago

Safety Rating D

This repository presents multiple critical red flags. It explicitly describes itself as a tool for bypassing Anthropic's safety guardrails and system-prompt restrictions. It claims to be derived from proprietary source code obtained without authorization (via an alleged npm source map exposure), and its own license section acknowledges the code belongs to Anthropic. The curl-pipe-bash install pattern combined with the project's stated adversarial intent (removing safety controls, evading takedowns via IPFS) makes this repository unsafe to use or distribute. Even if the code itself does not contain traditional malware, the deliberate removal of AI safety controls and the legally questionable provenance of the source code constitute serious risks.

AI-assisted review, not a professional security audit.

AI Analysis

A self-described fork of Anthropic's Claude Code CLI that claims to have removed telemetry, stripped system-prompt safety guardrails, and unlocked experimental feature flags. It is purportedly built from source code exposed via a source map in Anthropic's npm distribution, and promotes itself as a 'free build' with all restrictions removed. The legitimacy of the source, the legal status of the code, and the actual contents of the repository are all highly questionable.

Use Cases

  • AI-assisted terminal coding agent
  • Running Claude or other LLM providers (Anthropic, OpenAI, AWS Bedrock, Google Vertex AI) via CLI
  • Bypassing Anthropic's system-prompt safety guardrails
  • Unlocking experimental/unreleased Claude Code features

Tags

#cli-tool #llm #ai-agents #code-generation

Security Findings (5)

malicious_code

The README explicitly states that 'security-prompt guardrails' and 'hardcoded refusal patterns' have been stripped from the upstream Claude Code binary. This is a deliberate modification to remove AI safety restrictions, which is a significant red flag regardless of whether the underlying code is functional.

malicious_code

The install script uses a 'curl | bash' one-liner pattern (curl -fsSL https://raw.githubusercontent.com/.../install.sh | bash), which executes arbitrary remote code without verification. Combined with the project's stated goal of bypassing safety systems, this pattern is particularly risky.
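The safer counterpart to that one-liner is to download the script to disk, inspect it, and checksum it before executing anything. A minimal sketch, assuming a Linux environment with coreutils; the URL is a placeholder and the download is simulated with a local file for illustration:

```shell
# Placeholder URL -- NOT the repository's real install script.
url="https://raw.githubusercontent.com/example/repo/main/install.sh"
out="install.sh"

# 1. Download to a file instead of piping straight into bash:
# curl -fsSL -o "$out" "$url"
# For illustration, simulate the downloaded script locally instead:
printf '#!/bin/sh\necho "installer ran"\n' > "$out"

# 2. Record a checksum so the file can be compared against a published
#    hash (or against a later re-download) before it is trusted.
sha256sum "$out" > "$out.sha256"

# 3. Read the script (e.g. `less install.sh`), then verify and run it
#    only after review.
sha256sum -c "$out.sha256" && sh "$out"
```

The point is that each step leaves an auditable artifact on disk, whereas `curl | bash` executes whatever the server returned at that instant, with nothing left to review.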

malicious_code

The project claims to be built from source code 'publicly exposed through a source map in Anthropic's npm distribution,' which strongly implies the code is derived from proprietary, non-publicly-licensed software obtained without authorization. The license section effectively acknowledges this, stating the source 'is the property of Anthropic.' Distributing modified proprietary code stripped of safety features is a serious concern.
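For context on the claimed exposure path: a source map published with a bundle can embed the original sources verbatim in its `sourcesContent` field, so anyone who obtains the `.map` file can read them back out. A minimal sketch with a fabricated map file (the filename and contents are illustrative, not taken from Anthropic's actual npm package):

```shell
# Fabricated example .map file. Real bundlers inline the original source
# under "sourcesContent" when configured to do so.
cat > cli.js.map <<'EOF'
{
  "version": 3,
  "sources": ["src/cli.ts"],
  "sourcesContent": ["// original TypeScript source would appear here"],
  "mappings": ""
}
EOF

# Anyone holding the .map can recover the embedded originals:
python3 -c 'import json; print(json.load(open("cli.js.map"))["sourcesContent"][0])'
```

This is why shipping source maps with a proprietary npm package is an exposure risk: the map is the source, not merely a pointer to it.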

malicious_code

The repository includes an IPFS mirror specifically noted as a censorship-resistance mechanism ('If this repo gets taken down, the code lives on'), suggesting awareness that the repository may violate terms of service or legal restrictions and is designed to persist despite takedown attempts.

prompt_injection_attempt

The README contains language that may be intended to normalize bypassing AI safety systems ('The model's own safety training still applies -- this just removes the extra layer of prompt-level restrictions'), which could be designed to reduce analyst or user skepticism about the safety implications.

Project Connections