zakirkun/guardian-cli

Guardian is a production-ready AI-powered penetration testing automation CLI tool that leverages Google Gemini and LangChain to orchestrate intelligent, step-by-step penetration testing workflows while maintaining ethical hacking standards.

Stars: 1,320 · Forks: 272 · Watchers: 15 · Open Issues: 4

Python · Other · Last commit Feb 27, 2026 · by @zakirkun · Published April 1, 2026 · Analyzed 6d ago
Safety Rating B

Guardian is a dual-use offensive security tool. It includes a legal disclaimer and scope-validation features (private-network blacklisting, safe mode), and appears to be a legitimate open-source security research project with significant community traction (1,320 stars). No hardcoded production secrets or malicious code patterns were found in the provided content. The Caution rating reflects the inherently dual-use nature of automating offensive tools such as sqlmap, ffuf, and XSStrike via an AI agent; responsibility for ensuring authorized use rests entirely with the operator. Curators should verify the actual source code for any unexpected network callbacks or data exfiltration not visible in the README alone.
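The scope-validation behavior described here (private-network blacklisting) can be illustrated with a minimal sketch. The function name and logic below are assumptions for illustration, not Guardian's actual implementation:

```python
import ipaddress
import socket


def is_in_scope(target: str) -> bool:
    """Hypothetical scope check: reject targets that resolve to private,
    loopback, or link-local address ranges."""
    try:
        resolved = socket.gethostbyname(target)
        addr = ipaddress.ip_address(resolved)
    except (socket.gaierror, ValueError):
        # Unresolvable or malformed targets are treated as out of scope.
        return False
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)
```

A blacklist of this kind blocks accidental scans of internal infrastructure (e.g. `192.168.0.0/16`, `127.0.0.1`) while still permitting authorized public targets.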

AI-assisted review, not a professional security audit.

AI Analysis

Guardian is a Python-based, AI-powered penetration testing automation CLI framework that orchestrates multiple specialized AI agents (Planner, Tool Selector, Analyst, Reporter) across four LLM providers (OpenAI GPT-4, Anthropic Claude, Google Gemini, OpenRouter) to conduct intelligent, adaptive security assessments. It integrates 19 security tools (nmap, nuclei, sqlmap, subfinder, ffuf, etc.), supports YAML-defined workflows with parameter priority, captures full execution evidence, and produces professional reports in Markdown, HTML, and JSON formats.
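A YAML-defined workflow of the kind described might look like the following sketch. The key names (`name`, `steps`, `tool`, `params`) are assumptions for illustration, not Guardian's documented schema:

```yaml
# Hypothetical workflow sketch; key names are illustrative only.
name: web-recon
steps:
  - tool: subfinder
    params:
      domain: "{{ target }}"
  - tool: nmap
    params:
      ports: "80,443"
  - tool: nuclei
    params:
      severity: "medium,high,critical"
```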

Use Cases

  • Automated penetration testing of web applications and networks
  • AI-orchestrated reconnaissance and vulnerability scanning
  • Multi-agent security assessment with evidence capture and reporting
  • Custom workflow automation for ethical hacking engagements
  • Enterprise security team tooling for repeatable pentest workflows

Tags

#ai-agents #security #cli-tool

Security Findings (2)

hardcoded_secrets

The README and configuration reference show a placeholder API key value ('sk-your-api-key-here') in the guardian.yaml example, which is illustrative, not a real secret. However, the config file pattern encourages users to store live API keys directly in config/guardian.yaml, which may be committed to source control by end users. No actual hardcoded secrets were observed in the repository content provided.
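A common mitigation for the config-file pattern described here is to read the key from an environment variable rather than `config/guardian.yaml`. The helper below is a hypothetical sketch (the variable name `GUARDIAN_API_KEY` is an assumption), not Guardian's actual loader:

```python
import os


def load_api_key(env_var: str = "GUARDIAN_API_KEY") -> str:
    """Hypothetical sketch: read the LLM API key from the environment so it
    never lands in a config file that might be committed to source control."""
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(
            f"Set {env_var} in the environment; do not hardcode API keys "
            "in config files."
        )
    return key
```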

malicious_code

The tool is explicitly designed to automate offensive security operations (SQL injection via sqlmap, XSS via XSStrike, web fuzzing via ffuf, subdomain enumeration, etc.). While framed as ethical/authorized-use-only, the capabilities are inherently dual-use and could facilitate unauthorized attacks if misused. No obfuscated code, exfiltration backdoors, or cryptocurrency mining patterns were observed in the provided content.

Project Connections