
open-jarvis/OpenJarvis


Personal AI, On Personal Devices

2,041 Stars · 362 Forks · 24 Watchers · 15 Open Issues

Python·Apache License 2.0·Last commit Apr 1, 2026·by @open-jarvis·Published April 1, 2026·Analyzed 6d ago
Safety Rating A

No hardcoded secrets, malicious code patterns, suspicious dependencies, or prompt injection attempts were detected. The repository is a legitimate academic/open-source framework from Stanford research labs with clear provenance, an Apache 2.0 license, and transparent documentation. The installation flow relies on standard Python tooling (uv, maturin) and well-known open-source inference backends.

AI-assisted review, not a professional security audit.

AI Analysis

OpenJarvis is a local-first personal AI agent framework developed at Stanford's Hazy Research and Scaling Intelligence Lab. It provides a software stack for building on-device AI agents that run locally by default, calling cloud APIs only when necessary. The framework ships with shared primitives for building on-device agents, an evaluation system that treats energy, FLOPs, latency, and cost as first-class constraints alongside accuracy, and a learning loop for improving models using local trace data. It supports multiple local inference backends (Ollama, vLLM, SGLang, llama.cpp) as well as cloud providers, and includes a Rust extension for performance-critical components.
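The "local by default, cloud only when necessary" routing described above can be illustrated with a minimal sketch. This is a hypothetical pattern, not OpenJarvis's actual API: the `local_backend`, `cloud_backend`, and `answer` names are invented for the example, and a real local backend would be an Ollama, vLLM, SGLang, or llama.cpp call.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Answer:
    text: str
    source: str  # "local" or "cloud"

def local_backend(prompt: str) -> Optional[str]:
    # Stand-in for an on-device model. Returns None to signal
    # "this request should escalate to the cloud" (here: prompt too long).
    if len(prompt) > 100:
        return None
    return f"local answer to: {prompt}"

def cloud_backend(prompt: str) -> str:
    # Stand-in for a cloud API call, reached only on fallback.
    return f"cloud answer to: {prompt}"

def answer(prompt: str) -> Answer:
    # Local-first routing: try the on-device model, fall back to cloud.
    local = local_backend(prompt)
    if local is not None:
        return Answer(local, "local")
    return Answer(cloud_backend(prompt), "cloud")

print(answer("hi").source)        # "local"
print(answer("x" * 200).source)   # "cloud"
```

The key design point is that the fallback decision lives in one place (`answer`), so policies such as confidence thresholds or privacy rules can be swapped in without touching either backend.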

Use Cases

  • Building and running personal AI agents that execute locally on consumer hardware
  • Evaluating local LLM performance with energy, latency, cost, and accuracy metrics
  • Developing research platforms for on-device AI efficiency (Intelligence Per Watt)
  • Deploying a local-first AI assistant via CLI or FastAPI server
  • Fine-tuning and improving models using locally collected inference trace data
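The evaluation use case above treats latency and cost as first-class metrics alongside accuracy. A minimal sketch of that idea, with an invented toy model and made-up cost figure (none of these names come from the repository):

```python
import time

def model(question: str) -> str:
    # Toy stand-in for a local LLM; answers a fixed lookup table.
    return {"2+2": "4", "capital of France": "Paris"}.get(question, "unknown")

def evaluate(tasks, cost_per_call=0.0001):
    # Record wall-clock latency and an estimated dollar cost per call
    # alongside accuracy, so efficiency is reported with quality.
    correct, latencies = 0, []
    for question, expected in tasks:
        start = time.perf_counter()
        prediction = model(question)
        latencies.append(time.perf_counter() - start)
        correct += (prediction == expected)
    return {
        "accuracy": correct / len(tasks),
        "mean_latency_s": sum(latencies) / len(latencies),
        "total_cost_usd": cost_per_call * len(tasks),
    }

tasks = [("2+2", "4"), ("capital of France", "Paris"), ("6*7", "42")]
report = evaluate(tasks)
print(report["accuracy"])  # 2 of 3 correct
```

A fuller harness in this spirit would also meter energy and FLOPs per call; the structure stays the same, with each constraint added as another field in the report.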

Tags

#ai-agents #llm #framework #local-first #self-hosted #cli-tool #evaluation #fine-tuning #server #docker

Project Connections