# Getting Started
## Prerequisites
- Rust 1.94+ (if building from source)
- A supported LLM provider account (OpenAI, or a local provider like Ollama / LM Studio)
- Git (for session diffs and patch application)
## Installation
### From Source
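The build commands were missing here. Assuming Savfox is a standard Cargo project (the repository URL below is a placeholder, not the real one), a from-source build would look roughly like:

```shell
# Clone the repository (placeholder URL -- substitute the actual one)
git clone https://github.com/example/savfox.git
cd savfox

# Build in release mode and install the binary into ~/.cargo/bin
cargo install --path .
```

`cargo install --path .` compiles with optimizations and places the binary on your Cargo bin path, so no manual copying is needed.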
### Verify Installation
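A verification command was missing here; assuming Savfox follows the common convention of a `--version` flag:

```shell
# Confirm the binary is on your PATH and print its version
savfox --version
```

If the shell reports "command not found", make sure `~/.cargo/bin` (or wherever you installed the binary) is on your `PATH`.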
## Authentication
Before using Savfox, authenticate with your LLM provider:
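The authentication command itself was missing from this section. Assuming a conventional `login` subcommand (the name is an assumption, not confirmed by the source):

```shell
# Launch the interactive credential setup (subcommand name is an assumption)
savfox login
```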
This opens an interactive flow to configure your API credentials. Savfox stores credentials securely in your system keyring.
To use an open-source local provider instead:
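The command for selecting a local provider was missing. As a sketch, assuming a `--provider` flag on the login flow (both the flag and the provider identifier are assumptions):

```shell
# Point Savfox at a locally running Ollama server instead of a hosted API
# (flag and value names are illustrative, not confirmed)
savfox login --provider ollama
```

Local providers such as Ollama and LM Studio typically expose an OpenAI-compatible HTTP endpoint on localhost, so no API key is usually required.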
To log out:
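The logout command was missing; assuming it mirrors the login subcommand (name is an assumption):

```shell
# Remove stored credentials from the system keyring
savfox logout
```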
## Your First Interactive Session
Run `savfox` with no arguments to launch the interactive terminal UI:
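The invocation is just the bare binary name, as the text states:

```shell
savfox
```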
This opens a TUI (terminal user interface) where you can:
- Type messages and chat with the AI agent
- Review proposed file changes with diffs
- Approve or reject commands and patches
- Switch models and personalities
See Interactive Mode for details.
## Your First Non-Interactive Execution
Use the `exec` subcommand (alias `e`) to run a one-shot task:
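The example invocation was missing here. Using the `exec` subcommand and `e` alias named in the text (the task string is just an illustration):

```shell
savfox exec "add unit tests for the parser module"

# Equivalent, using the alias:
savfox e "add unit tests for the parser module"
```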
The agent will process your request, propose changes, and output results directly to the terminal. For JSON output (useful for scripting):
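The JSON-output example was missing. Assuming a `--json` flag (the flag name is an assumption, not confirmed by the source):

```shell
# Emit machine-readable JSON, e.g. for piping into jq (flag name is an assumption)
savfox exec --json "summarize recent changes" | jq .
```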
See CLI Reference for all available commands and options.
## Quick Tips
- Use `--model` or `-m` to specify a different LLM model.
- Use `--full-auto` for low-friction automated execution.
- Resume a previous session.
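The command examples for these tips were missing. The `--model`/`-m` and `--full-auto` flags come from the text itself; the model name and the `resume` subcommand below are assumptions:

```shell
# Pick a specific LLM model (model identifier is illustrative)
savfox exec -m gpt-4o "refactor the config loader"

# Run without per-step approval prompts
savfox exec --full-auto "fix the failing tests"

# Resume a previous session (subcommand name is an assumption)
savfox resume
```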
## Next Steps
- Interactive Mode — Learn the TUI features
- CLI Reference — All commands and flags
- Configuration — Customize Savfox behavior
- Gateway — Remote access and chat bridges