Providers

Savfox supports 60+ cloud and local model providers via OpenAI-compatible API endpoints.

Provider Guides

  • API key/OAuth auth and model selection
  • Claude configuration and provider-specific headers
  • Gemini models via Google AI or Vertex AI
  • DeepSeek models for coding and reasoning
  • Ultra-fast inference with Groq hardware
  • Local models via OpenAI-compatible endpoint

Selection Tips

  • Coding and general usage: gpt-4.1, claude-sonnet-4, deepseek-coder
  • Fast low-cost runs: gpt-4.1-mini, claude-haiku, Groq models
  • Complex reasoning: o3, claude-opus-4, deepseek-reasoner
  • Local-first/privacy: Ollama, LM Studio
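Any of the tips above is just a one-line config change. A minimal sketch selecting a fast low-cost model (the provider_id value and env-var name here are assumptions, following the config shape shown at the end of this page):

```toml
[model]
provider_id = "openai"          # assumption: id for the OpenAI provider
model = "gpt-4.1-mini"          # fast low-cost pick from the tips above
api_key = "${OPENAI_API_KEY}"   # assumption: env-var name
```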

Other Supported Providers

Savfox works with any OpenAI-compatible API endpoint. Popular providers include:

Provider       Provider ID     Notes
xAI (Grok)     xai             https://api.x.ai/v1
Mistral        mistral         https://api.mistral.ai/v1
OpenRouter     openrouter      Aggregator with 100+ models
Together AI    togetherai      Open-source model hosting
Fireworks AI   fireworks-ai    Fast inference
Cerebras       cerebras        Ultra-fast inference
Perplexity     perplexity      Search-augmented models
SiliconFlow    siliconflow     Chinese provider
Moonshot AI    moonshotai      Kimi models
Volcengine     volcengine      ByteDance cloud
ZhipuAI        zhipuai         GLM models
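Because every provider in this table speaks the same OpenAI-compatible wire protocol, one request shape works against any of their base URLs. A minimal sketch using only the standard library, with the xAI row's URL and model as the example (the env-var name is an assumption):

```python
import json
import os
import urllib.request

# Any OpenAI-compatible provider exposes POST {base_url}/chat/completions.
base_url = "https://api.x.ai/v1"
payload = {
    "model": "grok-3",
    "messages": [{"role": "user", "content": "Hello"}],
}
request = urllib.request.Request(
    f"{base_url}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        # Assumption: the key lives in the XAI_API_KEY environment variable.
        "Authorization": f"Bearer {os.environ.get('XAI_API_KEY', '')}",
    },
)
# urllib.request.urlopen(request) would send it; the response JSON carries
# the reply under choices[0].message.content for every provider listed here.
```

Swapping providers means changing only base_url, the model name, and the key; the payload and response shapes stay the same.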

Configure any provider by setting provider_id, a model name, and an API key in your config; set base_url only when the endpoint differs from the provider's default:

[model]
provider_id = "xai"
model = "grok-3"
api_key = "${XAI_API_KEY}"
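The same shape also covers local endpoints via base_url. For example, Ollama serves an OpenAI-compatible API at http://localhost:11434/v1; a sketch assuming "ollama" is the provider id:

```toml
[model]
provider_id = "ollama"                   # assumption: id for the local provider
model = "llama3"                         # any model pulled locally
base_url = "http://localhost:11434/v1"   # Ollama's OpenAI-compatible endpoint
# no api_key needed for a local server
```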