Ollama (Local LLMs)

Run Mudabbir entirely on your machine with Ollama — no API keys, no cloud, no costs. Both the Claude Agent SDK and Mudabbir Native backends work with Ollama out of the box.

How It Works

Since v0.14.0, Ollama has exposed an Anthropic Messages API-compatible endpoint. Mudabbir points the same AsyncAnthropic client (or Claude SDK subprocess) at your local Ollama server instead of Anthropic’s cloud: same tool format, same streaming, zero format conversion.
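
As a minimal Python sketch of the idea (assuming Ollama 0.14.0+ is serving on the default port and qwen2.5:7b has been pulled; this is illustrative, not Mudabbir's actual code):

import asyncio
from anthropic import AsyncAnthropic

# The stock Anthropic client, pointed at the local Ollama server.
client = AsyncAnthropic(
    base_url="http://localhost:11434",  # your Ollama host instead of Anthropic's cloud
    api_key="ollama",                   # placeholder; Ollama does not validate it
)

async def main():
    reply = await client.messages.create(
        model="qwen2.5:7b",             # any locally pulled model
        max_tokens=256,
        messages=[{"role": "user", "content": "Say hello in one sentence."}],
    )
    print(reply.content[0].text)

asyncio.run(main())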

Prerequisites

Install Ollama

Terminal window
curl -fsSL https://ollama.com/install.sh | sh

Pull a model

Terminal window
ollama pull qwen2.5:7b # Good balance of speed and quality
# or
ollama pull llama3.2 # Default model

Start Ollama

Terminal window
ollama serve

Quick Start

Terminal window
export MUDABBIR_LLM_PROVIDER=ollama
export MUDABBIR_OLLAMA_MODEL=qwen2.5:7b
mudabbir

Alternatively, edit ~/.mudabbir/config.json:

{
"llm_provider": "ollama",
"ollama_host": "http://localhost:11434",
"ollama_model": "qwen2.5:7b"
}

Or open the web dashboard and go to Settings → General:

  1. Set LLM Provider to Ollama
  2. Set Ollama Host (defaults to http://localhost:11434)
  3. Set Ollama Model to the model you pulled (e.g., qwen2.5:7b, deepseek-r1:8b)
Warning

The Ollama Host and Ollama Model fields only appear when LLM Provider is set to Ollama. Make sure to set the model name to match what you have installed — run ollama list to check.

Verify Setup

Run the built-in connectivity check:

Terminal window
mudabbir --check-ollama

This performs 4 checks:

| Check | What it tests |
| --- | --- |
| Server reachable | Pings {ollama_host}/api/tags |
| Model available | Verifies configured model is pulled locally |
| Messages API | Tests Anthropic-compatible completion endpoint |
| Tool calling | Sends a dummy tool and checks the model uses it |
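
If you prefer to probe the server yourself, here is a rough Python sketch of the first two checks (server reachable, model available) using Ollama's /api/tags endpoint; it is illustrative only, not the implementation behind --check-ollama:

import json
import urllib.request

OLLAMA_HOST = "http://localhost:11434"  # ollama_host setting
MODEL = "qwen2.5:7b"                    # ollama_model setting

# Server reachable: GET {ollama_host}/api/tags lists the locally pulled models.
with urllib.request.urlopen(f"{OLLAMA_HOST}/api/tags", timeout=5) as resp:
    tags = json.load(resp)

installed = [m["name"] for m in tags.get("models", [])]
print("Server reachable: yes")
# Model available: Ollama appends ":latest" when no tag is given, so match on prefix.
print("Model available:", any(name.startswith(MODEL) for name in installed))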

Configuration

| Setting | Env Var | Default | Description |
| --- | --- | --- | --- |
| llm_provider | MUDABBIR_LLM_PROVIDER | "auto" | Set to "ollama" for explicit Ollama usage |
| ollama_host | MUDABBIR_OLLAMA_HOST | "http://localhost:11434" | Ollama server URL |
| ollama_model | MUDABBIR_OLLAMA_MODEL | "llama3.2" | Model to use |
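
As an illustration of how these three settings fit together, the sketch below resolves them with environment variables taking precedence over ~/.mudabbir/config.json; the precedence rule and the helper name are assumptions for the sketch, not Mudabbir's actual loader:

import json
import os
from pathlib import Path

def load_ollama_settings() -> dict:
    # Hypothetical helper: read config.json if present, then apply env overrides.
    cfg = {}
    path = Path.home() / ".mudabbir" / "config.json"
    if path.exists():
        cfg = json.loads(path.read_text())
    return {
        "llm_provider": os.getenv("MUDABBIR_LLM_PROVIDER", cfg.get("llm_provider", "auto")),
        "ollama_host": os.getenv("MUDABBIR_OLLAMA_HOST", cfg.get("ollama_host", "http://localhost:11434")),
        "ollama_model": os.getenv("MUDABBIR_OLLAMA_MODEL", cfg.get("ollama_model", "llama3.2")),
    }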

Auto-Detection

When llm_provider is "auto" (the default):

  1. If anthropic_api_key is set → uses Anthropic
  2. If no API key is set → falls back to Ollama automatically

This means if you install Mudabbir and Ollama without any API keys, it just works.
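
The rule is small enough to sketch in a few lines of Python; resolve_provider is a hypothetical helper used here only to illustrate the behavior described above:

def resolve_provider(llm_provider: str, anthropic_api_key: str | None) -> str:
    if llm_provider != "auto":
        return llm_provider        # explicit "ollama" or "anthropic" wins
    if anthropic_api_key:
        return "anthropic"         # API key present: use the cloud
    return "ollama"                # no key: fall back to local Ollama

assert resolve_provider("auto", None) == "ollama"
assert resolve_provider("auto", "sk-ant-...") == "anthropic"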

Compatible Backends

| Backend | Ollama Support | How |
| --- | --- | --- |
| Claude Agent SDK | Yes | Sets ANTHROPIC_BASE_URL env var for the SDK subprocess |
| Mudabbir Native | Yes | Uses AsyncAnthropic(base_url=ollama_host) directly |
| Open Interpreter | Yes (existing) | Has its own Ollama integration via OI_MODEL |

Claude Agent SDK + Ollama

The default backend. The SDK subprocess receives these environment variables:

  • ANTHROPIC_BASE_URL → your Ollama host
  • ANTHROPIC_API_KEY → "ollama" (accepted but not validated)

All SDK built-in tools (Bash, Read, Write, Edit, Glob, Grep, WebSearch, WebFetch) work as usual.
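
Conceptually, the environment handed to the subprocess looks like the sketch below (assuming a copied parent environment; the exact wiring lives in agents/claude_sdk.py and may differ):

import os

def ollama_sdk_env(ollama_host: str) -> dict[str, str]:
    # Hypothetical helper: copy the parent environment and redirect the SDK to Ollama.
    env = os.environ.copy()
    env["ANTHROPIC_BASE_URL"] = ollama_host  # e.g. "http://localhost:11434"
    env["ANTHROPIC_API_KEY"] = "ollama"      # accepted but not validated
    return env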

Mudabbir Native + Ollama

The custom orchestrator creates AsyncAnthropic(base_url=ollama_host) and sends tool definitions in standard Anthropic format. The model receives tool schemas and returns tool_use blocks just like Claude.
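
A hedged sketch of that round trip, with a hypothetical run_shell tool (the actual tool set and prompts are Mudabbir's own):

import asyncio
from anthropic import AsyncAnthropic

client = AsyncAnthropic(base_url="http://localhost:11434", api_key="ollama")

# Standard Anthropic tool definition; this schema is what the local model sees.
tools = [{
    "name": "run_shell",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

async def main():
    response = await client.messages.create(
        model="qwen2.5:7b",
        max_tokens=512,
        tools=tools,
        messages=[{"role": "user", "content": "List the files in the current directory."}],
    )
    # A capable model answers with a tool_use block, exactly like Claude would.
    for block in response.content:
        if block.type == "tool_use":
            print("tool:", block.name, "input:", block.input)

asyncio.run(main())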

Recommended models:

| Model | Size | Tool Calling | Notes |
| --- | --- | --- | --- |
| qwen2.5:7b | 4.7 GB | Good | Best balance for most users |
| qwen2.5:14b | 9 GB | Better | More reliable tool use |
| llama3.2 | 2 GB | Fair | Fast, lightweight |
| mistral:7b | 4.1 GB | Good | Strong reasoning |
| deepseek-r1:8b | 4.9 GB | Good | Strong at coding tasks |

Limitations

  • Smart Model Router is skipped — When using Ollama, the Model Router cannot switch between models. Smart routing is automatically disabled.
  • Tool calling quality varies — Smaller models may not use tools reliably. If tools aren’t being called, try a larger model.
  • Ollama v0.14.0+ required — Older versions don’t expose the Anthropic Messages API endpoint.

Error Messages

Mudabbir provides Ollama-specific error messages instead of generic API errors:

| Error | Meaning | Fix |
| --- | --- | --- |
| Model ‘X’ not found in Ollama | The configured model isn’t pulled locally | Run ollama pull <model> or change the model in Settings → General → Ollama Model |
| Ollama error: connection refused | Ollama server isn’t running | Run ollama serve |
| Cannot connect to Ollama | Wrong host or Ollama is down | Check Ollama Host in Settings matches where Ollama is running |
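
The translation is roughly what the sketch below does: it maps the Anthropic client's generic exceptions onto the messages in the table (illustrative only; friendly_ollama_error is not a real Mudabbir function):

import anthropic

def friendly_ollama_error(exc: Exception, model: str, host: str) -> str:
    # Hypothetical mapping from generic client errors to Ollama-specific hints.
    if isinstance(exc, anthropic.NotFoundError):
        return f"Model '{model}' not found in Ollama. Run: ollama pull {model}"
    if isinstance(exc, anthropic.APIConnectionError):
        return f"Cannot connect to Ollama at {host}. Is 'ollama serve' running?"
    return f"Ollama error: {exc}"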

Troubleshooting

“Model not found in Ollama”

This means Mudabbir is trying to use a model you haven’t pulled. The default model is llama3.2 — if you use a different model (e.g., deepseek), make sure to update the setting:

Terminal window
# Check what models you have
ollama list
# Set the correct model
export MUDABBIR_OLLAMA_MODEL="deepseek-r1:8b"

Or update it in the dashboard: Settings → General → Ollama Model.

“Cannot reach Ollama server”

Terminal window
# Check Ollama is running
ollama serve
# Verify it's listening
curl http://localhost:11434/api/tags

“Messages API failed”

Your Ollama version may be too old. Update:

Terminal window
# macOS
brew upgrade ollama
# Linux
curl -fsSL https://ollama.com/install.sh | sh

“Model responded but did not use the tool”

Try a more capable model:

Terminal window
ollama pull qwen2.5:14b

Then set ollama_model to qwen2.5:14b in Settings or config.

Implementation

| File | Description |
| --- | --- |
| agents/mudabbir_native.py | Ollama branch in _initialize(), model selection in chat() |
| agents/claude_sdk.py | Ollama env vars passed to SDK subprocess in chat() |
| agents/router.py | Ollama detection logging in _initialize_agent() |
| __main__.py | --check-ollama CLI command |
| tests/test_ollama_agent.py | 18 tests covering both backends |