Install¶
Choose one install path, then verify the binary is on your PATH.
Homebrew is the shortest path on macOS and on Linux x86_64 systems that already use Homebrew:
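A minimal sketch of the Homebrew path, assuming the formula is published under the name kbolt (check the project README for the exact tap or formula name):

```sh
brew install kbolt
```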
The Homebrew formula installs llama.cpp as a required dependency, so this path also provides llama-server for kbolt setup local.
Use Cargo when you want the CLI directly from crates.io or when you are on Windows:
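A sketch of the Cargo path, assuming the crate is published on crates.io as kbolt:

```sh
cargo install kbolt
```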
If cargo install succeeds but kbolt is still not found, make sure ~/.cargo/bin is on your PATH.
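For example, in a POSIX shell profile (cargo install places binaries in ~/.cargo/bin by default):

```sh
# Add Cargo's binary directory to PATH for the current shell.
export PATH="$HOME/.cargo/bin:$PATH"
```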
Cargo installs only kbolt. Install llama-server separately if you plan to use kbolt setup local.
Prebuilt release archives are published on GitHub Releases.
Current release archives are built for:
- Linux x86_64
- macOS x86_64
- macOS aarch64
If you need Windows today, use cargo install.
Release archives install only kbolt. Install llama-server separately if you plan to use kbolt setup local.
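A minimal download-and-install sketch for Linux x86_64; the repository owner and archive name below are placeholders, so copy the exact asset URL from the Releases page:

```sh
# <owner> and the archive name are hypothetical; check GitHub Releases for the real asset.
curl -LO https://github.com/<owner>/kbolt/releases/latest/download/kbolt-x86_64-unknown-linux-gnu.tar.gz
tar -xzf kbolt-x86_64-unknown-linux-gnu.tar.gz
mkdir -p ~/.local/bin
install -m 755 kbolt ~/.local/bin/kbolt
```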
Install llama-server when needed¶
llama-server is required only if you plan to use kbolt setup local.
Homebrew installs it through the llama.cpp dependency. Cargo and GitHub Releases installs do not.
For non-Homebrew installs, follow the official llama.cpp install guide, then verify the binary is available:
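```sh
# Confirm llama-server resolves on PATH.
command -v llama-server
# --version prints llama.cpp build info.
llama-server --version
```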
If you plan to use only remote OpenAI-compatible providers, llama-server is not required.
Verify the install¶
Run a version check; kbolt is assumed here to expose the conventional --version flag:
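```sh
kbolt --version
```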
Then check the command surface; --help is assumed to list the available subcommands:
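```sh
kbolt --help
```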
Next steps¶
- For the default local setup path, continue to Quickstart.
- For platform-specific support, see Platform support.
- If you need installation recovery steps, see Troubleshooting.