
Opencode

install

curl -fsSL https://opencode.ai/install | bash

configure custom llama.cpp server

Opencode supports any OpenAI-compatible API. Set the following environment variables to point it at your llama.cpp server. The API key can stay empty: llama.cpp does not require authentication unless the server was started with --api-key.

export OPENAI_API_KEY=""
export OPENAI_BASE_URL="http://driveripper.reeselink.com:8000/v1"
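For reference, here is a minimal Python sketch of how an OpenAI-compatible client derives its request routes from that base URL; llama.cpp serves the standard /models and /chat/completions endpoints under /v1. The fallback URL below simply mirrors the export above.

```python
import os

def endpoints(base_url: str) -> dict:
    """Derive the standard OpenAI-compatible routes from a base URL."""
    base = base_url.rstrip("/")
    return {
        "models": f"{base}/models",          # GET: list served models
        "chat": f"{base}/chat/completions",  # POST: chat requests
    }

# Read the same variable the export above sets; the fallback is the
# llama.cpp server named in this doc.
base = os.environ.get("OPENAI_BASE_URL", "http://driveripper.reeselink.com:8000/v1")
print(endpoints(base))
```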

persist across sessions

Add the exports to your shell profile (~/.bashrc, ~/.zshrc, etc.):

echo 'export OPENAI_API_KEY=""' >> ~/.bashrc
echo 'export OPENAI_BASE_URL="http://driveripper.reeselink.com:8000/v1"' >> ~/.bashrc
source ~/.bashrc
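Note that a bare >> appends a duplicate line every time it runs. A guarded variant (a sketch, assuming bash and that ~/.bashrc is the profile in use) only appends each export when it is missing:

```shell
# Append each export only if it is not already present (idempotent).
PROFILE="$HOME/.bashrc"
for line in \
  'export OPENAI_API_KEY=""' \
  'export OPENAI_BASE_URL="http://driveripper.reeselink.com:8000/v1"'
do
  grep -qxF "$line" "$PROFILE" 2>/dev/null || echo "$line" >> "$PROFILE"
done
```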

pick a model

After configuring the environment, launch opencode and select a model served by your llama.cpp instance:

opencode

Inside opencode, use /model to list available models and switch between them.

verify the connection

Query the models endpoint directly to confirm the server is reachable:

curl "$OPENAI_BASE_URL/models"

A JSON response listing the served models confirms the endpoint is up. Note that opencode --help only prints usage text and does not contact the server, so it is not a connectivity test.
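The same check can be scripted. A small stdlib sketch (assumes Python 3 on the client; the URL is the llama.cpp server from this doc) that queries the models endpoint and reports reachability:

```python
import urllib.request

def reachable(base_url: str, timeout: float = 5.0) -> bool:
    """Return True if the server's /models endpoint answers with HTTP 200."""
    url = base_url.rstrip("/") + "/models"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, refused connection, or timeout
        return False

print(reachable("http://driveripper.reeselink.com:8000/v1"))
```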