# Opencode
## install
```bash
curl -fsSL https://opencode.ai/install | bash
```
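After the script finishes, confirm the binary is actually reachable from your shell. This is a plain `command -v` check, nothing opencode-specific:

```shell
# Report whether the opencode binary is visible on PATH after installation.
if command -v opencode >/dev/null 2>&1; then
  echo "opencode found at $(command -v opencode)"
else
  echo "opencode not on PATH yet -- open a new shell so the installer's PATH change takes effect"
fi
```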
## configure custom llama.cpp server
Opencode supports any OpenAI-compatible API. Set the following environment variables to point it at your llama.cpp server:
```bash
export OPENAI_API_KEY=""
export OPENAI_BASE_URL="http://driveripper.reeselink.com:8000/v1"
```
### persist across sessions
Add the exports to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):
```bash
echo 'export OPENAI_API_KEY=""' >> ~/.bashrc
echo 'export OPENAI_BASE_URL="http://driveripper.reeselink.com:8000/v1"' >> ~/.bashrc
source ~/.bashrc
```
### pick a model
After configuring the environment, launch opencode and select a model served by your llama.cpp instance:
```bash
opencode
```
Inside opencode, use `/model` to list available models and switch between them.
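The server side of that list is the `/v1/models` endpoint of the OpenAI-compatible API, which llama.cpp's server exposes. Querying it directly with `curl` shows what the server advertises, independent of opencode (hostname as configured above; `--max-time` added so the command fails fast if the box is down):

```shell
# Ask the server which models it serves.
curl -s --max-time 5 "${OPENAI_BASE_URL:-http://driveripper.reeselink.com:8000/v1}/models" \
  || echo "server not reachable from this machine"
```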
### verify the connection
Run this one-liner to confirm opencode starts with the environment configured:
```bash
OPENAI_API_KEY="" OPENAI_BASE_URL="http://driveripper.reeselink.com:8000/v1" opencode --help
```
If no errors appear, opencode picks up the variables. Note that `--help` exits before contacting the server, so it verifies the CLI and environment rather than network reachability.
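To exercise the endpoint itself, a direct `curl` against the chat completions route works without opencode in the loop. The model name `"default"` below is a placeholder; substitute one your server actually reports:

```shell
# Minimal chat completion request straight at the llama.cpp server.
# "default" is a placeholder model name -- use one from the server's model list.
curl -s --max-time 10 "http://driveripper.reeselink.com:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "default", "messages": [{"role": "user", "content": "ping"}], "max_tokens": 8}' \
  || echo "server not reachable"
```

A JSON response containing a `choices` array means the server is up and serving completions.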