Compare commits

...

11 Commits

Author SHA1 Message Date
f359a64218 init agentic server monitor scripts 2026-04-06 11:46:51 -04:00
3bc92c5889 last updates to wyoming before deprecation 2026-04-06 11:46:21 -04:00
430be75fab updates to fedora 43 image builder 2026-04-06 11:45:59 -04:00
25e812ab55 clean up k3s for new deployment 2026-04-06 11:45:24 -04:00
67b644005a add firewalld log following 2026-04-06 11:45:06 -04:00
9eb79d34f1 add vllm notes 2026-04-06 11:44:53 -04:00
9776f8ed9f update metallb to work for kube 1.35+ 2026-04-06 11:44:45 -04:00
57ec92fc5d add kubectl config set context namespace note 2026-04-06 11:44:28 -04:00
10786dead3 add roku list tv apps 2026-04-06 11:43:52 -04:00
a2be3dc1ea add launch.json 2026-04-06 11:43:40 -04:00
b78c205c9a update driveripper vm docs to fedora 43 2026-04-06 11:43:29 -04:00
16 changed files with 475 additions and 66 deletions

.vscode/launch.json vendored Normal file

@@ -0,0 +1,15 @@
{
    // Use IntelliSense to learn about possible attributes.
    // Hover to view descriptions of existing attributes.
    // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python Debugger: Current File",
            "type": "debugpy",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal"
        }
    ]
}


@@ -52,21 +52,21 @@ Default user: `ducoterra`
 Default password: `osbuild`
 
 - [ ] `passwd ducoterra`
-- [ ] `hostnamectl hostname <hostname>`
 - [ ] Updates
+- [ ] `hostnamectl hostname <hostname>`
 - [ ] Static IP and DNS address
 
 ```bash
 # Convert the build to raw
 qemu-img convert -f qcow2 -O raw \
-  /srv/smb/pool0/ducoterra/images/builds/fedora-43-base.qcow2 \
+  /srv/smb/pool0/ducoterra/images/builds/fedora43-base.qcow2 \
   /srv/vm/pool1/fedora-boot.raw
 
 # Install (Change password for default user ducoterra!)
 virt-install \
   --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
   --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
-  --ram=8192 \
+  --ram=4096 \
   --os-variant=fedora41 \
   --network bridge:bridge0 \
   --graphics none \
@@ -135,7 +135,7 @@ virt-install \
   --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
   --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
   --ram=8192 \
-  --os-variant=fedora41 \
+  --os-variant=fedora43 \
   --network bridge:bridge0 \
   --graphics none \
   --console pty,target.type=virtio \


@@ -25,6 +25,8 @@
 - [LG TV Switch](#lg-tv-switch)
 - [Raspberry Pi Docker](#raspberry-pi-docker)
 - [Extended OpenAI Conversation](#extended-openai-conversation)
+- [Templates](#templates)
+- [List Roku TV Apps](#list-roku-tv-apps)
 
 ## Certificates
@@ -750,3 +752,48 @@ Setting up a new llama.cpp agent:
 | API Provider | OpenAI |
 
 Model Name: `ggml-org/gpt-oss-120b-GGUF`
+
+## Templates
+
+### List Roku TV Apps
+
+```yaml
+{% set tv_apps = state_attr("media_player.right_living_room", "source_list") %}
+{% for app in tv_apps -%}
+"{{ app }}"
+{% endfor %}
+```
+
+For the extended openai conversation template:
+
+```yaml
+- spec:
+    name: set_tv_app
+    description: |-
+      Use this function to open an app on a tv media player.
+    parameters:
+      type: object
+      properties:
+        entity_id:
+          type: string
+          description: entity_id of the TV Media Player
+        source:
+          type: string
+          description: The app you want to open
+      required:
+        - entity_id
+        - source
+  function:
+    type: composite
+    sequence:
+      - type: script
+        sequence:
+          - service: media_player.select_source
+            target:
+              entity_id: "{{ entity_id }}"
+            data:
+              source: "{{ source }}"
+```


@@ -11,3 +11,10 @@ helm: <https://helm.sh/docs/intro/install/>
 For k3s, see [k3s](/active/software_k3s/k3s.md)
 For k0s, see [k0s](/active/software_k0s/k0s.md)
 
+## Notes
+
+```bash
+# Quickly set a new namespace
+kubectl config set contexts.default.namespace metallb-system
+```
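
To confirm the default namespace actually changed, a quick check; the `set-context` form is the equivalent built-in helper, and neither line comes from the notes above:

```bash
# Show the namespace recorded for the current context
kubectl config view --minify | grep namespace
# Equivalent helper for setting it
kubectl config set-context --current --namespace=metallb-system
```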


@@ -2,18 +2,18 @@
 apiVersion: metallb.io/v1beta1
 kind: IPAddressPool
 metadata:
-  name: unifi-pool
-  namespace: kube-system
+  name: default-pool
+  namespace: metallb-system
 spec:
   addresses:
-  - 2603:6013:3140:105:10:5:0:10-2603:6013:3140:105:10:5:0:210
-  - 10.5.0.10-10.5.0.210
+  - 10.4.1.1-10.4.3.254
 ---
 apiVersion: metallb.io/v1beta1
 kind: L2Advertisement
 metadata:
   name: l2advertisement
-  namespace: kube-system
+  namespace: metallb-system
 spec:
   ipAddressPools:
-  - unifi-pool
+  - default-pool


@@ -15,33 +15,31 @@ spec:
         app.kubernetes.io/name: ingress-nginx-demo-1
     spec:
       containers:
       - name: httpd
         image: httpd
         ports:
         - containerPort: 80
           name: http
         resources:
           requests:
             memory: "100Mi"
             cpu: "1m"
           limits:
             memory: "256Mi"
             cpu: "1"
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: ingress-nginx-demo-1
   namespace: default
   annotations:
-    metallb.universe.tf/address-pool: "unifi-pool"
+    metallb.universe.tf/address-pool: "default-pool"
 spec:
   ipFamilyPolicy: PreferDualStack
   ipFamilies:
-  - IPv6
-  - IPv4
+  - IPv4
   type: LoadBalancer
   ports:
   - name: http


@@ -0,0 +1,43 @@
# Metal LB
## Install
```bash
# Set a manual IP if you have a no-dhcp network
nmcli connection modify "Wired connection 2" \
ipv4.method manual \
ipv4.gateway 10.4.0.1 \
ipv4.addresses 10.4.0.3/22
# Clear it if you have a dhcp network
nmcli connection modify "Wired connection 2" \
ipv4.addresses "" \
ipv4.gateway "" \
ipv4.method auto
# Bring up the interface
nmcli connection up "Wired connection 2"
# Create the metallb namespace
kubectl apply -f active/kubernetes_metallb/namespace.yaml
# Install metallb
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
--namespace metallb-system \
metallb/metallb
# Check that the pods installed correctly
kubectl get pod -n metallb-system
# Install the address pool
# NOTE: Edit this to match your own allocated addresses!
kubectl apply -f active/kubernetes_metallb/addresspool.yaml
# Test that the service is working. This will spin up a web server on port 8001
kubectl apply -f active/kubernetes_metallb/metallb-test.yaml
# Delete the test
kubectl delete -f active/kubernetes_metallb/metallb-test.yaml
```
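
To confirm MetalLB actually handed out an address after applying the test manifest, a minimal check; the service name comes from metallb-test.yaml, and the jsonpath line assumes an IPv4 assignment from your pool:

```bash
# EXTERNAL-IP should come from the pool defined in addresspool.yaml
kubectl get svc ingress-nginx-demo-1 -n default
# Fetch the default httpd page through the allocated address
curl "http://$(kubectl get svc ingress-nginx-demo-1 -n default -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
```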


@@ -0,0 +1,8 @@
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged


@@ -45,7 +45,10 @@
 - [Benchmark Results](#benchmark-results)
 - [Testing with Curl](#testing-with-curl)
 - [OpenAI API](#openai-api)
+- [VLLM](#vllm)
+- [Run VLLM with Podman](#run-vllm-with-podman)
 - [Misc](#misc)
+- [Quantizing your own Models](#quantizing-your-own-models)
 - [Qwen3.5 Settings](#qwen35-settings)
 
 ## Notes
@@ -215,6 +218,11 @@ hf download --local-dir . ggml-org/Ministral-3-3B-Instruct-2512-GGUF
 ##### Qwen
 
 ```bash
+# qwen3.5-27b-opus
+mkdir qwen3.5-27b-opus && cd qwen3.5-27b-opus
+hf download --local-dir . Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF Qwen3.5-27B.Q4_K_M.gguf
+hf download --local-dir . Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled-GGUF mmproj-BF16.gguf
+
 # qwen3.5-4b
 mkdir qwen3.5-4b && cd qwen3.5-4b
 hf download --local-dir . unsloth/Qwen3.5-4B-GGUF Qwen3.5-4B-Q8_0.gguf
@@ -264,6 +272,17 @@ hf download --local-dir . unsloth/GLM-4.7-Flash-GGUF GLM-4.7-Flash-Q8_0.gguf
 ```bash
 # Note "it" vs "pt" suffixes. "it" is instruction following, "pt" is the base model (not as good for out-of-the-box use)
+# gemma-4-26b-a4b
+mkdir gemma-4-26b-a4b && cd gemma-4-26b-a4b
+hf download --local-dir . ggml-org/gemma-4-26B-A4B-it-GGUF gemma-4-26B-A4B-it-Q8_0.gguf
+hf download --local-dir . ggml-org/gemma-4-26B-A4B-it-GGUF mmproj-gemma-4-26B-A4B-it-f16.gguf
+
+# gemma-4-31b
+mkdir gemma-4-31b && cd gemma-4-31b
+hf download --local-dir . ggml-org/gemma-4-31B-it-GGUF gemma-4-31B-it-Q8_0.gguf
+hf download --local-dir . ggml-org/gemma-4-31B-it-GGUF mmproj-gemma-4-31B-it-f16.gguf
+
 # gemma-3-27b-it
 mkdir gemma-3-27b-it && cd gemma-3-27b-it
 hf download --local-dir . unsloth/gemma-3-27b-it-GGUF gemma-3-27b-it-Q8_0.gguf
@@ -353,7 +372,7 @@ podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t ll
 # ROCM
 podman build -f .devops/rocm.Dockerfile -t llama-cpp-rocm:${BUILD_TAG} -t llama-cpp-rocm:latest .
 
-# Run llama demo server (Available on port 8000)
+# Run llama demo server (Available on port 8010)
 podman run \
   --rm \
   --name llama-server-demo \
@@ -361,10 +380,11 @@ podman run \
   --device=/dev/dri \
   -v /home/ai/models/text:/models:z \
   -p 8010:8000 \
+  --ipc host \
   localhost/llama-cpp-vulkan:latest \
   --host 0.0.0.0 \
   --port 8000 \
-  -c 16000 \
+  -c 128000 \
   --perf \
   --n-gpu-layers all \
   --jinja \
@@ -756,8 +776,64 @@ curl \
 }'
 ```
 
+## VLLM
+
+### Run VLLM with Podman
+
+```bash
+# 'latest' and 'nightly' are both viable tags
+podman run --rm \
+  --device /dev/kfd \
+  --device /dev/dri \
+  -v ~/.cache/huggingface:/root/.cache/huggingface:z \
+  --env "HF_TOKEN=$HF_TOKEN" \
+  -p 8010:8000 \
+  --ipc=host \
+  docker.io/vllm/vllm-openai-rocm:nightly \
+  --enable-offline-docs \
+  Qwen/Qwen3.5-35B-A3B --max-model-len 262144 --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser qwen3_coder
+
+# Other models to substitute for the final line above:
+#   Qwen/Qwen3.5-9B --max-model-len 262144 --reasoning-parser qwen3 --enable-auto-tool-choice --tool-call-parser qwen3_coder
+#   Qwen/Qwen3.5-35B-A3B-FP8
+#   google/gemma-4-26B-A4B-it
+#   openai/gpt-oss-120b
+```
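
Once the container is up, a quick sanity check is to hit the OpenAI-compatible API on the published port. This is a minimal sketch assuming the Qwen3.5-35B-A3B invocation above; the model field has to match whatever vLLM was started with:

```bash
# List the models the server is exposing
curl http://localhost:8010/v1/models
# Minimal chat completion against the same endpoint
curl http://localhost:8010/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "Qwen/Qwen3.5-35B-A3B", "messages": [{"role": "user", "content": "Say hello"}]}'
```
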
 ## Misc
 
+### Quantizing your own Models
+
+```bash
+# Create a scratch dir for downloading models
+mkdir scratch && cd scratch
+
+# qwen 3.5 35b
+mkdir qwen3.5-35b-a3b && cd qwen3.5-35b-a3b
+hf download --local-dir . Qwen/Qwen3.5-35B-A3B
+
+# nemotron cascade
+mkdir nemotron-cascade-2-30b-a3b && cd nemotron-cascade-2-30b-a3b
+hf download --local-dir . nvidia/Nemotron-Cascade-2-30B-A3B
+
+# Run the full llama.cpp image
+podman run -it --rm \
+  --device=/dev/kfd \
+  --device=/dev/dri \
+  -v $(pwd):/models:z \
+  --entrypoint /bin/bash \
+  ghcr.io/ggml-org/llama.cpp:full-vulkan
+
+# Run ./llama-quantize to see available quants
+# 7 = q_8
+# 18 = q_6_k
+# 17 = q_5_k
+# 15 = q_4_k
+./llama-quantize /models/$MODEL_NAME.gguf /models/$MODEL_NAME-Q6_K.gguf 18
+./llama-quantize /models/$MODEL_NAME.gguf /models/$MODEL_NAME-Q8_0.gguf 7
+```
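
The scratch downloads above are full Hugging Face checkpoints, so they need to be converted to GGUF before `./llama-quantize` can read them. A sketch of that step using the conversion script bundled with llama.cpp, run from the same container session; the script path and output names are assumptions, not taken from the notes:

```bash
# Convert the safetensors checkpoint directory to a single GGUF file
python3 /app/convert_hf_to_gguf.py /models/qwen3.5-35b-a3b \
  --outfile /models/qwen3.5-35b-a3b.gguf --outtype f16
# The result is what $MODEL_NAME.gguf refers to in the quantize commands above
```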
 
 ### Qwen3.5 Settings
 
 > We recommend using the following set of sampling parameters for generation


@@ -23,6 +23,10 @@ firewall-cmd --info-service=samba
 # Get zone information
 firewall-cmd --info-zone=drop
+
+# Logging
+firewall-cmd --set-log-denied=all
+dmesg --follow | egrep -i 'REJECT|DROP'
 ```
 
 ## Inspecting Zones
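
Denied-packet logging is noisy, so it is worth turning back off once you are done watching; this uses the standard firewalld option and is not part of the change above:

```bash
firewall-cmd --set-log-denied=off
```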


@@ -3,10 +3,12 @@
 - [K3S](#k3s)
 - [Guide](#guide)
 - [Firewalld](#firewalld)
-- [Set SELinux to Permissive](#set-selinux-to-permissive)
-- [Install K3S (Single Node)](#install-k3s-single-node)
+- [SELinux](#selinux)
+- [Install Single Node K3S](#install-single-node-k3s)
 - [Dual Stack IPv6 Support](#dual-stack-ipv6-support)
 - [Single Stack IPv4](#single-stack-ipv4)
+- [Install Multi Node K3S](#install-multi-node-k3s)
+- [Network Checks](#network-checks)
 - [Kube Credentials](#kube-credentials)
 - [Metal LB](#metal-lb)
 - [VLAN Setup](#vlan-setup)
@@ -35,25 +37,49 @@
 ## Firewalld
 
 ```bash
-firewall-cmd --permanent --zone=public --add-port=6443/tcp # apiserver
-firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 # pods
-firewall-cmd --permanent --zone=trusted --add-source=fd02:c91e:56f4::/56 # pods
-firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 # services
-firewall-cmd --permanent --zone=trusted --add-source=fd02:c91e:56f5::/112 # services
+# All required ports (https://docs.k3s.io/installation/requirements?_highlight=ports#local-ports)
+firewall-cmd \
+  --permanent \
+  --zone=public \
+  --add-port=80/tcp \
+  --add-port=443/tcp \
+  --add-port=2379-2380/tcp \
+  --add-port=6443/tcp \
+  --add-port=8472/udp \
+  --add-port=10250/tcp
+
+# IPv4 config
+# 10.42 is for pods
+# 10.43 is for services
+firewall-cmd \
+  --permanent \
+  --zone=trusted \
+  --add-source=10.42.0.0/16 \
+  --add-source=10.43.0.0/16
+
+# [Optional] IPv6 config
+# fd02:c91e:56f4 is for pods
+# fd02:c91e:56f5 is for services
+firewall-cmd \
+  --permanent \
+  --zone=trusted \
+  --add-source=fd02:c91e:56f4::/56 \
+  --add-source=fd02:c91e:56f5::/112
+
 firewall-cmd --reload
 ```
 
-## Set SELinux to Permissive
+## SELinux
 
 Make sure to add `--selinux` to your install script.
 
-## Install K3S (Single Node)
+## Install Single Node K3S
 
 ### Dual Stack IPv6 Support
 
 ```bash
 curl -sfL https://get.k3s.io | sh -s - \
+  --selinux \
   "--disable" \
   "traefik" \
   "--disable" \
@@ -67,8 +93,7 @@ curl -sfL https://get.k3s.io | sh -s - \
   "--service-cidr" \
   "10.43.0.0/16,fd02:c91e:56f5::/112" \
   "--cluster-dns" \
-  "fd02:c91e:56f5::10" \
-  --selinux
+  "fd02:c91e:56f5::10"
 ```
 
 ### Single Stack IPv4
@@ -84,16 +109,67 @@ curl -sfL https://get.k3s.io | sh -s - \
   --selinux
 ```
 
+## Install Multi Node K3S
+
+TODO: haproxy (<https://docs.k3s.io/blog/2025/03/10/simple-ha?_highlight=tls&_highlight=san#load-balancer>)
+
+Load balance a single registration point across all active nodes.
+
+```bash
+# Generate a shared token for joining nodes
+# Copy this token to each node at ~/.k3s-token
+pwgen --capitalize --numerals --secure 64 1 > ~/.k3s-token
+
+# Create the first node
+curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s-token) sh -s - \
+  --cluster-init \
+  --selinux \
+  "--disable" \
+  "traefik" \
+  "--disable" \
+  "servicelb" \
+  "--cluster-cidr" \
+  "10.42.0.0/16" \
+  "--service-cidr" \
+  "10.43.0.0/16"
+
+# Copy the generated token to the other nodes
+cat /var/lib/rancher/k3s/server/token
+
+# Join nodes
+curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s-token) sh -s - \
+  --selinux \
+  "--disable" \
+  "traefik" \
+  "--disable" \
+  "servicelb" \
+  "--cluster-cidr" \
+  "10.42.0.0/16" \
+  "--service-cidr" \
+  "10.43.0.0/16" \
+  --server https://kube1.reeselink.com:6443
+```
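
Before moving on, it's worth confirming that each joined node actually registered. A minimal check, run on one of the servers; this is not part of the commands above:

```bash
# Every server and agent should show up and eventually report Ready
k3s kubectl get nodes -o wide
```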
+
+## Network Checks
+
+At this point it's a good idea to make sure node communication is working as expected.
+
+```bash
+firewall-cmd --set-log-denied=all
+# You shouldn't see any dropped traffic from your nodes.
+dmesg --follow | egrep -i 'REJECT|DROP'
+```
 
 ## Kube Credentials
 
 On the operator
 
 ```bash
-export KUBE_SERVER_ADDRESS="https://k3s.reeselink.com:6443"
+export KUBE_SERVER_ADDRESS="https://kube1.reeselink.com:6443"
 
 # Copy the kube config down
-ssh k3s cat /etc/rancher/k3s/k3s.yaml | \
-  yq -y ".clusters[0].cluster.server = \"${KUBE_SERVER_ADDRESS}\"" > \
+ssh kube1-root cat /etc/rancher/k3s/k3s.yaml | \
+  yq -r ".clusters[0].cluster.server = \"${KUBE_SERVER_ADDRESS}\"" > \
   ~/.kube/admin-kube-config
+
+export KUBECONFIG=~/.kube/admin-kube-config
 ```
 
 ## Metal LB
@@ -141,15 +217,7 @@ IP. When that node goes down metallb simply advertises a new mac address for the
 address, effectively moving the IP to another node. This isn't really "load balancing" but
 "failover". Fortunately, that's exactly what we're looking for.
 
-```bash
-helm repo add metallb https://metallb.github.io/metallb
-helm repo update
-# Install metallb
-helm upgrade --install metallb \
-  --namespace kube-system \
-  metallb/metallb
-```
+[Install MetalLB](/active/kubernetes_metallb/metallb.md)
 
 MetalLB doesn't know what IP addresses are available for it to allocate so
 we'll have to provide it with a list. The


@@ -1,6 +1,6 @@
 name = "fedora43-base"
 description = "Fedora Base Installation"
-version = "0.2.2"
+version = "0.2.3"
 distro = "fedora-43"
 modules = []
 groups = []


@@ -47,7 +47,7 @@ dnf install openscap-scanner scap-security-guide
 2. Push the toml to composer
 
 ```bash
-composer-cli blueprints push active/software_osbuild/fedora42-base.toml
+composer-cli blueprints push active/software_osbuild/fedora43-base.toml
 
 # List blueprints
 composer-cli blueprints list
@@ -60,7 +60,7 @@ dnf install openscap-scanner scap-security-guide
 composer-cli compose types
 
 # Build the image
-composer-cli compose start fedora42-base qcow2
+composer-cli compose start fedora43-base qcow2
 
 # Check status
 watch composer-cli compose status
@@ -94,7 +94,7 @@ dnf install openscap-scanner scap-security-guide
   --network bridge:virbr0 \
   --graphics none \
   --console pty,target.type=virtio \
-  --import --disk "path=active/software_osbuild/secrets/fedora43-base.qcow2,bus=virtio"
+  --import --disk "path=active/software_osbuild/secrets/fedora43base.qcow2,bus=virtio"
 ```
 
 ### Image Build and Watch One Liner


@@ -72,7 +72,8 @@ script/run \
   --uri 'tcp://127.0.0.1:10400' \
   --threshold '0.8' \
   --preload-model 'jarvis_v2' \
-  --debug-probability
+  --debug \
+  --custom-model-dir /home/ducoterra/models
 ```
 
 ## Install Wyoming
@@ -87,21 +88,21 @@ uv pip install .
 uv pip install webrtc-noise-gain==1.2.3
 
 # Copy listen and done sounds
-cp ~/Homelab/active/systemd_wyoming/{listening.wav,finished.wav} ~/wyoming-satellite/sounds
+scp /active/software_wyoming/{listening.wav,finished.wav} ~/wyoming-satellite/sounds
 
-# typical wyoming command
+# Add wake-uri and wake-word-name to your wyoming run
 script/run \
   --name 'Living Room' \
   --uri 'tcp://0.0.0.0:10700' \
-  --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw' \
-  --snd-command 'aplay -r 22050 -c 1 -f S16_LE -t raw' \
-  --awake-wav /root/wyoming-satellite/sounds/listening.wav \
-  --done-wav /root/wyoming-satellite/sounds/finished.wav \
+  --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw -D plughw:CARD=Speaker,DEV=0' \
+  --snd-command 'aplay -r 22050 -c 1 -f S16_LE -t raw -D plughw:CARD=Speaker,DEV=0' \
+  --awake-wav /home/ducoterra/wyoming-satellite/sounds/listening.wav \
+  --done-wav /home/ducoterra/wyoming-satellite/sounds/finished.wav \
   --synthesize-command tee \
   --transcript-command tee \
   --wake-uri 'tcp://127.0.0.1:10400' \
-  --wake-word-name 'hey dick head' \
+  --wake-word-name 'hey jarvis' \
   --wake-refractory-seconds 1
 
 # Allow through firewall


@@ -0,0 +1,141 @@
import os
import subprocess
from functools import wraps
from typing import Callable

from langchain.agents import create_agent
from langchain_openai import ChatOpenAI


def make_verbose(func: Callable):
    @wraps(func)
    def wrapper(*args, **kwargs):
        print("==========")
        print(f"Calling {func.__name__} with params ({', '.join(args)}) and ({kwargs})")
        result = func(*args, **kwargs)
        print("==========")
        return result

    return wrapper


def run_command(command: list[str]) -> str:
    """Runs a command with subprocess.Popen and returns the stdout, stderr in a single string"""
    process = subprocess.Popen(
        command,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    stdouts: list[str] = []
    stderrs: list[str] = []
    if process.stdout:
        print("**STDOUT**")
        for line in process.stdout:
            line = line.strip()
            print(line)
            stdouts.append(line)
    if process.stderr:
        print("**STDERR**")
        for line in process.stderr:
            line = line.strip()
            print(line)
            stderrs.append(line)
    stdout_text = "\n".join(stdouts)
    stderr_text = "\n".join(stderrs)
    output = f"**STDOUT**\n{stdout_text}\n\n**STDERR**\n{stderr_text}"
    return output


@make_verbose
def list_vms() -> str:
    """Returns the names of the VMs running on the hypervisor."""
    output = run_command(["ssh", "driveripper", "virsh", "list", "--name"])
    return output


@make_verbose
def check_vm_type(server_name: str) -> str:
    """Returns various information about a given server like OS and version. server_name should be a server from list_vms."""
    output = run_command(["ssh", f"{server_name}-root", "cat", "/etc/*-release"])
    return output


@make_verbose
def get_updates_fedora(server_name: str) -> str:
    """Check for updates for a given Fedora server"""
    output = run_command(["ssh", f"{server_name}-root", "dnf", "check-update"])
    return output


@make_verbose
def get_security_updates_fedora(server_name: str) -> str:
    """Checks only for security updates for a given Fedora server"""
    output = run_command(
        ["ssh", f"{server_name}-root", "dnf", "check-update", "--security"]
    )
    return output


@make_verbose
def perform_security_updates_fedora(server_name: str) -> str:
    """Applies security updates for a given Fedora server"""
    output = run_command(
        ["ssh", f"{server_name}-root", "dnf", "update", "--security", "-y"]
    )
    return output


@make_verbose
def perform_security_updates_ubuntu(server_name: str) -> str:
    """Applies security updates for a given Ubuntu server"""
    output = run_command(["ssh", f"{server_name}-root", "apt", "update", "-y"])
    return output


def get_api_key() -> str:
    return os.getenv("OPENAI_API_KEY", "placeholder")


if __name__ == "__main__":
    # Run the agent
    llm = ChatOpenAI(
        model="instruct",
        base_url="https://llama-instruct.reeselink.com",
        api_key=get_api_key(),
        temperature=0.7,
        timeout=30,
        max_retries=2,
        verbose=True,
        top_p=1,
    )
    agent = create_agent(
        model=llm,
        tools=[
            list_vms,
            check_vm_type,
            get_updates_fedora,
            get_security_updates_fedora,
            perform_security_updates_fedora,
            perform_security_updates_ubuntu,
        ],
        system_prompt="You are a helpful assistant",
    )
    result = agent.invoke(
        {
            "messages": [
                {
                    "role": "user",
                    "content": (
                        "List all the available servers. Then, for each server, check the "
                        "server's OS and use the appropriate update check tool to check for "
                        "security updates. If any server needs security updates, apply them with "
                        "the appropriate update tool. Finally, provide a brief summary of what "
                        "you did."
                    ),
                }
            ]
        }
    )
    print(result["messages"][-1].content)
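
A minimal way to exercise the script above, assuming it is saved as agent.py and using the packages it imports; the API key value is just the placeholder the script falls back to:

```bash
pip install langchain langchain-openai
export OPENAI_API_KEY="placeholder"
python agent.py
```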


@@ -100,13 +100,14 @@ def get_api_key() -> str:
 if __name__ == "__main__":
     # Run the agent
     llm = ChatOpenAI(
-        model="qwen3.5-35b-a3b",
-        base_url="https://llama-cpp.reeselink.com",
+        model="instruct",
+        base_url="https://llama-instruct.reeselink.com",
         api_key=get_api_key,
-        temperature=0.95,
+        temperature=0.7,
         timeout=30,
         max_retries=2,
         verbose=True,
+        top_p=1,
     )
     agent = create_agent(