diff --git a/active/device_framework_desktop/framework_desktop.md b/active/device_framework_desktop/framework_desktop.md index 8eae31a..77e1e54 100644 --- a/active/device_framework_desktop/framework_desktop.md +++ b/active/device_framework_desktop/framework_desktop.md @@ -6,36 +6,7 @@ - [Notes](#notes) - [Firmware and Kernel](#firmware-and-kernel) - [Kernel args](#kernel-args) - - [Volume Locations](#volume-locations) - - [Setup](#setup) - - [Create the AI user](#create-the-ai-user) - - [Helper aliases](#helper-aliases) - - [Create the models dir](#create-the-models-dir) - - [Install the Hugging Face CLI](#install-the-hugging-face-cli) - - [Samba Model Storage](#samba-model-storage) - - [Download models](#download-models) - - [Text models](#text-models) - - [GPT-OSS](#gpt-oss) - - [Mistral](#mistral) - - [Nemotron](#nemotron) - - [Qwen](#qwen) - - [GLM](#glm) - - [Llama](#llama) - - [Gemma](#gemma) - - [Dolphin (Abliterated)](#dolphin-abliterated) - - [Image models](#image-models) - - [Z-Image](#z-image) - - [Flux](#flux) - - [Qwen Image 2512](#qwen-image-2512) - - [Embedding Models](#embedding-models) - - [Nomic](#nomic) - - [llama.cpp](#llamacpp) - - [stable-diffusion.cpp](#stable-diffusioncpp) - - [open-webui](#open-webui) - - [VLLM](#vllm) - - [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm) - - [Install the update script](#install-the-update-script) - - [Install Guest Open Webui with Start/Stop Services](#install-guest-open-webui-with-startstop-services) + - [AI](#ai) ## BIOS @@ -65,517 +36,6 @@ amd_iommu=off amdgpu.gttsize=126976 ttm.pages_limit=32505856 Then `grub2-mkconfig -o /boot/grub2/grub.cfg` and `reboot`. -### Volume Locations +## AI -`~/.local/share/containers/storage/volumes/` - -## Setup - -### Create the AI user - -```bash -# Create your local ai user. This will be the user you launch podman processes from. -useradd -m ai -loginctl enable-linger ai -su -l ai -mkdir -p /home/ai/.config/containers/systemd/ -mkdir -p /home/ai/.ssh -``` - -Models are big. You'll want some tools to help find large files quickly when space runs out. - -### Helper aliases - -Add these to your .bashrc: - -```bash -# Calculate all folder sizes in current dir -alias {dudir,dud}='du -h --max-depth 1 | sort -h' - -# Calculate all file sizes in current dir -alias {dufile,duf}='ls -lhSr' - -# Restart llama-server / follow logs -alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service" -alias llama-logs="journalctl --user -fu llama-server" - -# Restart stable diffusion gen and edit server / follow logs -alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server' -alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server' -alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server' -alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server' -``` - -### Create the models dir - -```bash -mkdir -p /home/ai/models/{text,image,video,embedding,tts,stt} -``` - -### Install the Hugging Face CLI - - - -```bash -# Install -curl -LsSf https://hf.co/cli/install.sh | bash - -# Login -hf auth login -``` - -### Samba Model Storage - -I recommend adding network storage for keeping models offloaded. This mounts a samba share at `/srv/models`. 
- -```bash -# Add this to /etc/fstab -//driveripper.reeselink.com/smb_models /srv/models cifs _netdev,nofail,uid=1001,gid=1001,credentials=/etc/samba/credentials 0 0 - -# Then mount -systemctl daemon-reload -mount -a --mkdir -``` - -Here are some sync commands that I use to keep the samba share in sync with the home directory: - -```bash -# Sync models from home dir to the samba share -rsync -av --progress /home/ai/models/ /srv/models/ -``` - -### Download models - -#### Text models - - - -##### GPT-OSS - -```bash -# gpt-oss-120b -mkdir /home/ai/models/text/gpt-oss-120b -hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF - -# gpt-oss-20b -mkdir /home/ai/models/text/gpt-oss-20b -hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF -``` - -##### Mistral - -```bash -# devstral-2-123b -mkdir /home/ai/models/text/devstral-2-123b -hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf -hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf - -# devstral-small-2-24b -mkdir /home/ai/models/text/devstral-small-2-24b -hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf - -# ministral-3-14b -mkdir /home/ai/models/text/ministral-3-14b -hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF - -# ministral-3-3b-instruct -mkdir /home/ai/models/text/ministral-3-3b-instruct -hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF -``` - -##### Nemotron - -```bash -# nemotron-nano-30b -mkdir /home/ai/models/text/nemotron-nano-30b -hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf -``` - -##### Qwen - -```bash -# qwen3-30b-a3b-thinking -mkdir /home/ai/models/text/qwen3-30b-a3b-thinking -hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-thinking ggml-org/Qwen3-30B-A3B-GGUF Qwen3-30B-A3B-Q4_K_M.gguf - -# qwen3-30b-a3b-instruct -mkdir /home/ai/models/text/qwen3-30b-a3b-instruct -hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF - -# qwen3-coder-30b-a3b-instruct -mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct -hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF - -# qwen3-coder-next -mkdir /home/ai/models/text/qwen3-coder-next -hf download --local-dir /home/ai/models/text/qwen3-coder-next unsloth/Qwen3-Coder-Next-GGUF --include " 5_K_M/*.gguf" - -# qwen3-vl-30b-thinking -mkdir /home/ai/models/text/qwen3-vl-30b-thinking -hf download --local-dir /home/ai/models/text/qwen3-vl-30b-thinking unsloth/Qwen3-VL-30B-A3B-Thinking-1M-GGUF Qwen3-VL-30B-A3B-Thinking-1M-Q4_K_M.gguf -hf download --local-dir /home/ai/models/text/qwen3-vl-30b-thinking unsloth/Qwen3-VL-30B-A3B-Thinking-1M-GGUF mmproj-F16.gguf - -# qwen3-vl-8b-instruct -mkdir /home/ai/models/text/qwen3-vl-8b-instruct -hf download --local-dir /home/ai/models/text/qwen3-vl-8b-instruct Qwen/Qwen3-VL-8B-Instruct-GGUF Qwen3VL-8B-Instruct-Q4_K_M.gguf -hf download --local-dir /home/ai/models/text/qwen3-vl-8b-instruct 
Qwen/Qwen3-VL-8B-Instruct-GGUF mmproj-Qwen3VL-8B-Instruct-Q8_0.gguf - -# qwen3-4b-2507-abliterated -mkdir /home/ai/models/text/qwen3-4b-2507-abliterated -hf download --local-dir /home/ai/models/text/qwen3-4b-2507-abliterated prithivMLmods/Qwen3-4B-2507-abliterated-GGUF Qwen3-4B-Thinking-2507-abliterated-GGUF/Qwen3-4B-Thinking-2507-abliterated.Q4_K_M.gguf - -# qwen3-48b-a4b-abliterated -mkdir /home/ai/models/text/qwen3-48b-a4b-abliterated -hf download --local-dir /home/ai/models/text/qwen3-48b-a4b-abliterated DavidAU/Qwen3-48B-A4B-Savant-Commander-Distill-12X-Closed-Open-Heretic-Uncensored-GGUF Qwen3-48B-A4B-Savant-Commander-Dstll-12X-Cl-Op-Hrtic-Uncen-Q4_K_M.gguf -``` - -##### GLM - -```bash -# glm-4.7-flash-30b -mkdir /home/ai/models/text/glm-4.7-flash-30b -hf download --local-dir /home/ai/models/text/glm-4.7-flash-30b unsloth/GLM-4.7-Flash-GGUF GLM-4.7-Flash-Q4_K_M.gguf - -# glm-4.6v -mkdir /home/ai/models/text/glm-4.6v -hf download --local-dir /home/ai/models/text/glm-4.6v unsloth/GLM-4.6V-GGUF --include "Q4_K_M/*.gguf" -hf download --local-dir /home/ai/models/text/glm-4.6v unsloth/GLM-4.6V-GGUF mmproj-F16.gguf - -# glm-4.6v-flash -mkdir /home/ai/models/text/glm-4.6v-flash -hf download --local-dir /home/ai/models/text/glm-4.6v-flash unsloth/GLM-4.6V-Flash-GGUF GLM-4.6V-Flash-Q4_K_M.gguf -hf download --local-dir /home/ai/models/text/glm-4.6v-flash unsloth/GLM-4.6V-Flash-GGUF mmproj-F16.gguf -``` - -##### Llama - -```bash -# llama4-scout -mkdir /home/ai/models/text/llama4-scout -# Remember to move the gguf files into the llama4-scout folder, otherwise it won't pick up -hf download --local-dir /home/ai/models/text/llama4-scout unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF --include "Q4_K_M/*.gguf" -hf download --local-dir /home/ai/models/text/llama4-scout unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF mmproj-F16.gguf -``` - -##### Gemma - -```bash -# Note "it" vs "pt" suffixes. "it" is instruction following, "pt" is the base model (not as good for out-of-the-box use) -# gemma-3-27b-it -mkdir /home/ai/models/text/gemma-3-27b-it -hf download --local-dir /home/ai/models/text/gemma-3-27b-it unsloth/gemma-3-27b-it-GGUF gemma-3-27b-it-Q4_K_M.gguf -hf download --local-dir /home/ai/models/text/gemma-3-27b-it unsloth/gemma-3-27b-it-GGUF mmproj-F16.gguf -``` - -##### Dolphin (Abliterated) - -```bash -# dolphin-x1-8b -mkdir /home/ai/models/text/dolphin-x1-8b -hf download --local-dir /home/ai/models/text/dolphin-x1-8b dphn/Dolphin-X1-8B-GGUF Dolphin-X1-8B-Q4_K_M.gguf - -# dolphin-mistral-24b-venice -mkdir /home/ai/models/text/dolphin-mistral-24b-venice -hf download --local-dir /home/ai/models/text/dolphin-mistral-24b-venice bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf -``` - -#### Image models - -##### Z-Image - -```bash -# z-turbo -# Fastest image generation in 8 steps. Great a text and prompt following. -# Lacks variety. -mkdir /home/ai/models/image/z-turbo -hf download --local-dir /home/ai/models/image/z-turbo QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf -hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors -hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf - -# z-image -# Full version of z-turbo. Needs 28-50 steps. 
-# Note, image quality not as good as z-turbo -mkdir /home/ai/models/image/z-image -hf download --local-dir /home/ai/models/image/z-image unsloth/Z-Image-GGUF z-image-Q4_K_M.gguf -hf download --local-dir /home/ai/models/image/z-image black-forest-labs/FLUX.1-schnell ae.safetensors -hf download --local-dir /home/ai/models/image/z-image unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf -``` - -##### Flux - -```bash -# flux2-klein -# Capable of generating images in 4 steps -mkdir /home/ai/models/image/flux2-klein -hf download --local-dir /home/ai/models/image/flux2-klein leejet/FLUX.2-klein-9B-GGUF flux-2-klein-9b-Q4_0.gguf -hf download --local-dir /home/ai/models/image/flux2-klein black-forest-labs/FLUX.2-dev ae.safetensors -hf download --local-dir /home/ai/models/image/flux2-klein unsloth/Qwen3-8B-GGUF Qwen3-8B-Q4_K_M.gguf - -# flux-1-kontext -mkdir /home/ai/models/image/flux-1-kontext -hf download --local-dir /home/ai/models/image/flux-1-kontext leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf -hf download --local-dir /home/ai/models/image/flux-1-kontext black-forest-labs/FLUX.1-dev ae.safetensors -hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders clip_l.safetensors -hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors -``` - -##### Qwen Image 2512 - -```bash - -``` - -#### Embedding Models - -##### Nomic - -```bash -# nomic-embed-text-v2 -mkdir /home/ai/models/embedding/nomic-embed-text-v2 -hf download --local-dir /home/ai/models/embedding/nomic-embed-text-v2 ggml-org/Nomic-Embed-Text-V2-GGUF -``` - -## llama.cpp - - - -```bash -# Build the llama.cpp container image -git clone https://github.com/ggml-org/llama.cpp.git -cd llama.cpp -export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S") - -# Vulkan -podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest . - -# ROCM -podman build -f .devops/rocm.Dockerfile -t llama-cpp-rocm:${BUILD_TAG} -t llama-cpp-rocm:latest . - -# Run llama server (Available on port 8000) -# Add `--n-cpu-moe 32` to gpt-oss-120b to keep minimal number of expert in GPU -podman run \ ---rm \ ---name llama-server-demo \ ---device=/dev/kfd \ ---device=/dev/dri \ ---pod systemd-ai-internal \ --v /home/ai/models/text:/models:z \ -localhost/llama-cpp-vulkan:latest \ ---port 8000 \ --c 32000 \ ---perf \ ---n-gpu-layers all \ ---jinja \ ---models-max 1 \ ---models-dir /models - -# ROCM -podman run \ ---rm \ ---name llama-server-demo \ ---device=/dev/kfd \ ---device=/dev/dri \ ---pod systemd-ai-internal \ --v /home/ai/models/text:/models:z \ -localhost/llama-cpp-rocm:latest \ ---port 8000 \ --c 0 \ ---perf \ ---n-gpu-layers all \ ---jinja \ ---models-max 1 \ ---models-dir /models -``` - -## stable-diffusion.cpp - -Server: - -CLI: - -```bash -git clone https://github.com/leejet/stable-diffusion.cpp.git -cd stable-diffusion.cpp -git submodule update --init --recursive -export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S") - -# Vulkan -podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest . 
-``` - -```bash -# z-turbo -podman run --rm \ --v /home/ai/models:/models:z \ --v /home/ai/output:/output:z \ ---device /dev/kfd \ ---device /dev/dri \ -localhost/stable-diffusion-cpp:latest \ ---diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \ ---vae /models/image/z-turbo/ae.safetensors \ ---llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \ ---cfg-scale 1.0 \ --v \ ---seed -1 \ ---steps 8 \ ---vae-conv-direct \ --H 1024 \ --W 1024 \ --o /output/output.png \ --p "A photorealistic dragon" - -# z-image -podman run --rm \ --v /home/ai/models:/models:z \ --v /home/ai/output:/output:z \ ---device /dev/kfd \ ---device /dev/dri \ -localhost/stable-diffusion-cpp:latest \ ---diffusion-model /models/image/z-image/z-image-Q4_K_M.gguf \ ---vae /models/image/z-image/ae.safetensors \ ---llm /models/image/z-image/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \ ---cfg-scale 1.0 \ --v \ ---seed -1 \ ---steps 28 \ ---vae-conv-direct \ --H 1024 \ --W 1024 \ --o /output/output.png \ --p "A photorealistic dragon" - -# flux2-klein -podman run --rm \ --v /home/ai/models:/models:z \ --v /home/ai/output:/output:z \ ---device /dev/kfd \ ---device /dev/dri \ -localhost/stable-diffusion-cpp:latest \ ---diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \ ---vae /models/image/flux2-klein/ae.safetensors \ ---llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \ ---cfg-scale 1.0 \ ---steps 4 \ --v \ ---seed -1 \ ---vae-conv-direct \ --H 1024 \ --W 1024 \ --o /output/output.png \ --p "A photorealistic dragon" - -# Edit with flux2 klein -.\bin\Release\sd-cli.exe \ ---diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \ ---vae /models/image/flux2-klein/ae.safetensors \ ---llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \ ---cfg-scale 1.0 \ ---sampling-method euler \ --v \ ---vae-conv-direct \ ---steps 4 --r .\kontext_input.png \ --p "change 'flux.cpp' to 'klein.cpp'" \ - -# Edit with flux kontext -podman run --rm \ --v /home/ai/models:/models:z \ --v /home/ai/output:/output:z \ ---device /dev/kfd \ ---device /dev/dri \ -localhost/stable-diffusion-cpp:latest \ ---diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \ ---vae /models/image/flux-1-kontext/ae.safetensors \ ---clip_l /models/image/flux-1-kontext/clip_l.safetensors \ ---t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \ ---cfg-scale 1.0 \ ---sampling-method euler \ ---seed -1 \ ---steps 28 \ ---vae-conv-direct \ --v \ --H 512 \ --W 512 \ --o /output/output.png \ --r /output/everquest_logo.png \ --p "Add the text 'EverQuest'" -``` - -## open-webui - -```bash -mkdir /home/ai/.env -# Create a file called open-webui-env with `WEBUI_SECRET_KEY="some-random-key" -scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/ - -# Will be available on port 8080 -podman run \ --d \ ---pod ai \ --v open-webui:/app/backend/data \ ---name open-webui \ ---restart always \ -ghcr.io/open-webui/open-webui:main -``` - -Use the following connections: - -| Service | Endpoint | -| -------------------- | ----------------------------------------- | -| llama.cpp | | -| stable-diffusion.cpp | | - -## VLLM - -```bash ---group-add=video \ ---cap-add=SYS_PTRACE \ ---security-opt seccomp=unconfined \ ---env "HF_TOKEN=$HF_TOKEN" \ ---ipc=host \ - -mkdir -p /home/ai/vllm/.cache/huggingface - -podman run --rm \ ---device /dev/kfd \ ---device /dev/dri \ --v /home/ai/vllm/.cache/huggingface:/root/.cache/huggingface:z \ --p 8002:8000 \ -docker.io/vllm/vllm-openai-rocm:latest \ ---model 
Qwen/Qwen3-0.6B -``` - -## Install the whole thing with quadlets (TM) - -```bash -# Installs and runs all services in `quadlets/` -scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/ -ssh deskwork-ai -systemctl --user daemon-reload -systemctl --user restart ai-internal-pod.service -``` - -Note, all services will be available at `host.containers.internal`. So llama.cpp -will be up at `http://host.containers.internal:8000`. - -### Install the update script - -```bash -# 1. Builds the latest llama.cpp and stable-diffusion.cpp -# 2. Pulls the latest open-webui -# 3. Restarts all services -scp active/device_framework_desktop/update-script.sh deskwork-ai: -ssh deskwork-ai -chmod +x update-script.sh -./update-script.sh -``` - -## Install Guest Open Webui with Start/Stop Services - -```bash -scp -r active/device_framework_desktop/systemd/. deskwork-ai:.config/systemd/user/ -ssh deskwork-ai -systemctl --user daemon-reload -systemctl --user enable open-webui-guest-start.timer -systemctl --user enable open-webui-guest-stop.timer -``` +See [Self Hosted AI Stack](/active/software_ai_stack/ai_stack.md) \ No newline at end of file diff --git a/active/software_ai_stack/ai_stack.md b/active/software_ai_stack/ai_stack.md new file mode 100644 index 0000000..4a0d822 --- /dev/null +++ b/active/software_ai_stack/ai_stack.md @@ -0,0 +1,499 @@ +# Self Hosted AI Stack + +- [Self Hosted AI Stack](#self-hosted-ai-stack) + - [Notes](#notes) + - [Podman Volume Locations](#podman-volume-locations) + - [Setup](#setup) + - [Create the AI user](#create-the-ai-user) + - [Helper aliases](#helper-aliases) + - [Create the models dir](#create-the-models-dir) + - [Install the Hugging Face CLI](#install-the-hugging-face-cli) + - [Samba Model Storage](#samba-model-storage) + - [Download models](#download-models) + - [Text models](#text-models) + - [GPT-OSS](#gpt-oss) + - [Mistral](#mistral) + - [Qwen](#qwen) + - [GLM](#glm) + - [Gemma](#gemma) + - [Dolphin](#dolphin) + - [Image models](#image-models) + - [Z-Image](#z-image) + - [Flux](#flux) + - [Embedding Models](#embedding-models) + - [Nomic](#nomic) + - [llama.cpp](#llamacpp) + - [stable-diffusion.cpp](#stable-diffusioncpp) + - [open-webui](#open-webui) + - [Install Services with Quadlets](#install-services-with-quadlets) + - [Internal and External Pods](#internal-and-external-pods) + - [Llama CPP Server](#llama-cpp-server) + - [Stable Diffusion CPP](#stable-diffusion-cpp) + - [Open Webui](#open-webui-1) + - [Install the update script](#install-the-update-script) + - [Install Guest Open Webui with Start/Stop Services](#install-guest-open-webui-with-startstop-services) + - [Benchmark Results](#benchmark-results) + +## Notes + +### Podman Volume Locations + +`~/.local/share/containers/storage/volumes/` + +## Setup + +### Create the AI user + +```bash +# Create your local ai user. This will be the user you launch podman processes from. +useradd -m ai +loginctl enable-linger ai +su -l ai +mkdir -p /home/ai/.config/containers/systemd/ +mkdir -p /home/ai/.ssh +``` + +Models are big. You'll want some tools to help find large files quickly when space runs out. 
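+
+For a quick one-off audit, something along these lines works (a minimal sketch using standard GNU findutils/coreutils; assumes your models live under `/home/ai/models`, adjust otherwise):
+
+```bash
+# List the ten largest files under the models tree, biggest last
+find /home/ai/models -type f -printf '%s\t%p\n' | sort -n | tail -n 10 | numfmt --field=1 --to=iec
+```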
+
+### Helper aliases
+
+Add these to your `.bashrc`:
+
+```bash
+# Calculate all folder sizes in current dir
+alias {dudir,dud}='du -h --max-depth 1 | sort -h'
+
+# Calculate all file sizes in current dir
+alias {dufile,duf}='ls -lhSr'
+
+# Restart llama-server / follow logs
+alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
+alias llama-logs="journalctl --user -fu llama-server"
+
+# Restart stable diffusion gen and edit server / follow logs
+alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
+alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
+alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
+alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
+```
+
+### Create the models dir
+
+```bash
+mkdir -p /home/ai/models/{text,image,video,embedding,tts,stt}
+```
+
+### Install the Hugging Face CLI
+
+```bash
+# Install
+curl -LsSf https://hf.co/cli/install.sh | bash
+
+# Login
+hf auth login
+```
+
+### Samba Model Storage
+
+I recommend adding network storage for offloading models. This mounts a Samba share at `/srv/models`.
+
+```bash
+dnf install -y cifs-utils
+
+# Add this to /etc/fstab
+//driveripper.reeselink.com/smb_models /srv/models cifs _netdev,nofail,uid=1001,gid=1001,credentials=/etc/samba/credentials 0 0
+
+# Then mount
+systemctl daemon-reload
+mount -a --mkdir
+```
+
+Here is a sync command I use to keep the Samba share in sync with the home directory:
+
+```bash
+# Sync models from home dir to the samba share
+rsync -av --progress /home/ai/models/ /srv/models/
+```
+
+### Download models
+
+In general I try to run at least 8-bit quantized models.
+
+#### Text models
+
+##### GPT-OSS
+
+```bash
+# gpt-oss-120b
+mkdir gpt-oss-120b && cd gpt-oss-120b
+hf download --local-dir . ggml-org/gpt-oss-120b-GGUF
+
+# gpt-oss-20b
+mkdir gpt-oss-20b && cd gpt-oss-20b
+hf download --local-dir . ggml-org/gpt-oss-20b-GGUF
+```
+
+##### Mistral
+
+```bash
+# devstral-small-2-24b
+mkdir devstral-small-2-24b && cd devstral-small-2-24b
+hf download --local-dir . ggml-org/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q8_0.gguf
+
+# ministral-3-14b
+mkdir ministral-3-14b && cd ministral-3-14b
+hf download --local-dir . ggml-org/Ministral-3-14B-Reasoning-2512-GGUF
+
+# ministral-3-3b-instruct
+mkdir ministral-3-3b-instruct && cd ministral-3-3b-instruct
+hf download --local-dir . ggml-org/Ministral-3-3B-Instruct-2512-GGUF
+```
+
+##### Qwen
+
+```bash
+# qwen3-30b-a3b-thinking
+mkdir qwen3-30b-a3b-thinking && cd qwen3-30b-a3b-thinking
+hf download --local-dir . ggml-org/Qwen3-30B-A3B-Thinking-2507-Q8_0-GGUF
+
+# qwen3-30b-a3b-instruct
+mkdir qwen3-30b-a3b-instruct && cd qwen3-30b-a3b-instruct
+hf download --local-dir . ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF
+
+# qwen3-vl-30b-a3b-thinking
+mkdir qwen3-vl-30b-a3b-thinking && cd qwen3-vl-30b-a3b-thinking
+hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF Qwen3VL-30B-A3B-Thinking-Q8_0.gguf
+hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Thinking-GGUF mmproj-Qwen3VL-30B-A3B-Thinking-F16.gguf
+
+# qwen3-vl-30b-a3b-instruct
+mkdir qwen3-vl-30b-a3b-instruct && cd qwen3-vl-30b-a3b-instruct
+hf download --local-dir . Qwen/Qwen3-VL-30B-A3B-Instruct-GGUF Qwen3VL-30B-A3B-Instruct-Q8_0.gguf
+hf download --local-dir . 
Qwen/Qwen3-VL-30B-A3B-Instruct-GGUF mmproj-Qwen3VL-30B-A3B-Instruct-F16.gguf
+
+# qwen3-coder-30b-a3b-instruct
+mkdir qwen3-coder-30b-a3b-instruct && cd qwen3-coder-30b-a3b-instruct
+hf download --local-dir . ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF
+
+# qwen3-coder-next
+mkdir qwen3-coder-next && cd qwen3-coder-next
+hf download --local-dir . unsloth/Qwen3-Coder-Next-GGUF --include "Q8_0/*.gguf"
+```
+
+##### GLM
+
+```bash
+# glm-4.7-flash-30b
+mkdir glm-4.7-flash-30b && cd glm-4.7-flash-30b
+hf download --local-dir . unsloth/GLM-4.7-Flash-GGUF GLM-4.7-Flash-Q8_0.gguf
+```
+
+##### Gemma
+
+```bash
+# Note "it" vs "pt" suffixes. "it" is instruction following, "pt" is the base model (not as good for out-of-the-box use)
+# gemma-3-27b-it
+mkdir gemma-3-27b-it && cd gemma-3-27b-it
+hf download --local-dir . unsloth/gemma-3-27b-it-GGUF gemma-3-27b-it-Q8_0.gguf
+hf download --local-dir . unsloth/gemma-3-27b-it-GGUF mmproj-F16.gguf
+```
+
+##### Dolphin
+
+```bash
+# dolphin-mistral-24b-venice
+mkdir dolphin-mistral-24b-venice && cd dolphin-mistral-24b-venice
+hf download --local-dir . bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q8_0.gguf
+```
+
+#### Image models
+
+##### Z-Image
+
+```bash
+# z-turbo
+# Fastest image generation in 8 steps. Great at text and prompt following.
+# Lacks variety.
+mkdir /home/ai/models/image/z-turbo && cd /home/ai/models/image/z-turbo
+hf download --local-dir . leejet/Z-Image-Turbo-GGUF z_image_turbo-Q8_0.gguf
+hf download --local-dir . black-forest-labs/FLUX.1-schnell ae.safetensors
+hf download --local-dir . unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q8_0.gguf
+```
+
+##### Flux
+
+```bash
+# flux2-klein
+# Capable of editing images in 4 steps (though I recommend 5)
+mkdir /home/ai/models/image/flux2-klein && cd /home/ai/models/image/flux2-klein
+hf download --local-dir . leejet/FLUX.2-klein-9B-GGUF flux-2-klein-9b-Q8_0.gguf
+hf download --local-dir . black-forest-labs/FLUX.2-dev ae.safetensors
+hf download --local-dir . unsloth/Qwen3-8B-GGUF Qwen3-8B-Q8_0.gguf
+```
+
+#### Embedding Models
+
+##### Nomic
+
+```bash
+# nomic-embed-text-v2
+mkdir /home/ai/models/embedding/nomic-embed-text-v2
+hf download --local-dir /home/ai/models/embedding/nomic-embed-text-v2 ggml-org/Nomic-Embed-Text-V2-GGUF
+```
+
+## llama.cpp
+
+```bash
+# Build the llama.cpp container image
+git clone https://github.com/ggml-org/llama.cpp.git
+cd llama.cpp
+export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
+
+# Vulkan (better performance as of Feb 2026)
+podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .
+
+# ROCM
+podman build -f .devops/rocm.Dockerfile -t llama-cpp-rocm:${BUILD_TAG} -t llama-cpp-rocm:latest .
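+
+# (Optional) Sanity check: confirm both images built and were tagged as expected
+# before starting a server. Not required for the rest of the guide.
+podman images | grep llama-cpp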
+
+# Run llama demo server (Available on port 8000)
+podman run \
+--rm \
+--name llama-server-demo \
+--device=/dev/kfd \
+--device=/dev/dri \
+-v /home/ai/models/text:/models:z \
+-p 8000:8000 \
+localhost/llama-cpp-vulkan:latest \
+--host 0.0.0.0 \
+--port 8000 \
+-c 32768 \
+--perf \
+--n-gpu-layers all \
+--jinja \
+--models-max 1 \
+--models-dir /models
+```
+
+## stable-diffusion.cpp
+
+Server:
+
+CLI:
+
+```bash
+git clone https://github.com/leejet/stable-diffusion.cpp.git
+cd stable-diffusion.cpp
+git submodule update --init --recursive
+export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
+
+# Vulkan
+podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
+```
+
+```bash
+# Generate an image with z-turbo
+podman run --rm \
+-v /home/ai/models:/models:z \
+-v /home/ai/output:/output:z \
+--device /dev/kfd \
+--device /dev/dri \
+localhost/stable-diffusion-cpp:latest \
+--diffusion-model /models/image/z-turbo/z_image_turbo-Q8_0.gguf \
+--vae /models/image/z-turbo/ae.safetensors \
+--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q8_0.gguf \
+-v \
+--cfg-scale 1.0 \
+--vae-conv-direct \
+--diffusion-conv-direct \
+--fa \
+--mmap \
+--seed -1 \
+--steps 8 \
+-H 1024 \
+-W 1024 \
+-o /output/output.png \
+-p "A photorealistic dragon"
+
+# Edit the generated image with flux2-klein
+podman run --rm \
+-v /home/ai/models:/models:z \
+-v /home/ai/output:/output:z \
+--device /dev/kfd \
+--device /dev/dri \
+localhost/stable-diffusion-cpp:latest \
+--diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q8_0.gguf \
+--vae /models/image/flux2-klein/ae.safetensors \
+--llm /models/image/flux2-klein/Qwen3-8B-Q8_0.gguf \
+-v \
+--cfg-scale 1.0 \
+--sampling-method euler \
+--vae-conv-direct \
+--diffusion-conv-direct \
+--fa \
+--mmap \
+--steps 5 \
+-H 1024 \
+-W 1024 \
+-r /output/output.png \
+-o /output/edit.png \
+-p "Replace the dragon with an old car"
+```
+
+## open-webui
+
+```bash
+mkdir /home/ai/.env
+# Create a file called open-webui-env containing `WEBUI_SECRET_KEY="some-random-key"`
+scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/
+
+# Will be available on port 8080
+podman run \
+-d \
+-p 8080:8080 \
+-v open-webui:/app/backend/data \
+--name open-webui \
+--restart always \
+ghcr.io/open-webui/open-webui:main
+```
+
+Use the following connections (ports taken from the server configs in this guide):
+
+| Service                   | Endpoint                                 |
+| ------------------------- | ---------------------------------------- |
+| llama.cpp                 | `http://host.containers.internal:8000`   |
+| stable-diffusion.cpp      | `http://host.containers.internal:1234`   |
+| stable-diffusion.cpp edit | `http://host.containers.internal:1235`   |
+
+## Install Services with Quadlets
+
+### Internal and External Pods
+
+These will be used to restrict internet access to our llama.cpp and
+stable-diffusion.cpp services while allowing the frontend services to
+communicate with those containers.
+
+```bash
+scp -r active/software_ai_stack/quadlets_pods/* deskwork-ai:.config/containers/systemd/
+ssh deskwork-ai
+systemctl --user daemon-reload
+systemctl --user start ai-internal-pod.service ai-external-pod.service
+```
+
+### Llama CPP Server
+
+Installs the llama.cpp server to run our text models.
+
+```bash
+scp -r active/software_ai_stack/quadlets_llama_server/* deskwork-ai:.config/containers/systemd/
+ssh deskwork-ai
+systemctl --user daemon-reload
+systemctl --user restart ai-internal-pod.service
+```
+
+### Stable Diffusion CPP
+
+Installs the stable-diffusion.cpp server to run our image models.
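+
+The install commands are below. Once the pod has restarted, a quick way to confirm both image servers came up is to check their units (unit names match the helper aliases above):
+
+```bash
+# Verify the gen and edit servers started cleanly
+systemctl --user status stable-diffusion-gen-server stable-diffusion-edit-server
+```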
+
+```bash
+scp -r active/software_ai_stack/quadlets_stable_diffusion/* deskwork-ai:.config/containers/systemd/
+ssh deskwork-ai
+systemctl --user daemon-reload
+systemctl --user restart ai-internal-pod.service
+```
+
+### Open Webui
+
+Installs the Open WebUI frontend.
+
+```bash
+scp -r active/software_ai_stack/quadlets_openwebui/* deskwork-ai:.config/containers/systemd/
+ssh deskwork-ai
+systemctl --user daemon-reload
+systemctl --user restart ai-external-pod.service
+```
+
+Note that all services will be available at `host.containers.internal`; llama.cpp,
+for example, will be up at `http://host.containers.internal:8000`.
+
+### Install the update script
+
+```bash
+# 1. Builds the latest llama.cpp and stable-diffusion.cpp
+# 2. Pulls the latest open-webui
+# 3. Restarts all services
+scp active/software_ai_stack/update-script.sh deskwork-ai:
+ssh deskwork-ai
+chmod +x update-script.sh
+./update-script.sh
+```
+
+### Install Guest Open Webui with Start/Stop Services
+
+Optionally, install a guest Open WebUI service.
+
+```bash
+scp -r active/software_ai_stack/systemd/. deskwork-ai:.config/systemd/user/
+ssh deskwork-ai
+systemctl --user daemon-reload
+systemctl --user enable open-webui-guest-start.timer
+systemctl --user enable open-webui-guest-stop.timer
+```
+
+## Benchmark Results
+
+Benchmarks are run with [unsloth gpt-oss-20b Q8_0](https://huggingface.co/unsloth/gpt-oss-20b-GGUF/blob/main/gpt-oss-20b-Q8_0.gguf).
+
+```bash
+# Run the llama.cpp container (AMD)
+podman run -it --rm \
+--device=/dev/kfd \
+--device=/dev/dri \
+-v /home/ai/models/text:/models:z \
+--entrypoint /bin/bash \
+ghcr.io/ggml-org/llama.cpp:full-vulkan
+
+# Benchmark command
+./llama-bench -m /models/benchmark/gpt-oss-20b-Q8_0.gguf
+```
+
+Framework Desktop
+
+| model            | size      | params  | backend | ngl | test  | t/s            |
+| ---------------- | --------: | ------: | ------- | --: | ----: | -------------: |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan  |  99 | pp512 | 1128.50 ± 7.60 |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan  |  99 | tg128 | 77.94 ± 0.08   |
+
+| model            | size      | params  | backend | ngl | test  | t/s           |
+| ---------------- | --------: | ------: | ------- | --: | ----: | ------------: |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | ROCm    |  99 | pp512 | 526.05 ± 7.04 |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | ROCm    |  99 | tg128 | 70.98 ± 0.01  |
+
+AMD R9700
+
+| model            | size      | params  | backend | ngl | test  | t/s              |
+| ---------------- | --------: | ------: | ------- | --: | ----: | ---------------: |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan  |  99 | pp512 | 3756.79 ± 203.97 |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | Vulkan  |  99 | tg128 | 174.24 ± 0.32    |
+
+NVIDIA GeForce RTX 4080 SUPER
+
+| model            | size      | params  | backend | ngl | test  | t/s           |
+| ---------------- | --------: | ------: | ------- | --: | ----: | ------------: |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA    |  99 | tg128 | 193.28 ± 1.03 |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA    |  99 | tg256 | 193.55 ± 0.34 |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA    |  99 | tg512 | 187.39 ± 0.10 |
+
+NVIDIA GeForce RTX 3090
+
+| model            | size      | params  | backend | ngl | test  | t/s             |
+| ---------------- | --------: | ------: | ------- | --: | ----: | --------------: |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA    |  99 | pp512 | 4297.72 ± 35.60 |
+| gpt-oss 20B Q8_0 | 11.27 GiB | 20.91 B | CUDA    |  99 | tg128 | 197.73 ± 0.62   |
\ No newline at end of file
diff --git 
a/active/software_ai_stack/quadlets_guest/llama-embed.container b/active/software_ai_stack/quadlets_guest/llama-embed.container new file mode 100644 index 0000000..a800ef7 --- /dev/null +++ b/active/software_ai_stack/quadlets_guest/llama-embed.container @@ -0,0 +1,39 @@ +[Unit] +Description=A Llama CPP Server running an Embedding Model + +[Container] +# Shared AI internal pod +Pod=ai-internal.pod + +# Image is built locally via podman build +Image=localhost/llama-cpp-vulkan:latest + +# Downloaded models volume +Volume=/home/ai/models/embedding:/models:z + +# GPU Device +AddDevice=/dev/kfd +AddDevice=/dev/dri + +# Server command +Exec=--port 8001 \ + --n-gpu-layers all \ + --embeddings \ + -m /models/nomic-embed-text-v2/nomic-embed-text-v2-moe-q8_0.gguf + +# Health Check +HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8001/props || exit 1 +HealthInterval=10s +HealthRetries=3 +HealthStartPeriod=10s +HealthTimeout=30s +HealthOnFailure=kill + +[Service] +Restart=always +# Extend Timeout to allow time to pull the image +TimeoutStartSec=900 + +[Install] +# Start by default on boot +WantedBy=multi-user.target default.target diff --git a/active/device_framework_desktop/quadlets_beta/open-webui-guest.container b/active/software_ai_stack/quadlets_guest/open-webui-guest.container similarity index 100% rename from active/device_framework_desktop/quadlets_beta/open-webui-guest.container rename to active/software_ai_stack/quadlets_guest/open-webui-guest.container diff --git a/active/device_framework_desktop/quadlets/llama-server.container b/active/software_ai_stack/quadlets_llama_server/llama-server.container similarity index 98% rename from active/device_framework_desktop/quadlets/llama-server.container rename to active/software_ai_stack/quadlets_llama_server/llama-server.container index 8d0f43f..354552f 100644 --- a/active/device_framework_desktop/quadlets/llama-server.container +++ b/active/software_ai_stack/quadlets_llama_server/llama-server.container @@ -17,7 +17,7 @@ AddDevice=/dev/dri # Server command Exec=--port 8000 \ - -c 48000 \ + -c 16384 \ --perf \ --n-gpu-layers all \ --jinja \ diff --git a/active/device_framework_desktop/quadlets/open-webui.container b/active/software_ai_stack/quadlets_openwebui/open-webui.container similarity index 100% rename from active/device_framework_desktop/quadlets/open-webui.container rename to active/software_ai_stack/quadlets_openwebui/open-webui.container diff --git a/active/device_framework_desktop/quadlets/ai-external.network b/active/software_ai_stack/quadlets_pods/ai-external.network similarity index 100% rename from active/device_framework_desktop/quadlets/ai-external.network rename to active/software_ai_stack/quadlets_pods/ai-external.network diff --git a/active/device_framework_desktop/quadlets/ai-external.pod b/active/software_ai_stack/quadlets_pods/ai-external.pod similarity index 100% rename from active/device_framework_desktop/quadlets/ai-external.pod rename to active/software_ai_stack/quadlets_pods/ai-external.pod diff --git a/active/device_framework_desktop/quadlets/ai-internal.network b/active/software_ai_stack/quadlets_pods/ai-internal.network similarity index 100% rename from active/device_framework_desktop/quadlets/ai-internal.network rename to active/software_ai_stack/quadlets_pods/ai-internal.network diff --git a/active/device_framework_desktop/quadlets/ai-internal.pod b/active/software_ai_stack/quadlets_pods/ai-internal.pod similarity index 100% rename from active/device_framework_desktop/quadlets/ai-internal.pod rename to 
active/software_ai_stack/quadlets_pods/ai-internal.pod diff --git a/active/device_framework_desktop/quadlets/stable-diffusion-edit-server.container b/active/software_ai_stack/quadlets_stable_diffusion/stable-diffusion-edit-server.container similarity index 90% rename from active/device_framework_desktop/quadlets/stable-diffusion-edit-server.container rename to active/software_ai_stack/quadlets_stable_diffusion/stable-diffusion-edit-server.container index e99c7fb..e02ecf3 100644 --- a/active/device_framework_desktop/quadlets/stable-diffusion-edit-server.container +++ b/active/software_ai_stack/quadlets_stable_diffusion/stable-diffusion-edit-server.container @@ -21,15 +21,16 @@ Entrypoint=/sd-server # Server args Exec=-l 0.0.0.0 \ --listen-port 1235 \ - --diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \ + --diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q8_0.gguf \ --vae /models/image/flux2-klein/ae.safetensors \ - --llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \ + --llm /models/image/flux2-klein/Qwen3-8B-Q8_0.gguf \ -v \ - --cfg-scale 1.0 \ --sampling-method euler \ + --cfg-scale 1.0 \ --vae-conv-direct \ - --offload-to-cpu \ --diffusion-conv-direct \ + --fa \ + --mmap \ --seed -1 \ --steps 5 diff --git a/active/device_framework_desktop/quadlets/stable-diffusion-gen-server.container b/active/software_ai_stack/quadlets_stable_diffusion/stable-diffusion-gen-server.container similarity index 80% rename from active/device_framework_desktop/quadlets/stable-diffusion-gen-server.container rename to active/software_ai_stack/quadlets_stable_diffusion/stable-diffusion-gen-server.container index 42d7b57..d12466a 100644 --- a/active/device_framework_desktop/quadlets/stable-diffusion-gen-server.container +++ b/active/software_ai_stack/quadlets_stable_diffusion/stable-diffusion-gen-server.container @@ -21,12 +21,15 @@ Entrypoint=/sd-server # Server args Exec=-l 0.0.0.0 \ --listen-port 1234 \ - --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \ + --diffusion-model /models/image/z-turbo/z_image_turbo-Q8_0.gguf \ --vae /models/image/z-turbo/ae.safetensors \ - --llm /models/image/z-turbo/qwen_3_4b.safetensors \ + --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q8_0.gguf \ -v \ --cfg-scale 1.0 \ --vae-conv-direct \ + --diffusion-conv-direct \ + --fa \ + --mmap \ --seed -1 \ --steps 8 diff --git a/active/device_framework_desktop/systemd/open-webui-guest-start.service b/active/software_ai_stack/systemd/open-webui-guest-start.service similarity index 100% rename from active/device_framework_desktop/systemd/open-webui-guest-start.service rename to active/software_ai_stack/systemd/open-webui-guest-start.service diff --git a/active/device_framework_desktop/systemd/open-webui-guest-start.timer b/active/software_ai_stack/systemd/open-webui-guest-start.timer similarity index 100% rename from active/device_framework_desktop/systemd/open-webui-guest-start.timer rename to active/software_ai_stack/systemd/open-webui-guest-start.timer diff --git a/active/device_framework_desktop/systemd/open-webui-guest-stop.service b/active/software_ai_stack/systemd/open-webui-guest-stop.service similarity index 100% rename from active/device_framework_desktop/systemd/open-webui-guest-stop.service rename to active/software_ai_stack/systemd/open-webui-guest-stop.service diff --git a/active/device_framework_desktop/systemd/open-webui-guest-stop.timer b/active/software_ai_stack/systemd/open-webui-guest-stop.timer similarity index 100% rename from 
active/device_framework_desktop/systemd/open-webui-guest-stop.timer rename to active/software_ai_stack/systemd/open-webui-guest-stop.timer diff --git a/active/device_framework_desktop/update-script.sh b/active/software_ai_stack/update-script.sh similarity index 100% rename from active/device_framework_desktop/update-script.sh rename to active/software_ai_stack/update-script.sh