Framework Desktop

BIOS

https://knowledgebase.frame.work/en_us/changing-memory-allocation-amd-ryzen-ai-max-300-series-By1LG5Yrll

  1. Set GPU memory to 96GB
  2. Add the following kernel args to /etc/default/grub (set allocatable VRAM to 112GB): ttm.pages_limit=28027343750 ttm.page_pool_size=28027343750
  3. Regenerate the grub config: grub2-mkconfig -o /boot/grub2/grub.cfg
  4. Reboot

References

https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html

Notes

Volume Locations

~/.local/share/containers/storage/volumes/

Setup

Create the AI user

# As root: create the local ai user. This is the user you'll launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p ~/.config/containers/systemd/

Models are big. You'll want some tools to help find large files quickly when space runs out.

Add this to your .bashrc:

# Calculate all folder sizes in current dir 
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# Calculate all file sizes in current dir
alias {dufile,duf}='ls -lhSr'
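The aliases only load in interactive shells, but the underlying commands work anywhere. A quick demo in a throwaway directory (all paths here are illustrative):

```shell
# Scratch directory to demo the two commands behind dud/duf
demo=$(mktemp -d)
mkdir -p "$demo/small-dir" "$demo/big-dir"
head -c 1024 /dev/zero > "$demo/small-dir/a.bin"
head -c 8192 /dev/zero > "$demo/big-dir/b.bin"
cd "$demo"
du -h --max-depth 1 | sort -h   # dud: per-directory sizes, smallest first
ls -lhSr                        # duf: files sorted by size, largest last
```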

Create the models dir

mkdir -p /home/ai/models/{text,image,video}
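The brace expansion above creates all three subdirectories in one command. The same pattern, demonstrated under /tmp rather than /home/ai:

```shell
# {text,image,video} expands to three separate paths; -p creates parents as needed
mkdir -p /tmp/models-demo/{text,image,video}
ls /tmp/models-demo
```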

Install the Hugging Face CLI

https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started

# Install
curl -LsSf https://hf.co/cli/install.sh | bash

# Login
hf auth login

Download models

Text models

https://huggingface.co/ggml-org/collections

# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF

# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF

# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf

# devstral-small-2-24b
mkdir /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf

# ministral-3-14b
mkdir /home/ai/models/text/ministral-3-14b
hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF

# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf
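The mkdir-then-download pattern repeats for every model, so a loop can tidy it up. A sketch (dry run: the echo prints each command instead of executing it, and the /tmp path stands in for /home/ai/models/text):

```shell
# Map local dir names to Hugging Face repos, then create dirs and print
# the download command for each. Drop the echo to actually download.
declare -A repos=(
  [gpt-oss-20b]="ggml-org/gpt-oss-20b-GGUF"
  [ministral-3-14b]="ggml-org/Ministral-3-14B-Reasoning-2512-GGUF"
)
base=/tmp/models-demo/text   # use /home/ai/models/text on the real host
for name in "${!repos[@]}"; do
  mkdir -p "$base/$name"
  echo hf download --local-dir "$base/$name" "${repos[$name]}"
done
```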

Image models

Create the systemd-ai pod

You'll at least want the ai pod and network. Copy ai.pod and ai.network out of quadlets into ~/.config/containers/systemd.

Then run systemctl --user daemon-reload && systemctl --user start ai-pod
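For reference, a minimal ai.pod sketch — illustrative only, since the real files live in the repo's quadlets directory. Quadlets name the generated pod systemd-<file> unless PodName= overrides it, which is why the file ai.pod yields the pod systemd-ai and the unit ai-pod.service; the published ports here are assumptions based on the services further down.

```ini
# ~/.config/containers/systemd/ai.pod (illustrative sketch)
[Pod]
Network=ai.network
PublishPort=8000:8000
PublishPort=8080:8080

[Install]
WantedBy=default.target
```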

llama.cpp

https://github.com/ggml-org/llama.cpp/tree/master/tools/server

# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
podman build -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest -f .devops/vulkan.Dockerfile .

# Run llama server (available on port 8000)
# For gpt-oss-120b, add `--n-cpu-moe 32` to offload MoE expert layers to the CPU, keeping a minimal number in VRAM
podman run \
--rm \
--name llama-server-demo \
--pod systemd-ai \
--device=/dev/kfd \
--device=/dev/dri \
-v /home/ai/models/text:/models:z \
localhost/llama-cpp-vulkan:latest \
--port 8000 \
-c 0 \
-b 2048 \
-ub 2048 \
--perf \
--n-gpu-layers all \
--jinja \
--models-max 1 \
--models-dir /models
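llama-server speaks an OpenAI-compatible HTTP API, so a quick smoke test is a chat completion against port 8000. The payload below is a generic example; the curl lines are commented out because they need the server running:

```shell
# OpenAI-style chat payload for llama-server's /v1/chat/completions endpoint
payload='{"messages":[{"role":"user","content":"Say hello"}],"max_tokens":32}'
echo "$payload"
# With the server up:
# curl -s http://localhost:8000/health
# curl -s http://localhost:8000/v1/chat/completions \
#   -H 'Content-Type: application/json' -d "$payload"
```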

stable-diffusion.cpp

Server: https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/server

CLI: https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/cli

git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
# Generate with z-turbo
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
--vae /models/image/z-turbo/ae.safetensors  \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
-v \
-H 1024 \
-W 1024 \
--seed -1 \
--steps 8 \
--vae-conv-direct \
-o /output/output.png \
-p "A photorealistic dragon"

# Edit with flux kontext
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
--vae /models/image/flux-1-kontext/ae.safetensors \
--clip_l /models/image/flux-1-kontext/clip_l.safetensors \
--t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
--cfg-scale 1.0 \
--sampling-method euler \
--seed -1 \
--steps 20 \
-H 1024 \
-W 1024 \
-r /output/everquest_logo.png \
-p "change 'EverQuest' to 'EverSteak'" \
-o /output/output.png

open-webui

mkdir -p /home/ai/.env
# Create a file called open-webui-env containing: WEBUI_SECRET_KEY="some-random-key"
scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/
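One way to generate that random key (the /tmp/ai-env-demo path is just for illustration; on the host the file belongs in /home/ai/.env/):

```shell
# Write a 32-byte random secret in the WEBUI_SECRET_KEY format open-webui expects
mkdir -p /tmp/ai-env-demo
key=$(head -c 32 /dev/urandom | base64 | tr -d '=+/')
printf 'WEBUI_SECRET_KEY="%s"\n' "$key" > /tmp/ai-env-demo/open-webui-env
cat /tmp/ai-env-demo/open-webui-env
```

Hand the file to the container with podman's --env-file flag, or bake it into the quadlet.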

# Will be available on port 8080 (joins the quadlet pod, which is named systemd-ai)
podman run \
-d \
--pod systemd-ai \
--env-file /home/ai/.env/open-webui-env \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main

Install the whole thing with quadlets (TM)

scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-pod.service

Install the update script

# Copy update script and run it (assumes you have llama.cpp and stable-diffusion.cpp)
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh