Compare commits

...

6 Commits

| SHA1 | Message | Checks | Date |
| --- | --- | --- | --- |
| d4571c9b70 | add qwen3 base and coder | ✓ build-and-push-ddns (1m21s) | 2026-01-27 19:32:24 -05:00 |
| 4c0a263d50 | fix vulkan 4gb limit | ✓ build-and-push-ddns (1m0s) | 2026-01-21 13:33:33 -05:00 |
| 70259d9542 | more framework desktop local ai fixes | ✓ build-and-push-ddns (1m12s) | 2026-01-20 11:22:59 -05:00 |
| 4f3102a2ff | local ai checkpoint | | 2026-01-19 20:50:05 -05:00 |
| ed65f8924d | add initial framework desktop config | ✓ build-and-push-ddns (1m13s) | 2026-01-13 13:53:52 -05:00 |
| 737a58a13c | stable diffusion framework desktop notes | ✓ build-and-push-ddns (1m9s) | 2026-01-08 21:55:59 -05:00 |
8 changed files with 451 additions and 75 deletions


@@ -1,103 +1,297 @@
# Framework Desktop

- [Framework Desktop](#framework-desktop)
  - [BIOS](#bios)
  - [References](#references)
  - [Notes](#notes)
    - [Volume Locations](#volume-locations)
  - [Setup](#setup)
    - [Create the AI user](#create-the-ai-user)
    - [Helper aliases](#helper-aliases)
    - [Create the models dir](#create-the-models-dir)
    - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
    - [Download models](#download-models)
      - [Text models](#text-models)
      - [Image models](#image-models)
    - [Create the systemd-ai pod](#create-the-systemd-ai-pod)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)
    - [Install the update script](#install-the-update-script)
## BIOS

<https://knowledgebase.frame.work/en_us/changing-memory-allocation-amd-ryzen-ai-max-300-series-By1LG5Yrll>

1. Set GPU memory to 512MB
2. Add the following kernel args to /etc/default/grub (sets allocatable VRAM to
   112GB; see the sketch below): `ttm.pages_limit=28027343750 ttm.page_pool_size=28027343750`
3. Update grub: `grub2-mkconfig -o /boot/grub2/grub.cfg`
4. Reboot
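
A minimal sketch of steps 2 and 3, assuming the stock quoted `GRUB_CMDLINE_LINUX=` line in /etc/default/grub (editing the file by hand works just as well):

```bash
# Append the ttm args to the kernel command line, then regenerate the grub config
sudo sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 ttm.pages_limit=28027343750 ttm.page_pool_size=28027343750"/' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# After the reboot, confirm the args made it onto the kernel command line
grep -o 'ttm\.[a-z_]*=[0-9]*' /proc/cmdline
```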
## References

<https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html>

## Notes

### Volume Locations

`~/.local/share/containers/storage/volumes/`
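
To map a named volume back to its on-disk path (handy when chasing disk usage), `podman volume inspect` prints the mountpoint directly; `open-webui-data` here is just the volume used later in these notes:

```bash
# Where does a named volume actually live?
podman volume inspect open-webui-data --format '{{ .Mountpoint }}'
```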
## Setup
### Create the AI user
```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p ~/.config/containers/systemd/
```

Models are big. You'll want some tools to help find large files quickly when space runs out.
### Helper aliases
Add these to your .bashrc:
```bash
# Calculate all folder sizes in current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# Calculate all file sizes in current dir
alias {dufile,duf}='ls -lhSr'

# Restart llama-server / follow logs
alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
alias llama-logs="journalctl --user -fu llama-server"

# Restart the stable diffusion gen and edit servers / follow logs
alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
```
### Create the models dir
```bash
mkdir -p /home/ai/models/{text,image,video}
```
### Install the Hugging Face CLI
<https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started>
```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash
# Login
hf auth login
```
### Download models
#### Text models
<https://huggingface.co/ggml-org/collections>
```bash
# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF
# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF
# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf
# devstral-small-2-24b
mkdir /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf
# ministral-3-14b
mkdir /home/ai/models/text/ministral-3-14b
hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF
# ministral-3-3b-instruct
mkdir /home/ai/models/text/ministral-3-3b-instruct
hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF
# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf
# qwen3-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF
# qwen3-coder-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF
```
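
These downloads run to a few hundred gigabytes, so it's worth checking what actually landed on disk (plain `du`, same idea as the `dud` alias above):

```bash
# Per-model disk usage after the downloads finish
du -h --max-depth 1 /home/ai/models/text | sort -h
```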
#### Image models
```bash
# z-turbo
mkdir /home/ai/models/image/z-turbo
hf download --local-dir /home/ai/models/image/z-turbo leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf
hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf
# flux-1-kontext
mkdir /home/ai/models/image/flux-1-kontext
hf download --local-dir /home/ai/models/image/flux-1-kontext QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/flux-1-kontext black-forest-labs/FLUX.1-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders clip_l.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors
```
### Create the systemd-ai pod
You'll at least want the ai pod and network. Copy `ai.pod` and `ai.network` out
of `quadlets` into `~/.config/containers/systemd`.
Then run `systemctl --user daemon-reload && systemctl --user start ai-pod`
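
A minimal sketch of that sequence, assuming you're copying from this repo's `active/device_framework_desktop` directory on your workstation:

```bash
# Copy just the pod and network quadlets, then generate and start the pod unit
scp active/device_framework_desktop/quadlets/ai.pod \
    active/device_framework_desktop/quadlets/ai.network \
    deskwork-ai:.config/containers/systemd/

ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user start ai-pod.service
podman pod ps   # should show the systemd-ai pod with its published ports
```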
## llama.cpp
<https://github.com/ggml-org/llama.cpp/tree/master/tools/server>
```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
# Vulkan
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .
# Run llama server (Available on port 8000)
# Add `--n-cpu-moe 32` to gpt-oss-120b to keep only a minimal number of experts on the GPU
podman run \
--rm \
--name llama-server-demo \
--pod systemd-ai \
--device=/dev/kfd \
--device=/dev/dri \
-v /home/ai/models/text:/models:z \
localhost/llama-cpp-vulkan:latest \
--port 8000 \
-c 64000 \
-b 64000 \
-ub 500 \
--perf \
--n-gpu-layers all \
--jinja \
--models-max 1 \
--models-dir /models
```
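
Once the container is up, llama-server speaks the usual OpenAI-style HTTP API. A quick smoke test from the host (the model name is whatever `/v1/models` reports for your models dir; `gpt-oss-20b` is just an example):

```bash
# List the models llama-server found under /models
curl -s http://localhost:8000/v1/models

# Small chat completion against one of them
curl -s http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-oss-20b", "messages": [{"role": "user", "content": "Say hello in five words."}]}'
```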
## stable-diffusion.cpp

Server: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/server>
CLI: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/cli>

```bash
git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
```
```bash
# z-turbo
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
--vae /models/image/z-turbo/ae.safetensors \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
-v \
-H 1024 \
-W 1024 \
--seed -1 \
--steps 8 \
--vae-conv-direct \
-o /output/output.png \
-p "A photorealistic dragon"

# Edit with flux kontext
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
--vae /models/image/flux-1-kontext/ae.safetensors \
--clip_l /models/image/flux-1-kontext/clip_l.safetensors \
--t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
--cfg-scale 1.0 \
--sampling-method euler \
--seed -1 \
--steps 28 \
--vae-conv-direct \
-v \
-H 512 \
-W 512 \
-o /output/output.png \
-r /output/everquest_logo.png \
-p "Add the text 'EverQuest'"
```
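
If you end up re-running generations a lot, wrapping the z-turbo command in a small script keeps the podman invocation in one place. This is a hypothetical helper (`sd-gen.sh`; the name and argument handling are mine), just re-packaging the generate command above:

```bash
#!/bin/bash
# sd-gen.sh -- hypothetical wrapper around the z-turbo generate command above.
# Usage: ./sd-gen.sh "A photorealistic dragon" [output.png]
set -euo pipefail

PROMPT="${1:?usage: $0 \"prompt\" [output-file]}"
OUTFILE="${2:-output.png}"

podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
  --vae /models/image/z-turbo/ae.safetensors \
  --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  -H 1024 -W 1024 \
  --seed -1 \
  --steps 8 \
  --vae-conv-direct \
  -o "/output/${OUTFILE}" \
  -p "${PROMPT}"
```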
## open-webui

```bash
mkdir /home/ai/.env
# Create a file called open-webui-env with `WEBUI_SECRET_KEY="some-random-key"`
scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/

# Will be available on port 8080
podman run \
-d \
--pod systemd-ai \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```
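
One way to produce that env file before copying it over; the key just needs to be a stable random string, and `openssl rand` is only a convenient generator:

```bash
# Generate a random secret locally, then ship it to the ai user's env dir
echo "WEBUI_SECRET_KEY=\"$(openssl rand -hex 32)\"" > open-webui-env
scp open-webui-env deskwork-ai:.env/
```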
## Install the whole thing with quadlets (TM)
```bash
# Installs and runs all services in `quadlets/`
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-pod.service
```
Note, all services will be available at `host.containers.internal`. So llama.cpp
will be up at `http://host.containers.internal:8000`.
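
A quick way to confirm that forwarding works, run as the ai user on the host (this assumes the default podman network setup, where containers resolve `host.containers.internal` to the host):

```bash
# Hit llama.cpp's health endpoint from inside a throwaway container
podman run --rm docker.io/curlimages/curl:latest \
  -s http://host.containers.internal:8000/health
```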
### Install the update script
```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```


@@ -0,0 +1,2 @@
[Network]
IPv6=true


@@ -0,0 +1,8 @@
[Pod]
Network=ai.network
# llama.cpp
PublishPort=8000:8000/tcp
# open-webui
PublishPort=8080:8080/tcp
# stable-diffusion.cpp
PublishPort=1234:1234/tcp


@@ -0,0 +1,44 @@
[Unit]
Description=A Llama CPP Server Running GPT OSS 120b
[Container]
# Shared AI pod
Pod=ai.pod
# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest
# Downloaded models volume
Volume=/home/ai/models/text:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Server command
Exec=--port 8000 \
-c 48000 \
-b 48000 \
-ub 500 \
--perf \
--n-gpu-layers all \
--jinja \
--models-max 1 \
--models-dir /models
# Health Check
HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8000/props?model=gpt-oss-120b || exit 1
HealthInterval=10s
HealthRetries=3
HealthStartPeriod=10s
HealthTimeout=30s
HealthOnFailure=kill
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
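
Assuming this unit is installed as `llama-server.container` (the service name the helper aliases above expect), bringing it up and tailing it looks like:

```bash
# Regenerate units from the quadlet files, start the server, follow its logs
systemctl --user daemon-reload
systemctl --user start llama-server.service
journalctl --user -fu llama-server
```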


@@ -0,0 +1,24 @@
[Unit]
Description=An Open Webui Frontend for Local AI Services
[Container]
# Shared AI pod
Pod=ai.pod
# Open Webui base image
Image=ghcr.io/open-webui/open-webui:main
# Nothing too complicated here. Open Webui will basically configure itself.
Volume=open-webui-data:/app/backend/data
# WEBUI_SECRET_KEY is required to prevent logout on Restart
EnvironmentFile=/home/ai/.env/open-webui-env
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target


@@ -0,0 +1,42 @@
[Unit]
Description=A Stable Diffusion CPP Server for Editing Images
[Container]
# Shared AI pod
Pod=ai.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
# Shared models directory
Volume=/home/ai/models:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Override entrypoint to use server
Entrypoint=/sd-server
# Server args
Exec=-l 0.0.0.0 \
--listen-port 1235 \
--diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
--vae /models/image/flux-1-kontext/ae.safetensors \
--clip_l /models/image/flux-1-kontext/clip_l.safetensors \
--t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
--cfg-scale 1.0 \
--sampling-method euler \
--vae-conv-direct \
--seed -1 \
--steps 28 \
-v
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target


@@ -0,0 +1,42 @@
[Unit]
Description=A Stable Diffusion CPP Server for Generating Images
[Container]
# Shared AI pod
Pod=ai.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
# Shared models directory
Volume=/home/ai/models:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Override entrypoint to use server
Entrypoint=/sd-server
# Server args
Exec=-l 0.0.0.0 \
--listen-port 1234 \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
--vae /models/image/z-turbo/ae.safetensors \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
--vae-conv-direct \
-v \
--seed -1 \
--steps 8
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target


@@ -0,0 +1,20 @@
#!/bin/bash
set -x
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
echo "Updates stable-diffusion.cpp, llama.cpp, and open-webui"
cd /home/ai/llama.cpp
git pull
podman build -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest -f .devops/vulkan.Dockerfile .
cd /home/ai/stable-diffusion.cpp
git pull
git submodule update --init --recursive
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
podman image pull ghcr.io/open-webui/open-webui:main
systemctl --user restart ai-pod
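
Day to day the whole refresh can be kicked off remotely in one line, using the same `deskwork-ai` SSH alias as the rest of these notes:

```bash
ssh deskwork-ai './update-script.sh'
```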