# Framework Desktop

- [Framework Desktop](#framework-desktop)
  - [BIOS](#bios)
  - [References](#references)
  - [Notes](#notes)
    - [Volume Locations](#volume-locations)
  - [Setup](#setup)
    - [Create the AI user](#create-the-ai-user)
    - [Create the models dir](#create-the-models-dir)
    - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
    - [Download models](#download-models)
      - [Text models](#text-models)
      - [Image models](#image-models)
    - [Create the systemd-ai pod](#create-the-systemd-ai-pod)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)

## BIOS

1. Set GPU memory to 96GB.
2. Add the following kernel args to `/etc/default/grub` to raise allocatable VRAM to 112GB: `ttm.pages_limit=28027343750 ttm.page_pool_size=28027343750`
3. Update grub: `grub2-mkconfig -o /boot/grub2/grub.cfg`
4. Reboot.

## References

## Notes

### Volume Locations

Named podman volumes live under `~/.local/share/containers/storage/volumes/`.

## Setup

### Create the AI user

```bash
# Create your local ai user. This is the user you'll launch podman processes from.
useradd -m ai

# Let the ai user's services keep running without an active login session
loginctl enable-linger ai

# Switch to the ai user and create the quadlet directory
su -l ai
mkdir -p ~/.config/containers/systemd/
```

Models are big. You'll want some tools to help find large files quickly when space runs out. Add this to your `~/.bashrc`:

```bash
# Calculate all folder sizes in the current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# List all files in the current dir by size, largest last
alias {dufile,duf}='ls -lhSr'
```

### Create the models dir

```bash
mkdir -p /home/ai/models/{text,image,video}
```

### Install the Hugging Face CLI

```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash

# Login
hf auth login
```

### Download models

#### Text models

```bash
# gpt-oss-120b
mkdir -p /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF

# devstral-2-123b
mkdir -p /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf

# devstral-small-2-24b
mkdir -p /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf
```

#### Image models

### Create the systemd-ai pod

You'll at least want the ai pod and network. Copy `ai.pod` and `ai.network` out of `quadlets` into `~/.config/containers/systemd`, then run `systemctl --user daemon-reload && systemctl --user start ai-pod`.

## llama.cpp

```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
podman build -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest -f .devops/vulkan.Dockerfile .
```
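Before pointing the server at a model, it's worth a quick sanity check that the image built correctly. A minimal sketch: it assumes the image's default entrypoint is `llama-server` (the run command below relies on the same assumption), so `--version` should print build info without loading a model.

```bash
# Sanity check (sketch): print llama.cpp build info from the fresh image.
# Assumes the image entrypoint is llama-server, as the run command below does.
podman run --rm localhost/llama-cpp-vulkan:latest --version
```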
```bash
# Run llama server (available on port 8000)
# Add `--n-cpu-moe 32` when running gpt-oss-120b to keep most MoE expert
# weights on the CPU (only a minimal number of experts stay in GPU memory)
podman run \
  --rm \
  --name llama-server-demo \
  --pod systemd-ai \
  --device=/dev/kfd \
  --device=/dev/dri \
  -v /home/ai/models/text:/models:z \
  localhost/llama-cpp-vulkan:latest \
  --port 8000 \
  -c 0 \
  -b 2048 \
  -ub 2048 \
  --perf \
  --n-gpu-layers all \
  --jinja \
  --models-max 1 \
  --models-dir /models
```

## stable-diffusion.cpp

```bash
# Build the stable-diffusion.cpp container image
git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
```

```bash
# z-turbo
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
  --vae /models/image/z-turbo/ae.safetensors \
  --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  -v \
  -H 512 \
  -W 1024 \
  --seed -1 \
  --steps 8 \
  -o /output/output.png \
  -p "A watercolor dragon with flowing ink lines, pastel palette, white paper background, soft brush strokes, high-resolution"

# Edit with flux kontext
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
  --vae /models/image/flux-1-kontext/ae.safetensors \
  --clip_l /models/image/flux-1-kontext/clip_l.safetensors \
  --t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
  --cfg-scale 1.0 \
  --sampling-method euler \
  --seed -1 \
  --steps 20 \
  -H 512 \
  -W 1024 \
  -r /output/everquest_logo.png \
  -p "change 'EverQuest' to 'EverSteak'" \
  -o /output/output.png
```

## open-webui

```bash
# Will be available on port 8080
podman run \
  -d \
  --pod systemd-ai \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

## Install the whole thing with quadlets (TM)

```bash
# Copy the quadlet files to the target host
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/

# Reload and restart on the host
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-pod.service
```
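After the restart, a few quick checks confirm the pod actually came up. A minimal sketch, run as the `ai` user on the target host; it assumes the pod publishes ports 8000 and 8080 as described above, and uses llama-server's built-in `/health` endpoint.

```bash
# Verify the quadlet-managed pod and its containers are running
systemctl --user status ai-pod.service
podman pod ps
podman ps --pod

# llama-server exposes a /health endpoint; open-webui serves its UI on 8080.
# Assumes the pod publishes both ports on localhost.
curl -s http://localhost:8000/health
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8080
```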