# Framework Desktop

- [Framework Desktop](#framework-desktop)
  - [BIOS](#bios)
  - [References](#references)
  - [Notes](#notes)
    - [Volume Locations](#volume-locations)
  - [Setup](#setup)
    - [Create the AI user](#create-the-ai-user)
    - [Helper aliases](#helper-aliases)
    - [Create the models dir](#create-the-models-dir)
    - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
    - [Download models](#download-models)
      - [Text models](#text-models)
      - [Image models](#image-models)
    - [Create the systemd-ai pod](#create-the-systemd-ai-pod)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)
    - [Install the update script](#install-the-update-script)

## BIOS

1. Set GPU memory to 512MB

## References

## Notes

### Volume Locations

`~/.local/share/containers/storage/volumes/`

## Setup

### Create the AI user

```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p ~/.config/containers/systemd/
```

Models are big. You'll want some tools to help find large files quickly when space runs out.

### Helper aliases

Add these to your `.bashrc`:

```bash
# Calculate all folder sizes in the current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# List all file sizes in the current dir
alias {dufile,duf}='ls -lhSr'

# Restart llama-server / follow logs
alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
alias llama-logs="journalctl --user -fu llama-server"

# Restart the stable diffusion gen and edit servers / follow logs
alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
```

### Create the models dir

```bash
mkdir -p /home/ai/models/{text,image,video}
```

### Install the Hugging Face CLI

```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash

# Login
hf auth login
```

### Download models

#### Text models

```bash
# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF

# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF

# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf

# devstral-small-2-24b
mkdir /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf

# ministral-3-14b
mkdir /home/ai/models/text/ministral-3-14b
hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF

# ministral-3-3b-instruct
mkdir /home/ai/models/text/ministral-3-3b-instruct
hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF

# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf

# qwen3-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF

# qwen3-coder-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF
```

#### Image models

```bash
# z-turbo
mkdir /home/ai/models/image/z-turbo
hf download --local-dir /home/ai/models/image/z-turbo leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf
hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf

# flux-1-kontext
mkdir /home/ai/models/image/flux-1-kontext
hf download --local-dir /home/ai/models/image/flux-1-kontext QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/flux-1-kontext black-forest-labs/FLUX.1-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders clip_l.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors
```

### Create the systemd-ai pod

You'll at least want the ai pod and network. Copy `ai.pod` and `ai.network` out of `quadlets` into `~/.config/containers/systemd`, then run `systemctl --user daemon-reload && systemctl --user start ai-pod`.
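If you just want to see the shape of those two files, below is a minimal, hypothetical sketch written as heredocs. The `[Pod]` and `[Network]` keys are standard quadlet options, but the published ports and the `[Install]` target are assumptions; the files shipped in `quadlets/` are the source of truth and should be preferred.

```bash
# Hypothetical minimal quadlets -- the real files in quadlets/ are authoritative.
cat > ~/.config/containers/systemd/ai.network <<'EOF'
[Network]
# Default network options are fine for a single-host setup.
EOF

cat > ~/.config/containers/systemd/ai.pod <<'EOF'
[Pod]
# The pod name defaults to systemd-ai, matching the --pod flags used below.
Network=ai.network
# Assumed published ports: llama.cpp (8000) and open-webui (8080).
PublishPort=8000:8000
PublishPort=8080:8080

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload && systemctl --user start ai-pod
```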
## llama.cpp

```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .

# Run llama server (available on port 8000)
# Add `--n-cpu-moe 32` to gpt-oss-120b to keep a minimal number of experts on the GPU
podman run \
  --rm \
  --name llama-server-demo \
  --pod systemd-ai \
  --device=/dev/kfd \
  --device=/dev/dri \
  -v /home/ai/models/text:/models:z \
  localhost/llama-cpp-vulkan:latest \
  --port 8000 \
  -c 64000 \
  -b 64000 \
  -ub 500 \
  --perf \
  --n-gpu-layers all \
  --jinja \
  --models-max 1 \
  --models-dir /models
```
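Once llama-server is up, a quick smoke test is to hit its OpenAI-compatible HTTP API from the host. This assumes the ai pod publishes port 8000 to the host; the `model` value below is an assumption, so use whatever `/v1/models` actually reports:

```bash
# See which models the server knows about
curl http://localhost:8000/v1/models

# Basic chat completion (model name is an assumption -- pick one from /v1/models)
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-oss-20b",
        "messages": [{"role": "user", "content": "Say hello in one short sentence."}]
      }'
```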
## stable-diffusion.cpp

Server:

CLI:

```bash
git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
```

```bash
# z-turbo
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
  --vae /models/image/z-turbo/ae.safetensors \
  --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
  --cfg-scale 1.0 \
  -v \
  -H 1024 \
  -W 1024 \
  --seed -1 \
  --steps 8 \
  --vae-conv-direct \
  -o /output/output.png \
  -p "A photorealistic dragon"

# Edit with flux kontext
podman run --rm \
  -v /home/ai/models:/models:z \
  -v /home/ai/output:/output:z \
  --device /dev/kfd \
  --device /dev/dri \
  localhost/stable-diffusion-cpp:latest \
  --diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
  --vae /models/image/flux-1-kontext/ae.safetensors \
  --clip_l /models/image/flux-1-kontext/clip_l.safetensors \
  --t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
  --cfg-scale 1.0 \
  --sampling-method euler \
  --seed -1 \
  --steps 28 \
  --vae-conv-direct \
  -v \
  -H 512 \
  -W 512 \
  -o /output/output.png \
  -r /output/everquest_logo.png \
  -p "Add the text 'EverQuest'"
```

## open-webui

```bash
mkdir /home/ai/.env

# Create a file called open-webui-env with `WEBUI_SECRET_KEY="some-random-key"`
scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/

# Will be available on port 8080
podman run \
  -d \
  --pod systemd-ai \
  --env-file /home/ai/.env/open-webui-env \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```

## Install the whole thing with quadlets (TM)

```bash
# Installs and runs all services in `quadlets/`
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-pod.service
```

Note: all services will be available at `host.containers.internal`, so llama.cpp will be up at `http://host.containers.internal:8000`.

### Install the update script

```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```
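`update-script.sh` itself isn't reproduced here. As a rough sketch of the three steps it performs, based only on the comments above (clone paths, image tags, and the service name are assumptions; adjust to your layout):

```bash
#!/usr/bin/env bash
# Hypothetical update-script.sh -- paths, tags, and service names are assumptions.
set -euo pipefail

BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# 1. Build the latest llama.cpp (Vulkan)
git clone https://github.com/ggml-org/llama.cpp.git /tmp/llama.cpp || git -C /tmp/llama.cpp pull
podman build -f /tmp/llama.cpp/.devops/vulkan.Dockerfile \
  -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest /tmp/llama.cpp

#    ...and the latest stable-diffusion.cpp (Vulkan)
git clone --recursive https://github.com/leejet/stable-diffusion.cpp.git /tmp/stable-diffusion.cpp \
  || git -C /tmp/stable-diffusion.cpp pull --recurse-submodules
podman build -f /tmp/stable-diffusion.cpp/Dockerfile.vulkan \
  -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest /tmp/stable-diffusion.cpp

# 2. Pull the latest open-webui
podman pull ghcr.io/open-webui/open-webui:main

# 3. Restart everything under the ai pod
systemctl --user daemon-reload
systemctl --user restart ai-pod.service
```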