Compare commits: b65ef9cbb7...main

8 commits:

- d4571c9b70
- 4c0a263d50
- 70259d9542
- 4f3102a2ff
- ed65f8924d
- 737a58a13c
- a2cef18efe
- 1c245a593a

297 active/device_framework_desktop/framework_desktop.md Normal file

@@ -0,0 +1,297 @@
# Framework Desktop

- [Framework Desktop](#framework-desktop)
  - [BIOS](#bios)
  - [References](#references)
  - [Notes](#notes)
    - [Volume Locations](#volume-locations)
  - [Setup](#setup)
    - [Create the AI user](#create-the-ai-user)
    - [Helper aliases](#helper-aliases)
    - [Create the models dir](#create-the-models-dir)
    - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
    - [Download models](#download-models)
      - [Text models](#text-models)
      - [Image models](#image-models)
    - [Create the systemd-ai pod](#create-the-systemd-ai-pod)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)
    - [Install the update script](#install-the-update-script)
## BIOS

<https://knowledgebase.frame.work/en_us/changing-memory-allocation-amd-ryzen-ai-max-300-series-By1LG5Yrll>

1. Set GPU memory to 512MB

## References

<https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html>
## Notes

### Volume Locations

Rootless podman named volumes live under `~/.local/share/containers/storage/volumes/`.
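A couple of standard podman commands make it easy to check what's actually in there; a minimal sketch (the `open-webui-data` volume comes from the quadlets below):

```bash
# List named volumes owned by the ai user
podman volume ls

# Show where a specific volume lives on disk
podman volume inspect open-webui-data --format '{{ .Mountpoint }}'
```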
## Setup

### Create the AI user

```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p ~/.config/containers/systemd/
```

Models are big. You'll want some tools to help find large files quickly when space runs out.
### Helper aliases

Add these to your `.bashrc`:

```bash
# Calculate all folder sizes in current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'

# Calculate all file sizes in current dir
alias {dufile,duf}='ls -lhSr'

# Restart llama-server / follow logs
alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
alias llama-logs="journalctl --user -fu llama-server"

# Restart stable diffusion gen and edit server / follow logs
alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
```
### Create the models dir

```bash
mkdir -p /home/ai/models/{text,image,video}
```

### Install the Hugging Face CLI

<https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started>

```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash

# Login
hf auth login
```
### Download models

#### Text models

<https://huggingface.co/ggml-org/collections>

```bash
# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF

# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF

# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf

# devstral-small-2-24b
mkdir /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf

# ministral-3-14b
mkdir /home/ai/models/text/ministral-3-14b
hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF

# ministral-3-3b-instruct
mkdir /home/ai/models/text/ministral-3-3b-instruct
hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF

# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf

# qwen3-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF

# qwen3-coder-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF
```
#### Image models

```bash
# z-turbo
mkdir /home/ai/models/image/z-turbo
hf download --local-dir /home/ai/models/image/z-turbo leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf
hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf

# flux-1-kontext
mkdir /home/ai/models/image/flux-1-kontext
hf download --local-dir /home/ai/models/image/flux-1-kontext QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/flux-1-kontext black-forest-labs/FLUX.1-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders clip_l.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors
```
### Create the systemd-ai pod

You'll at least want the ai pod and network. Copy `ai.pod` and `ai.network` out
of `quadlets` into `~/.config/containers/systemd`.

Then run `systemctl --user daemon-reload && systemctl --user start ai-pod`.
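A minimal sketch of those two steps, assuming you are in the directory that contains `quadlets/`:

```bash
# Install just the pod and network quadlets
cp quadlets/ai.pod quadlets/ai.network ~/.config/containers/systemd/

# Generate the units and start the pod
systemctl --user daemon-reload
systemctl --user start ai-pod

# The quadlet-generated pod shows up with the systemd- prefix
podman pod ps
```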
## llama.cpp

<https://github.com/ggml-org/llama.cpp/tree/master/tools/server>

```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .

# Run llama server (available on port 8000)
# Add `--n-cpu-moe 32` for gpt-oss-120b to keep only a minimal number of experts on the GPU
podman run \
    --rm \
    --name llama-server-demo \
    --pod systemd-ai \
    --device=/dev/kfd \
    --device=/dev/dri \
    -v /home/ai/models/text:/models:z \
    localhost/llama-cpp-vulkan:latest \
    --port 8000 \
    -c 64000 \
    -b 64000 \
    -ub 500 \
    --perf \
    --n-gpu-layers all \
    --jinja \
    --models-max 1 \
    --models-dir /models
```
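Once the container is up, a quick sanity check from the host (llama-server exposes a `/health` endpoint alongside its OpenAI-compatible API):

```bash
# Returns a small JSON status once the model has loaded
curl http://localhost:8000/health

# OpenAI-compatible model listing
curl http://localhost:8000/v1/models
```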
## stable-diffusion.cpp

Server: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/server>

CLI: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/cli>

```bash
git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

# Vulkan
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
```

```bash
# z-turbo
podman run --rm \
    -v /home/ai/models:/models:z \
    -v /home/ai/output:/output:z \
    --device /dev/kfd \
    --device /dev/dri \
    localhost/stable-diffusion-cpp:latest \
    --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
    --vae /models/image/z-turbo/ae.safetensors \
    --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
    --cfg-scale 1.0 \
    -v \
    -H 1024 \
    -W 1024 \
    --seed -1 \
    --steps 8 \
    --vae-conv-direct \
    -o /output/output.png \
    -p "A photorealistic dragon"

# Edit with flux kontext
podman run --rm \
    -v /home/ai/models:/models:z \
    -v /home/ai/output:/output:z \
    --device /dev/kfd \
    --device /dev/dri \
    localhost/stable-diffusion-cpp:latest \
    --diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
    --vae /models/image/flux-1-kontext/ae.safetensors \
    --clip_l /models/image/flux-1-kontext/clip_l.safetensors \
    --t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
    --cfg-scale 1.0 \
    --sampling-method euler \
    --seed -1 \
    --steps 28 \
    --vae-conv-direct \
    -v \
    -H 512 \
    -W 512 \
    -o /output/output.png \
    -r /output/everquest_logo.png \
    -p "Add the text 'EverQuest'"
```
## open-webui

```bash
mkdir /home/ai/.env
# Create a file called open-webui-env containing `WEBUI_SECRET_KEY="some-random-key"`
scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/

# Will be available on port 8080
podman run \
    -d \
    --pod systemd-ai \
    -v open-webui:/app/backend/data \
    --name open-webui \
    --restart always \
    ghcr.io/open-webui/open-webui:main
```
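The `open-webui-env` file referenced above can be generated on the workstation before the `scp`; a minimal sketch using openssl for the random key:

```bash
# Write a secrets file the quadlet's EnvironmentFile= can consume
mkdir -p active/device_framework_desktop/secrets
echo "WEBUI_SECRET_KEY=\"$(openssl rand -hex 32)\"" \
  > active/device_framework_desktop/secrets/open-webui-env
```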
## Install the whole thing with quadlets (TM)

```bash
# Installs and runs all services in `quadlets/`
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-pod.service
```

Note: all services will be available at `host.containers.internal`. So llama.cpp
will be up at `http://host.containers.internal:8000`.
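For example, a minimal chat completion against the quadlet-managed llama.cpp server (the model name here is an assumption; use whatever `/v1/models` reports):

```bash
# From another container use host.containers.internal; from the host itself, localhost:8000 also works
curl http://host.containers.internal:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "gpt-oss-120b",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```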
### Install the update script

```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```
2 active/device_framework_desktop/quadlets/ai.network Normal file

@@ -0,0 +1,2 @@
[Network]
IPv6=true

8 active/device_framework_desktop/quadlets/ai.pod Normal file

@@ -0,0 +1,8 @@
[Pod]
Network=ai.network
# llama.cpp
PublishPort=8000:8000/tcp
# open-webui
PublishPort=8080:8080/tcp
# stable-diffusion.cpp
PublishPort=1234:1234/tcp

@@ -0,0 +1,44 @@
[Unit]
Description=A Llama CPP Server Running GPT OSS 120b

[Container]
# Shared AI pod
Pod=ai.pod

# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest

# Downloaded models volume
Volume=/home/ai/models/text:/models:z

# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri

# Server command
Exec=--port 8000 \
    -c 48000 \
    -b 48000 \
    -ub 500 \
    --perf \
    --n-gpu-layers all \
    --jinja \
    --models-max 1 \
    --models-dir /models

# Health Check
HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8000/props?model=gpt-oss-120b || exit 1
HealthInterval=10s
HealthRetries=3
HealthStartPeriod=10s
HealthTimeout=30s
HealthOnFailure=kill

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

@@ -0,0 +1,24 @@
[Unit]
Description=An Open Webui Frontend for Local AI Services

[Container]
# Shared AI pod
Pod=ai.pod

# Open Webui base image
Image=ghcr.io/open-webui/open-webui:main

# Nothing too complicated here. Open Webui will basically configure itself.
Volume=open-webui-data:/app/backend/data

# WEBUI_SECRET_KEY is required to prevent logout on restart
EnvironmentFile=/home/ai/.env/open-webui-env

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

@@ -0,0 +1,42 @@
[Unit]
Description=A Stable Diffusion CPP Server for Editing Images

[Container]
# Shared AI pod
Pod=ai.pod

# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest

# Shared models directory
Volume=/home/ai/models:/models:z

# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri

# Override entrypoint to use server
Entrypoint=/sd-server

# Server args
Exec=-l 0.0.0.0 \
    --listen-port 1235 \
    --diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
    --vae /models/image/flux-1-kontext/ae.safetensors \
    --clip_l /models/image/flux-1-kontext/clip_l.safetensors \
    --t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
    --cfg-scale 1.0 \
    --sampling-method euler \
    --vae-conv-direct \
    --seed -1 \
    --steps 28 \
    -v

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
@@ -0,0 +1,40 @@
[Unit]
Description=A Stable Diffusion CPP Server for Generating Images

[Container]
# Shared AI pod
Pod=ai.pod

# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest

# Shared models directory
Volume=/home/ai/models:/models:z

# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri

# Override entrypoint to use server
Entrypoint=/sd-server

# Server args
Exec=-l 0.0.0.0 \
    --listen-port 1234 \
    --diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
    --vae /models/image/z-turbo/ae.safetensors \
    --llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
    --cfg-scale 1.0 \
    --vae-conv-direct \
    -v \
    --seed -1 \
    --steps 8

[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900

[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
20 active/device_framework_desktop/update-script.sh Normal file

@@ -0,0 +1,20 @@
#!/bin/bash

set -x

export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")

echo "Updates stable-diffusion.cpp, llama.cpp, and open-webui"

cd /home/ai/llama.cpp
git pull
podman build -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest -f .devops/vulkan.Dockerfile .

cd /home/ai/stable-diffusion.cpp
git pull
git submodule update --init --recursive
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .

podman image pull ghcr.io/open-webui/open-webui:main

systemctl --user restart ai-pod
@@ -9,16 +9,11 @@ groups = []
 hostname = "f43-base"
 
 [[customizations.disk.partitions]]
-type = "btrfs"
-minsize = "32 GiB"
-
-[[customizations.disk.partitions.subvolumes]]
-name = "root"
+type = "plain"
+label = "root"
 mountpoint = "/"
-
-[[customizations.disk.partitions.subvolumes]]
-name = "home"
-mountpoint = "/home"
+fs_type = "ext4"
+minsize = "128 GiB"
 
 [customizations.timezone]
 timezone = "America/New_York"
@@ -70,8 +65,8 @@ data = """
 [[customizations.files]]
 path = "/home/ducoterra/.inputrc"
 mode = "0644"
-user = "root"
-group = "root"
+user = "ducoterra"
+group = "ducoterra"
 data = """
 "\\C-h": backward-kill-word
 """
@@ -33,18 +33,21 @@ sudo usermod -aG weldr $USER
 
 # Optional: cockpit dependency
 dnf install -y cockpit-composer
+
+# Optional: allow security profiles
+dnf install openscap-scanner scap-security-guide
 ```
 
 ## Building Images
 
 1. Create a toml file describing your image
 
-See `fedora-42-base.toml` for an example.
+See `fedora42-base.toml` for an example.
 
 2. Push the toml to composer
 
 ```bash
-composer-cli blueprints push active/software_osbuild/fedora-42-base.toml
+composer-cli blueprints push active/software_osbuild/fedora42-base.toml
 
 # List blueprints
 composer-cli blueprints list
@@ -57,13 +60,13 @@ dnf install -y cockpit-composer
 composer-cli compose types
 
 # Build the image
-composer-cli compose start fedora-42-base qcow2
+composer-cli compose start fedora42-base qcow2
 
 # Check status
 watch composer-cli compose status
 
 # Download logs if error
-cd /tmp && composer-cli compose logs f91a12b6-01fd-4f94-91cc-9d5fb68b8129
+cd /tmp && composer-cli compose logs 52963ac9-b680-4def-baaf-252845f0e3fe
 
 # Delete failed images
 composer-cli compose list failed -j | jq '.[].body.failed.[]?.id' | xargs -I '%' composer-cli compose delete '%'
@@ -79,16 +82,25 @@ dnf install -y cockpit-composer
 composer-cli compose list finished
 
 # Download the image
-composer-cli compose image --filename /var/lib/libvirt/images/fedora-42-base.qcow2 image-uuid
+composer-cli compose image --filename active/software_osbuild/secrets/fedora43-base.qcow2 image-uuid
 
 # Test with qemu
-qemu-kvm --name test-fedora-42-base -m 4096 -hda ~/Downloads/fedora-42-base.qcow2
+virt-install \
+    --name "fedora43-base" \
+    --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
+    --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
+    --ram=8192 \
+    --os-variant=fedora41 \
+    --network bridge:virbr0 \
+    --graphics none \
+    --console pty,target.type=virtio \
+    --import --disk "path=active/software_osbuild/secrets/fedora43-base.qcow2,bus=virtio"
 ```
 
 ### Image Build and Watch One Liner
 
 ```bash
-composer-cli blueprints push active/software_osbuild/fedora-43-base.toml && \
-composer-cli compose start fedora-43-base qcow2 && \
+composer-cli blueprints push active/software_osbuild/fedora43-base.toml && \
+composer-cli compose start fedora43-base qcow2 && \
 watch composer-cli compose status
 ```