Compare commits

..

27 Commits

Author SHA1 Message Date
d4571c9b70 add qwen3 base and coder
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m21s
2026-01-27 19:32:24 -05:00
4c0a263d50 fix vulkan 4gb limit
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m0s
2026-01-21 13:33:33 -05:00
70259d9542 more framework desktop local ai fixes
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m12s
2026-01-20 11:22:59 -05:00
4f3102a2ff local ai checkpoint 2026-01-19 20:50:05 -05:00
ed65f8924d add initial framework desktop config
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m13s
2026-01-13 13:53:52 -05:00
737a58a13c stable diffusion framework desktop notes
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m9s
2026-01-08 21:55:59 -05:00
a2cef18efe Add framework desktop LLM instructions
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m9s
2026-01-07 09:28:29 -05:00
1c245a593a image_builder checkpoint 2026-01-05 10:01:11 -05:00
b65ef9cbb7 initial smb instructions
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m10s
2025-12-17 12:32:26 -05:00
ea3e8f9c10 add luks and virsh notes from truenas migration 2025-12-17 10:04:59 -05:00
b5aecf1565 add btrfs notes from server setup 2025-12-16 21:40:37 -05:00
380d8f8e48 get cloud-init working consistently
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 2m21s
2025-11-26 13:54:01 -05:00
07a297f818 Add better intro to README
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 59s
2025-11-21 13:43:23 -05:00
d7224b038b switch to custom caddy with route53 dns support
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 2m21s
2025-11-20 17:58:53 -05:00
fc62219db7 update jellyfin to 10.11.3 2025-11-20 07:24:26 -05:00
2ebd97c345 update gitea to 2.5.1 2025-11-20 07:06:18 -05:00
270e86bfd0 clean up templates 2025-11-20 06:57:11 -05:00
7305e3a35b add cloud-init notes for amazon linux on qemu 2025-11-20 06:56:56 -05:00
aabbd8286f update immich to 2.3.1 2025-11-20 06:56:27 -05:00
37f7d442a1 add missing return 2025-11-20 06:56:09 -05:00
3ff805fa39 clamav don't alert on max size
All checks were successful
Podman DDNS Image / build-and-push-ddns (push) Successful in 1m33s
2025-11-18 10:24:53 -05:00
1ae62e70ed move systemd prefixes to software prefixes 2025-11-18 10:01:18 -05:00
91f4687c07 add esphome display notes 2025-11-18 09:57:49 -05:00
dc2df62d04 add borg repo update notes 2025-11-18 09:57:14 -05:00
b75aac76c2 add firewalld inspection notes 2025-11-18 09:57:00 -05:00
5161dced6e add clamav ignore signatures notes 2025-11-18 09:56:44 -05:00
d9ed144578 add lvfs-testing framework 16 update notes 2025-11-18 09:56:08 -05:00
65 changed files with 1444 additions and 407 deletions

View File

@@ -0,0 +1,30 @@
name: Podman DDNS Image
run-name: Build and Push the Custom Caddy Image with Route53 DNS Certbot
on:
push:
paths:
- active/podman_caddy/**
- .gitea/workflows/caddy.yaml
schedule:
- cron: '@daily'
jobs:
build-and-push-ddns:
runs-on: ubuntu-latest
if: gitea.ref == 'refs/heads/main'
steps:
- name: Check out repository code
uses: actions/checkout@v4
- name: Login to Gitea Registry
uses: docker/login-action@v2
with:
registry: gitea.reeseapps.com
username: ${{ secrets.REGISTRY_USERNAME }}
password: ${{ secrets.REGISTRY_PASSWORD }}
- name: Build and push Docker image
uses: https://github.com/docker/build-push-action@v5
with:
context: ${{ gitea.workspace }}/active/podman_caddy
file: ${{ gitea.workspace }}/active/podman_caddy/Containerfile
push: true
tags: "gitea.reeseapps.com/services/caddy:latest,gitea.reeseapps.com/services/caddy:${{gitea.sha}}"
no-cache: true

View File

@@ -1,13 +1,34 @@
# Homelab
A project to store homelab stuff.
Welcome to my homelab!
Just here for the Arch distoolbox?
This repo is an in-flux collection of my personal notes, docs, and tutorials of
things I find interesting and self-host.
[Arch Distoolbox](active/software_distoolbox/distoolbox.md)
Take a look around!
![Arch Toolbox Status](https://gitea.reeseapps.com/services/homelab/actions/workflows/distoolbox.yaml/badge.svg?branch=main)
- "Active" projects (/active) are in use today and generally fall into these
categories:
- `aws_` is for aws notes
- `device_` is for hardware
- `kubernetes_` is for helm charts or other kubernetes hosted software
- `os_` is for operating system setup guides and notes
- `podman_` is for containerized projects
- `software_` is for cli tools, projects without a specific way to host them,
or other misfits
All active projects will have a markdown file named after the project. This is
for quick access via shortcuts like `ctrl + p` in vscode. For example, I want
to check my notes for `virsh` so I would type `ctrl + p` "virsh" to open
"virsh.md".
"Retired" projects (/retired) are a graveyard of things I didn't want to delete.
"Template" projects (/templates) are quick templates for creating new active
projects with sane defaults.
I keep my GPG and SSH keys in `keys` if you want to add those to your keyring
or give me access to your servers.
## Table of Contents
@@ -18,6 +39,7 @@ Status](https://gitea.reeseapps.com/services/homelab/actions/workflows/distoolbo
- [inputrc](#inputrc)
- ["find ." shortcuts](#find--shortcuts)
- [tmux](#tmux)
- [bash](#bash)
- [SSH Setup](#ssh-setup)
- [Git GPG Commit Signing](#git-gpg-commit-signing)
- [Important Dates and Times](#important-dates-and-times)
@@ -62,6 +84,40 @@ find . -type d -exec chmod 755 {} \;
- Even Vertical Distribution: ctrl + b + alt + 2
- Swap pane order: ctrl + b + : -> swap-pane -t 0
### bash
<https://tecadmin.net/bash-special-variables/>
Here are some handy references for default bash variables
```text
$0 The name of the script being executed.
$1-$9 The first nine command-line arguments.
$# The number of command-line arguments.
$* All command-line arguments as a single string.
$@ All command-line arguments as an array.
$? The exit status of the last executed command.
$$ The process ID of the current shell.
$! The process ID of the last background command.
$- Shows the current shell options or flags.
```
And here are the meanings of the shell options
```text
h Remember the location of commands as they are looked up
i Interactive shell
m Job control is enabled
B Brace expansion is enabled
H History substitution is enabled
```
So to check if you are in an interactive shell:
```bash
[[ $- == *i* ]] && some_command_here
```
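These parameters are easiest to see inside a function, where you control the positional arguments. The function name and output format here are just for illustration:

```bash
# show_args: demonstrate a few special parameters
# $# = argument count, ${1-none} = first argument (or "none"), $* = all args joined
show_args() {
  printf '%s:%s:%s' "$#" "${1-none}" "$*"
}
show_args foo bar   # prints 2:foo:foo bar
```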
## SSH Setup
Generate a key (password protect it!)
@@ -176,12 +232,10 @@ All projects will be prefixed with one of the following categories:
- `device_`
- `os_`
- `cloud_`
- `systemd_`
- `software_`
- `podman_`
- `docker_`
- `kubernetes_`
- `software_`
Note, some projects will be named with just the prefix. These are projects for
configuring the underlying technology. The `podman` project, for example, will
@@ -203,24 +257,15 @@ be prefixed with the cloud's name, not the word "cloud". So AWS services will
be prefixed with `aws_` and azure would be `azure_`. This should make them more
searchable.
`systemd_` projects are designed to be installed with ansible and run via
systemd on a linux VM or other linux hardware.
`software_` projects record configuration for common software agnostic to
operating system or linux flavor.
`podman_` projects are either designed to be run as quadlets or as podman
containers outright.
`docker_` projects are either docker-compose or some form of docker run
command.
`kubernetes_` projects are helm, kustomize, kubectl, or some other kubernetes
compliant deployment.
`software_` projects record configuration for common software agnostic to
operating system or linux flavor.
`security_` projects record security best practices and might reference other
projects, but are usually just best practices and how-tos.
## Active Project Requirements
- [ ] Installation is documented

View File

@@ -25,6 +25,7 @@
- [light](#light)
- [binary sensor](#binary-sensor)
- [lambda](#lambda)
- [Display](#display)
## Install
@@ -177,3 +178,72 @@ assist_pipeline:
> you call any of ESPHomes many APIs directly. For example, here were
> retrieving the current state of the end stop using .state and using it to
> construct our cover state.
### Display
Display pages
```yaml
display:
- platform: st7735
spi_id: spi_lcd
model: "INITR_MINI160X80"
reset_pin: GPIO1
cs_pin: GPIO4
dc_pin: GPIO2
rotation: 270
device_width: 82
device_height: 161
col_start: 0
row_start: 0
eight_bit_color: true
invert_colors: true
use_bgr: true
auto_clear_enabled: true
id: my_display
pages:
- id: page1
lambda: |-
it.print(0, 10, id(font_roboto), "Connecting to");
it.print(0, 30, id(font_roboto), "Home Assistant...");
- id: page2
lambda: |-
it.print(0, 10, id(font_roboto), "Configuring");
it.print(0, 30, id(font_roboto), "sensors...");
- id: page3
lambda: |-
it.print(0, 10, id(font_roboto), "Loading");
it.print(0, 30, id(font_roboto), "important");
it.print(0, 50, id(font_roboto), "update...");
- id: page4
lambda: |-
it.image(0, 0, id(my_image), COLOR_OFF, COLOR_ON);
```
Switch pages
```yaml
interval:
- interval: 5s
then:
- display.page.show_next: my_display
- component.update: my_display
```
Show an image
```yaml
image:
- file: "test_tdongle_image.png"
type: RGB
id: my_image
```
Specify a font
```yaml
font:
- file: "gfonts://Roboto"
id: font_roboto
size: 20
```

View File

@@ -11,3 +11,13 @@ sudo curl -o /etc/udev/rules.d/50-qmk.rules https://raw.githubusercontent.com/qm
sudo udevadm control --reload-rules
sudo udevadm trigger
```
## Beta BIOS Updates
```bash
# With charger attached
sudo fwupdmgr enable-remote lvfs-testing
sudo fwupdmgr refresh --force
sudo fwupdmgr get-updates
sudo fwupdmgr update
```

View File

@@ -0,0 +1,297 @@
# Framework Desktop
- [Framework Desktop](#framework-desktop)
- [BIOS](#bios)
- [References](#references)
- [Notes](#notes)
- [Volume Locations](#volume-locations)
- [Setup](#setup)
- [Create the AI user](#create-the-ai-user)
- [Helper aliases](#helper-aliases)
- [Create the models dir](#create-the-models-dir)
- [Install the Hugging Face CLI](#install-the-hugging-face-cli)
- [Download models](#download-models)
- [Text models](#text-models)
- [Image models](#image-models)
- [Create the systemd-ai pod](#create-the-systemd-ai-pod)
- [llama.cpp](#llamacpp)
- [stable-diffusion.cpp](#stable-diffusioncpp)
- [open-webui](#open-webui)
- [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)
- [Install the update script](#install-the-update-script)
## BIOS
<https://knowledgebase.frame.work/en_us/changing-memory-allocation-amd-ryzen-ai-max-300-series-By1LG5Yrll>
1. Set GPU memory to 512MB
## References
<https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html>
## Notes
### Volume Locations
`~/.local/share/containers/storage/volumes/`
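To inspect what's actually stored there, the standard podman volume commands work from the ai user. The volume name here is an example (it matches the `open-webui-data` volume used later):

```bash
# List rootless volumes and find where one lives on disk
podman volume ls
podman volume inspect open-webui-data --format '{{.Mountpoint}}'
```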
## Setup
### Create the AI user
```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p ~/.config/containers/systemd/
```
Models are big. You'll want some tools to help find large files quickly when space runs out.
### Helper aliases
Add these to your .bashrc:
```bash
# Calculate all folder sizes in current dir
alias {dudir,dud}='du -h --max-depth 1 | sort -h'
# Calculate all file sizes in current dir
alias {dufile,duf}='ls -lhSr'
# Restart llama-server / follow logs
alias llama-reload="systemctl --user daemon-reload && systemctl --user restart llama-server.service"
alias llama-logs="journalctl --user -fu llama-server"
# Restart stable diffusion gen and edit server / follow logs
alias sd-gen-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-gen-server'
alias sd-gen-logs='journalctl --user -xeu stable-diffusion-gen-server'
alias sd-edit-reload='systemctl --user daemon-reload && systemctl --user restart stable-diffusion-edit-server'
alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
```
### Create the models dir
```bash
mkdir -p /home/ai/models/{text,image,video}
```
### Install the Hugging Face CLI
<https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started>
```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash
# Login
hf auth login
```
### Download models
#### Text models
<https://huggingface.co/ggml-org/collections>
```bash
# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-GGUF
# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF
# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00002-of-00002.gguf
# devstral-small-2-24b
mkdir /home/ai/models/text/devstral-small-2-24b
hf download --local-dir /home/ai/models/text/devstral-small-2-24b unsloth/Devstral-Small-2-24B-Instruct-2512-GGUF Devstral-Small-2-24B-Instruct-2512-Q4_K_M.gguf
# ministral-3-14b
mkdir /home/ai/models/text/ministral-3-14b
hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-3-14B-Reasoning-2512-GGUF
# ministral-3-3b-instruct
mkdir /home/ai/models/text/ministral-3-3b-instruct
hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF
# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf
# qwen3-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwen3-30B-A3B-Instruct-2507-Q8_0-GGUF
# qwen3-coder-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF
```
#### Image models
```bash
# z-turbo
mkdir /home/ai/models/image/z-turbo
hf download --local-dir /home/ai/models/image/z-turbo leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf
hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf
# flux-1-kontext
mkdir /home/ai/models/image/flux-1-kontext
hf download --local-dir /home/ai/models/image/flux-1-kontext QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/flux-1-kontext black-forest-labs/FLUX.1-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders clip_l.safetensors
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors
```
### Create the systemd-ai pod
You'll at least want the ai pod and network. Copy `ai.pod` and `ai.network` out
of `quadlets` into `~/.config/containers/systemd`.
Then run `systemctl --user daemon-reload && systemctl --user start ai-pod`
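As a one-shot, assuming you're in this repo's `active/device_framework_desktop` directory:

```bash
# Install just the pod and network units, then start the pod
cp quadlets/ai.pod quadlets/ai.network ~/.config/containers/systemd/
systemctl --user daemon-reload
systemctl --user start ai-pod.service
```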
## llama.cpp
<https://github.com/ggml-org/llama.cpp/tree/master/tools/server>
```bash
# Build the llama.cpp container image
git clone https://github.com/ggml-org/llama.cpp.git
cd llama.cpp
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
# Vulkan
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .
# Run llama server (Available on port 8000)
# For gpt-oss-120b, add `--n-cpu-moe 32` to keep most MoE expert weights on the CPU
podman run \
--rm \
--name llama-server-demo \
--pod systemd-ai \
--device=/dev/kfd \
--device=/dev/dri \
-v /home/ai/models/text:/models:z \
localhost/llama-cpp-vulkan:latest \
--port 8000 \
-c 64000 \
-b 64000 \
-ub 500 \
--perf \
--n-gpu-layers all \
--jinja \
--models-max 1 \
--models-dir /models
```
## stable-diffusion.cpp
Server: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/server>
CLI: <https://github.com/leejet/stable-diffusion.cpp/tree/master/examples/cli>
```bash
git clone https://github.com/leejet/stable-diffusion.cpp.git
cd stable-diffusion.cpp
git submodule update --init --recursive
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
# Vulkan
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
```
```bash
# z-turbo
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
--vae /models/image/z-turbo/ae.safetensors \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
-v \
-H 1024 \
-W 1024 \
--seed -1 \
--steps 8 \
--vae-conv-direct \
-o /output/output.png \
-p "A photorealistic dragon"
# Edit with flux kontext
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
--vae /models/image/flux-1-kontext/ae.safetensors \
--clip_l /models/image/flux-1-kontext/clip_l.safetensors \
--t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
--cfg-scale 1.0 \
--sampling-method euler \
--seed -1 \
--steps 28 \
--vae-conv-direct \
-v \
-H 512 \
-W 512 \
-o /output/output.png \
-r /output/everquest_logo.png \
-p "Add the text 'EverQuest'"
```
## open-webui
```bash
mkdir /home/ai/.env
# Create a file called open-webui-env containing WEBUI_SECRET_KEY="some-random-key"
scp active/device_framework_desktop/secrets/open-webui-env deskwork-ai:.env/
# Will be available on port 8080
podman run \
-d \
--pod systemd-ai \
-v open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```
## Install the whole thing with quadlets (TM)
```bash
# Installs and runs all services in `quadlets/`
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-pod.service
```
Note, all services will be available at `host.containers.internal`. So llama.cpp
will be up at `http://host.containers.internal:8000`.
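From the host itself the published ports answer on localhost; a quick reachability check (llama-server exposes a `/health` endpoint):

```bash
# From the host: should return HTTP 200 once llama-server is up
curl --fail http://localhost:8000/health
# From inside another container in the pod, use host.containers.internal instead
```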
### Install the update script
```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```

View File

@@ -0,0 +1,2 @@
[Network]
IPv6=true

View File

@@ -0,0 +1,8 @@
[Pod]
Network=ai.network
# llama.cpp
PublishPort=8000:8000/tcp
# open-webui
PublishPort=8080:8080/tcp
# stable-diffusion.cpp
PublishPort=1234:1234/tcp

View File

@@ -0,0 +1,44 @@
[Unit]
Description=A Llama CPP Server Running GPT OSS 120b
[Container]
# Shared AI pod
Pod=ai.pod
# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest
# Downloaded models volume
Volume=/home/ai/models/text:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Server command
Exec=--port 8000 \
-c 48000 \
-b 48000 \
-ub 500 \
--perf \
--n-gpu-layers all \
--jinja \
--models-max 1 \
--models-dir /models
# Health Check
HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8000/props?model=gpt-oss-120b || exit 1
HealthInterval=10s
HealthRetries=3
HealthStartPeriod=10s
HealthTimeout=30s
HealthOnFailure=kill
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

View File

@@ -0,0 +1,24 @@
[Unit]
Description=An Open Webui Frontend for Local AI Services
[Container]
# Shared AI pod
Pod=ai.pod
# Open Webui base image
Image=ghcr.io/open-webui/open-webui:main
# Nothing too complicated here. Open Webui will basically configure itself.
Volume=open-webui-data:/app/backend/data
# WEBUI_SECRET_KEY is required to prevent logout on Restart
EnvironmentFile=/home/ai/.env/open-webui-env
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

View File

@@ -0,0 +1,42 @@
[Unit]
Description=A Stable Diffusion CPP Server for Editing Images
[Container]
# Shared AI pod
Pod=ai.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
# Shared models directory
Volume=/home/ai/models:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Override entrypoint to use server
Entrypoint=/sd-server
# Server args
Exec=-l 0.0.0.0 \
--listen-port 1235 \
--diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
--vae /models/image/flux-1-kontext/ae.safetensors \
--clip_l /models/image/flux-1-kontext/clip_l.safetensors \
--t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
--cfg-scale 1.0 \
--sampling-method euler \
--vae-conv-direct \
--seed -1 \
--steps 28 \
-v
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

View File

@@ -0,0 +1,42 @@
[Unit]
Description=A Stable Diffusion CPP Server for Generating Images
[Container]
# Shared AI pod
Pod=ai.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
# Shared models directory
Volume=/home/ai/models:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Override entrypoint to use server
Entrypoint=/sd-server
# Server args
Exec=-l 0.0.0.0 \
--listen-port 1234 \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
--vae /models/image/z-turbo/ae.safetensors \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
--vae-conv-direct \
-v \
--seed -1 \
--steps 8
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target

View File

@@ -0,0 +1,20 @@
#!/bin/bash
set -x
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
echo "Updates stable-diffusion.cpp, llama.cpp, and open-webui"
cd /home/ai/llama.cpp
git pull
podman build -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest -f .devops/vulkan.Dockerfile .
cd /home/ai/stable-diffusion.cpp
git pull
git submodule update --init --recursive
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
podman image pull ghcr.io/open-webui/open-webui:main
systemctl --user restart ai-pod

View File

@@ -0,0 +1,8 @@
FROM docker.io/caddy:2-builder AS builder
RUN xcaddy build \
--with github.com/caddy-dns/route53@v1.6.0
FROM docker.io/caddy:2
COPY --from=builder /usr/bin/caddy /usr/bin/caddy

View File

@@ -4,7 +4,7 @@ Description=Caddy
[Container]
AddCapability=NET_ADMIN
ContainerName=caddy
Image=docker.io/caddy:2
Image=gitea.reeseapps.com/services/caddy:latest
Network=host
SecurityLabelDisable=true
Volume=/etc/caddy:/etc/caddy

View File

@@ -1,11 +1,24 @@
# Caddy Reverse Proxy
- [Caddy Reverse Proxy](#caddy-reverse-proxy)
- [Custom Caddy Image](#custom-caddy-image)
- [Install Caddy](#install-caddy)
- [Ansible](#ansible)
- [Manual](#manual)
- [Adding a new Caddy Record](#adding-a-new-caddy-record)
## Custom Caddy Image
This repo builds a custom caddy image with route53 DNS certbot support.
```bash
podman image pull gitea.reeseapps.com/services/caddy:latest
```
To upgrade the image, check [the caddy-dns route53
project](https://github.com/caddy-dns/route53/tags) releases and update the
`Containerfile` with the new version.
## Install Caddy
### Ansible

View File

@@ -1,6 +1,6 @@
services:
gitea:
image: docker.gitea.com/gitea:1.24-rootless
image: docker.gitea.com/gitea:1.25-rootless
container_name: gitea
environment:
- GITEA__database__DB_TYPE=postgres

View File

@@ -6,7 +6,7 @@
- [Create the gitea user](#create-the-gitea-user)
- [Convert Compose to Quadlet](#convert-compose-to-quadlet)
- [Install Quadlets](#install-quadlets)
- [Upgrade Quadlets](#upgrade-quadlets)
- [Upgrade](#upgrade)
- [Editing Gitea Config](#editing-gitea-config)
- [Gitea Runners](#gitea-runners)
- [Firewall Rules](#firewall-rules)
@@ -102,12 +102,23 @@ systemctl --user restart gitea postgres
systemctl --user enable --now podman-auto-update.timer
```
### Upgrade Quadlets
### Upgrade
1. Check [the blog](https://blog.gitea.com/) for any breaking changes.
2. Update the `compose.yaml` with any needed changes
3. [Regenerate the quadlets](#convert-compose-to-quadlet)
4. Upload the new quadlets and restart the service
```bash
scp -r quadlets/. gitea:~/.config/containers/systemd/
ssh gitea systemctl --user daemon-reload
ssh gitea systemctl --user restart gitea postgres
# Upload quadlets and restart
export PODMAN_SERVER=3dserver
scp -r active/podman_gitea/quadlets/. $PODMAN_SERVER:/home/gitea/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R gitea:gitea /home/gitea/.config/containers/systemd/
ssh $PODMAN_SERVER
machinectl shell gitea@
systemctl --user daemon-reload
systemctl --user restart gitea postgres
```
### Editing Gitea Config

View File

@@ -5,7 +5,7 @@ Requires=postgres.service
AutoUpdate=registry
ContainerName=gitea
Environment=GITEA__database__DB_TYPE=postgres GITEA__database__HOST=postgres:5432 GITEA__database__NAME=gitea GITEA__database__USER=gitea GITEA__database__PASSWD=gitea
Image=docker.gitea.com/gitea:1.24-rootless
Image=docker.gitea.com/gitea:1.25-rootless
Network=gitea.network
PublishPort=3000:3000
PublishPort=2222:2222

View File

@@ -10,7 +10,7 @@
services:
immich-server:
container_name: immich_server
image: ghcr.io/immich-app/immich-server:release
image: ghcr.io/immich-app/immich-server:v2.3.1
# extends:
# file: hwaccel.transcoding.yml
# service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding

View File

@@ -4,7 +4,7 @@ Requires=redis.service database.service
[Container]
ContainerName=immich_server
EnvironmentFile=.env
Image=ghcr.io/immich-app/immich-server:release
Image=ghcr.io/immich-app/immich-server:v2.3.1
Network=immich.network
PublishPort=2283:2283
Volume=/home/immich/library:/data:Z

View File

@@ -1,51 +1,59 @@
# Jellyfin
- [Jellyfin](#jellyfin)
- [Install](#install)
- [Upgrade](#upgrade)
- [Mounting Media Directory](#mounting-media-directory)
They have podman rootless instructions!
<https://jellyfin.org/docs/general/installation/container/#managing-via-systemd>
## Install
### Create jellyfin btrfs volume
1. Create the jellyfin user
```bash
useradd jellyfin
loginctl enable-linger $(id -u jellyfin)
systemctl --user --machine=jellyfin@.host enable podman-restart
systemctl --user --machine=jellyfin@.host enable --now podman.socket
mkdir -p /home/jellyfin/.config/containers/systemd
```
2. Update the jellyfin record in Caddy.
3. Open port 8096 in the firewall.
4. Copy the files to the server and start the service
```bash
export PODMAN_SERVER=3dserver
scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER
machinectl shell jellyfin@
systemctl --user daemon-reload
systemctl --user restart jellyfin
```
## Upgrade
1. Check [the blog](https://jellyfin.org/posts) for breaking changes
2. Update the `jellyfin.container` with the new image version
3. Update quadlets and restart the service
```bash
# Upload quadlets and restart
export PODMAN_SERVER=3dserver
scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER
machinectl shell jellyfin@
systemctl --user daemon-reload
systemctl --user restart jellyfin
```
Add /home/jellyfin mount to /etc/fstab
```bash
systemctl daemon-reload
mount -a --mkdir
```
### Create the jellyfin user
```bash
useradd jellyfin
loginctl enable-linger $(id -u jellyfin)
systemctl --user --machine=jellyfin@.host enable podman-restart
systemctl --user --machine=jellyfin@.host enable --now podman.socket
mkdir -p /home/jellyfin/.config/containers/systemd
```
### Install jellyfin
~/.config/containers/systemd/jellyfin.container
```bash
# Copy the files to the server
scp -r active/podman_jellyfin/quadlets/. jellyfin:~/.config/containers/systemd/
ssh jellyfin systemctl --user daemon-reload
ssh jellyfin systemctl --user start jellyfin
ssh jellyfin journalctl --user -xeu jellyfin
ssh jellyfin systemctl --user enable --now podman-auto-update.timer
```
Update the jellyfin record in Caddy.
Open port 8096 in the firewall.
## Mounting Media Directory
Update /etc/fstab with the smb disk details.
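For example, a cifs entry; the hostname, share name, and credentials file are placeholders, while `/var/media` matches the volume in `jellyfin.container`:

```conf
//nas.example.com/media  /var/media  cifs  credentials=/etc/samba/media-creds,uid=jellyfin,gid=jellyfin,_netdev  0  0
```

Then run `mount -a` to pick it up.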

View File

@@ -1,12 +1,12 @@
[Container]
Image=docker.io/jellyfin/jellyfin:latest
Image=docker.io/jellyfin/jellyfin:10.11.3
AutoUpdate=registry
PublishPort=8096:8096/tcp
UserNS=keep-id
SecurityLabelDisable=true
Volume=/home/jellyfin/jellyfin-config:/config:Z
Volume=/home/jellyfin/jellyfin-cache:/cache:Z
Volume=/home/jellyfin/jellyfin-media:/media:Z
Volume=/var/media:/media:Z
Network=jellyfin.network
[Service]

View File

@@ -8,6 +8,7 @@
- [Check backup service logs](#check-backup-service-logs)
- [Run a Manual Backup](#run-a-manual-backup)
- [Back up an Entire System](#back-up-and-entire-system)
- [Upgrade a Borg Repo](#upgrade-a-borg-repo)
## Install Borg
@@ -248,3 +249,7 @@ borg create \
${BORG_REPO}::$(date +"%F-%H-%M-%S") \
/
```
## Upgrade a Borg Repo
<https://borgbackup.readthedocs.io/en/stable/usage/upgrade.html>
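A minimal sketch, assuming `BORG_REPO` is set as in the backup examples above; read the linked docs first, since some upgrades are irreversible:

```bash
# Upgrade the repo in place (borg 1.x); --tam enables tamper-evident manifests
borg upgrade --tam ${BORG_REPO}
```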

View File

@@ -0,0 +1,217 @@
# BTRFS
- [BTRFS](#btrfs)
- [Creating an Array](#creating-an-array)
- [Mounting the Array](#mounting-the-array)
- [Adding Disks](#adding-disks)
- [Replacing a Disk](#replacing-a-disk)
- [Scrubbing the Array](#scrubbing-the-array)
- [Creating Subvolumes](#creating-subvolumes)
- [Monitoring Usage](#monitoring-usage)
- [Encrypting BTRFS with LUKS](#encrypting-btrfs-with-luks)
- [Monitoring Disk Health](#monitoring-disk-health)
- [Defragmenting and Compressing](#defragmenting-and-compressing)
Oracle [has decent docs here](https://docs.oracle.com/en/operating-systems/oracle-linux/8/btrfs/btrfs-ResizingaBtrfsFileSystem.html)
You'll also want to [read about btrfs compression](https://thelinuxcode.com/enable-btrfs-filesystem-compression/)
## Creating an Array
```bash
# At any point you can check the status of an array by referencing any member
btrfs filesystem show /dev/vdb
```
```bash
# Raid0
mkfs.btrfs --data raid0 --metadata raid0 /dev/vdb /dev/vdc
btrfs device scan
# Raid1
mkfs.btrfs --data raid1 --metadata raid1 /dev/vdb /dev/vdc
btrfs device scan
# Raid1c3
mkfs.btrfs --data raid1c3 --metadata raid1c3 /dev/vdb /dev/vdc /dev/vdd
btrfs device scan
# Raid10
mkfs.btrfs --data raid10 --metadata raid10 /dev/vdb /dev/vdc /dev/vdd /dev/vde
btrfs device scan
# Convert to raid1
# -dconvert == "data convert"
# -mconvert == "metadata convert"
btrfs balance start -dconvert=raid1 -mconvert=raid1 /btrfs
btrfs balance status
```
## Mounting the Array
One off
```bash
# Create a mount point
mkdir /btrfs
# Mount the top level subvolume
mount /dev/vdb /btrfs -o subvolid=5
# Mount with better SSD support
mount /dev/vdb /btrfs -o subvolid=5,ssd
# Mount with auto defragmentation for HDD support
mount /dev/vdb /btrfs -o subvolid=5,autodefrag
# Mount a subvolume
mount /dev/vdb /btrfs -o subvol=home
# Inspect
btrfs filesystem show /btrfs
```
In fstab
```conf
UUID=btrfs_uuid /btrfs btrfs defaults 0 0
```
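The UUID comes from `blkid` (or `lsblk --fs`). A minimal sketch of composing the entry, with a placeholder UUID standing in for the real one:

```bash
# On a real system: UUID=$(blkid -s UUID -o value /dev/vdb)
UUID="00000000-0000-0000-0000-000000000000"  # placeholder
# Compose the fstab entry for the array
printf 'UUID=%s /btrfs btrfs defaults 0 0\n' "$UUID"
```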
## Adding Disks
```bash
# Add a disk
btrfs device add /dev/vdd /btrfs
# Watch the expansion
btrfs filesystem usage /btrfs
```
## Replacing a Disk
```bash
# Remove a disk from the array
btrfs device delete /dev/vdb /btrfs
# Add the new device
btrfs device add /dev/vdg /btrfs
```
## Scrubbing the Array
```bash
# Start a scrub to check for errors
# -B prevents the process from going to the background
# -d prints stats for each device
btrfs scrub start -Bd /btrfs
# Check the status of a scrub
btrfs scrub status /btrfs
# Watch for disk failures
dmesg | grep btrfs
```
## Creating Subvolumes
```bash
# Create a new subvolume (make sure to mount /btrfs as subvolid=5)
btrfs subvolume create /btrfs/foo
# List all subvolumes under a path
btrfs subvolume list -t /btrfs
# Delete a subvolume
btrfs subvolume delete /btrfs/foo
```
## Monitoring Usage
```bash
# Quick info for all btrfs arrays
btrfs filesystem show
# Show usage for a specific array
btrfs filesystem usage /btrfs
# Quick command to filter for data used
btrfs filesystem usage /btrfs | grep 'Data.*Used'
```
## Encrypting BTRFS with LUKS
```bash
export KEYFILE_PATH=/root/btrfs.keyfile
export LUKS_DEVS="sdb sdc sdd sde sdf sdg sdh"
# Create a key file
dd if=/dev/urandom of=${KEYFILE_PATH} bs=128 count=1
chmod 400 ${KEYFILE_PATH}
# Create partitions
for luks_dev in $LUKS_DEVS; do
echo Creating partition for /dev/$luks_dev
parted -s -a optimal -- /dev/$luks_dev mklabel gpt mkpart primary 1MiB 100%
done
# Check that your list is good
for luks_dev in $LUKS_DEVS; do
    echo "will encrypt /dev/${luks_dev}1"
done
# Create the luks partitions
# -v is verbose
# -q is "batch mode": don't ask for confirmation
# Optionally add --iter-time <milliseconds> to control how long key derivation
# takes; a longer time makes the key harder to brute-force
for luks_dev in $LUKS_DEVS; do
    echo "Encrypting /dev/${luks_dev}1"
    cryptsetup luksFormat -v -q --key-file ${KEYFILE_PATH} /dev/${luks_dev}1
    # Use the LUKS UUID (not the PARTUUID) so the UUID= entry in crypttab resolves
    LUKS_UUID=$(cryptsetup luksUUID /dev/${luks_dev}1)
    LUKS_NAME=luks-${LUKS_UUID}
    echo "Unlocking /dev/${luks_dev}1 as ${LUKS_NAME}"
    cryptsetup open /dev/${luks_dev}1 ${LUKS_NAME} --key-file=${KEYFILE_PATH}
    echo "Adding ${LUKS_NAME} to /etc/crypttab"
    echo "${LUKS_NAME} UUID=${LUKS_UUID} ${KEYFILE_PATH} discard" >> /etc/crypttab
done
# List filesystems with UUID
lsblk --fs
# Now create the array using the /dev/mapper entries from above
mkfs.btrfs --data raid1 --metadata raid1 /dev/mapper/luks-... /dev/mapper/luks-...
btrfs device scan
```
## Monitoring Disk Health
<https://www.dotlinux.net/blog/how-to-configure-smartd-and-be-notified-of-hard-disk-problems-via-email/#installing-smartd-smartmontools>
```bash
# btrfs device stats shows any errors
# Grep for any line not ending in "0"
btrfs device stats /mnt | grep -vE ' 0$'
# Show the device IDs for the mounted filesystem
btrfs filesystem show /mnt
# Delete a device (with ID 8, for example)
btrfs device delete 8 /mnt
# Add a device to the array
btrfs device add /dev/vdi1 /mnt
# Rebalance the array
btrfs balance start /mnt
```
## Defragmenting and Compressing
```bash
# Defrag a path recursively (-r); without -r only the directory inode itself
# is defragmented
btrfs filesystem defragment -r /mnt
# Defrag and apply zstd compression
# zstd is generally the best speed/ratio tradeoff; btrfs supports zstd levels
# 1-15, usually set at mount time with -o compress=zstd:N
btrfs filesystem defragment -r -czstd /mnt
```

View File

@@ -6,6 +6,7 @@
- [Selinux](#selinux)
- [On Access Scanning](#on-access-scanning)
- [Testing](#testing)
- [Ignore Signatures](#ignore-signatures)
<https://wiki.archlinux.org/title/ClamAV>
@@ -163,3 +164,18 @@ wget https://secure.eicar.org/eicar.com.txt
# This should not work
cat eicar.com.txt
```
## Ignore Signatures
<https://docs.clamav.net/faq/faq-ignore.html>
```bash
# Create the ignore list
cd /var/lib/clamav
touch ignore_list.ign2
```
Then add an ignore, like `PUA.Win.Trojan.Xored-1` which is a [known false
positive](https://github.com/jensyt/imurmurhash-js/issues/1).
Then `systemctl restart clamd@scan`.
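The `.ign2` format is one signature name per line, so for the false positive above `ignore_list.ign2` would contain:

```text
PUA.Win.Trojan.Xored-1
```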

View File

@@ -35,7 +35,7 @@ AlertEncryptedArchive no
AlertEncryptedDoc no
AlertOLE2Macros no
AlertPartitionIntersection no
AlertExceedsMax yes
AlertExceedsMax no
ScanPDF yes
ScanSWF yes

View File

@@ -21,3 +21,13 @@ firewall-cmd --info-service=samba
# Get zone information
firewall-cmd --info-zone=drop
```
## Inspecting Zones
```bash
# List all active rules
firewall-cmd --list-all
# Log all denies
firewall-cmd --set-log-denied=all
```

View File

@@ -0,0 +1,91 @@
# LUKS
Disk Encryption
## Encrypting a Drive
LUKS1 gives you 8 key slots; LUKS2 (the current default) allows up to 32.
```bash
# Remember to install if you need it
dnf install cryptsetup
# Create an encryption key
mkdir /etc/luks-keys
chmod 700 /etc/luks-keys
dd if=/dev/urandom bs=128 count=1 of=/etc/luks-keys/data0.key
# Create an encrypted partition
# -q means don't ask for confirmation
# -v means verbose
cryptsetup -q -v luksFormat /dev/nvme6n1p1 /etc/luks-keys/data0.key
# Unlock
cryptsetup -q -v luksOpen --key-file /etc/luks-keys/data0.key /dev/nvme6n1p1 luks-$(cryptsetup luksUUID /dev/nvme6n1p1)
# List keys
cryptsetup luksDump /dev/nvme6n1p1
# Remove a key from a slot
cryptsetup luksKillSlot /dev/nvme6n1p1 2
# Add a new key to a slot
cryptsetup luksAddKey /dev/nvme6n1p1 -S 5
```
## TPM2 Decryption
Mostly taken from here:
<https://gist.github.com/jdoss/777e8b52c8d88eb87467935769c98a95>
PCR reference for `--tpm2-pcrs` args
```text
0: System firmware executable
2: Kernel
4: Bootloader
7: Secure boot state
8: Cmdline
9: Initrd
```
Basic commands:
```bash
# Show tpm2 devices
systemd-cryptenroll --tpm2-device=list
# Show crypto luks block devices
blkid -t TYPE=crypto_LUKS
# Enroll the tpm2 device with systemd-cryptenroll
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0,2,4,7,8,9 /dev/nvme0n1p3
# Reenroll
systemd-cryptenroll /dev/nvme0n1p3 --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=0,2,4,7,8,9
```
Note, you'll need to add `rd.luks.options=tpm2-device=auto` to your kernel parameters.
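On a grub2-based system that means appending the option in `/etc/default/grub` and regenerating the config; a sketch (paths assume a standard Fedora layout):

```conf
# /etc/default/grub
GRUB_CMDLINE_LINUX="... rd.luks.options=tpm2-device=auto"
```

Then run `grub2-mkconfig -o /boot/grub2/grub.cfg` and reboot.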
## Batch Operations
```bash
# Create encrypted drives in bulk
export LUKS_DEVS="/dev/nvme4n1p1 /dev/nvme3n1p1 /dev/nvme0n1p1 /dev/nvme1n1p4 /dev/nvme2n1p1 /dev/nvme5n1p1"
for luks_dev in $LUKS_DEVS; do
    cryptsetup -q -v luksFormat ${luks_dev} /etc/luks-keys/data0.key
done
# Unlock encrypted drives in bulk
export LUKS_DEVS="/dev/nvme4n1p1 /dev/nvme3n1p1 /dev/nvme0n1p1 /dev/nvme1n1p4 /dev/nvme2n1p1 /dev/nvme5n1p1"
for luks_dev in $LUKS_DEVS; do
    cryptsetup -q -v luksOpen --key-file /etc/luks-keys/data0.key ${luks_dev} luks-$(cryptsetup luksUUID ${luks_dev})
done
# Add new keys in bulk
export LUKS_DEVS="/dev/nvme4n1p1 /dev/nvme3n1p1 /dev/nvme0n1p1 /dev/nvme1n1p4 /dev/nvme2n1p1 /dev/nvme5n1p1"
for luks_dev in $LUKS_DEVS; do
echo Adding key to $luks_dev
cryptsetup luksAddKey $luks_dev -S 2
done
```

View File

@@ -9,16 +9,11 @@ groups = []
hostname = "f43-base"
[[customizations.disk.partitions]]
type = "btrfs"
minsize = "32 GiB"
[[customizations.disk.partitions.subvolumes]]
name = "root"
type = "plain"
label = "root"
mountpoint = "/"
[[customizations.disk.partitions.subvolumes]]
name = "home"
mountpoint = "/home"
fs_type = "ext4"
minsize = "128 GiB"
[customizations.timezone]
timezone = "America/New_York"
@@ -70,8 +65,8 @@ data = """
[[customizations.files]]
path = "/home/ducoterra/.inputrc"
mode = "0644"
user = "root"
group = "root"
user = "ducoterra"
group = "ducoterra"
data = """
"\\C-h": backward-kill-word
"""

View File

@@ -33,18 +33,21 @@ sudo usermod -aG weldr $USER
# Optional: cockpit dependency
dnf install -y cockpit-composer
# Optional: allow security profiles
dnf install openscap-scanner scap-security-guide
```
## Building Images
1. Create a toml file describing your image
See `fedora-42-base.toml` for an example.
See `fedora42-base.toml` for an example.
2. Push the toml to composer
```bash
composer-cli blueprints push active/software_osbuild/fedora-42-base.toml
composer-cli blueprints push active/software_osbuild/fedora42-base.toml
# List blueprints
composer-cli blueprints list
@@ -57,13 +60,13 @@ dnf install -y cockpit-composer
composer-cli compose types
# Build the image
composer-cli compose start fedora-42-base qcow2
composer-cli compose start fedora42-base qcow2
# Check status
watch composer-cli compose status
# Download logs if error
cd /tmp && composer-cli compose logs f91a12b6-01fd-4f94-91cc-9d5fb68b8129
cd /tmp && composer-cli compose logs 52963ac9-b680-4def-baaf-252845f0e3fe
# Delete failed images
composer-cli compose list failed -j | jq '.[].body.failed.[]?.id' | xargs -I '%' composer-cli compose delete '%'
@@ -79,16 +82,25 @@ dnf install -y cockpit-composer
composer-cli compose list finished
# Download the image
composer-cli compose image --filename /var/lib/libvirt/images/fedora-42-base.qcow2 image-uuid
composer-cli compose image --filename active/software_osbuild/secrets/fedora43-base.qcow2 image-uuid
# Test with qemu
qemu-kvm --name test-fedora-42-base -m 4096 -hda ~/Downloads/fedora-42-base.qcow2
virt-install \
--name "fedora43-base" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics none \
--console pty,target.type=virtio \
--import --disk "path=active/software_osbuild/secrets/fedora43-base.qcow2,bus=virtio"
```
### Image Build and Watch One Liner
```bash
composer-cli blueprints push active/software_osbuild/fedora-43-base.toml && \
composer-cli compose start fedora-43-base qcow2 && \
composer-cli blueprints push active/software_osbuild/fedora43-base.toml && \
composer-cli compose start fedora43-base qcow2 && \
watch composer-cli compose status
```

View File

@@ -0,0 +1,97 @@
# SMB
- [SMB](#smb)
- [Install SMB](#install-smb)
- [Create SMB User](#create-smb-user)
- [Create a SMB Share](#create-a-smb-share)
- [Create a SMB Share with Many Users](#create-a-smb-share-with-many-users)
## Install SMB
```bash
sudo dnf install samba
sudo systemctl enable smb --now
firewall-cmd --get-active-zones
sudo firewall-cmd --permanent --zone=FedoraServer --add-service=samba
sudo firewall-cmd --reload
```
## Create SMB User
```bash
sudo smbpasswd -a ducoterra
```
## Create a SMB Share
```bash
# Create share
mkdir /btrfs/pool0/smb/ducoterra
# Set proper selinux labels for samba
sudo semanage fcontext --add --type "samba_share_t" "/btrfs/pool0/smb/ducoterra(/.*)?"
# Run restorecon at the root of the btrfs subvolume
sudo restorecon -R /btrfs/pool0
```
Edit /etc/samba/smb.conf
```conf
[ducoterra]
comment = My Share
path = /btrfs/pool0/smb/ducoterra
writeable = yes
browseable = yes
public = no
create mask = 0644
directory mask = 0755
write list = ducoterra
```
Then restart SMB
```bash
sudo systemctl restart smb
```
## Create a SMB Share with Many Users
```bash
sudo groupadd myfamily
sudo useradd -G myfamily jack
sudo useradd -G myfamily maria
sudo smbpasswd -a jack
sudo smbpasswd -a maria
sudo mkdir /home/share
sudo chgrp myfamily /home/share
sudo chmod 770 /home/share
sudo semanage fcontext --add --type "samba_share_t" "/home/share(/.*)?"
sudo restorecon -R /home/share
```
```conf
[family]
comment = Family Share
path = /home/share
writeable = yes
browseable = yes
public = yes
valid users = @myfamily
create mask = 0660
directory mask = 0770
force group = +myfamily
```
- `valid users = @myfamily`: only members of the `myfamily` group have access rights; the `@` denotes a group name.
- `force group = +myfamily`: files and directories are created with this group rather than the creating user's primary group.
- `create mask = 0660`: files are created with permissions that let every group member read and write files created by other users.
- `directory mask = 0770`: as above, but for directories.
Don't forget to restart smb
```bash
systemctl restart smb
```

View File

@@ -0,0 +1 @@
local-hostname: cloud-init-vm

View File

@@ -0,0 +1,10 @@
#cloud-config
#vim:syntax=yaml
users:
- default
- name: ducoterra
lock_passwd: false
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net

View File

@@ -4,6 +4,9 @@ Virtual Machine Management
- [Virsh](#virsh)
- [Before you Begin](#before-you-begin)
- [Connecting to External Servers via SSH](#connecting-to-external-servers-via-ssh)
- [Configuring Aliases](#configuring-aliases)
- [One-off Connections](#one-off-connections)
- [Useful Virsh Commands](#useful-virsh-commands)
- [Virsh Networking](#virsh-networking)
- [Create a Virtual Network](#create-a-virtual-network)
@@ -13,9 +16,13 @@ Virtual Machine Management
- [Set a Static IP](#set-a-static-ip)
- [Creating VMs](#creating-vms)
- [Create VM with No Graphics and use an Existing QCOW2 Disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk)
- [Create a Cloud Init Compatible VM](#create-a-cloud-init-compatible-vm)
- [Create VM with Graphics using an ISO Installation Disk](#create-vm-with-graphics-using-an-iso-installation-disk)
- [Create VM using Host Device as Disk](#create-vm-using-host-device-as-disk)
- [Create a Home Assistant VM](#create-a-home-assistant-vm)
- [Snapshots](#snapshots)
- [Creating and Attaching Disks](#creating-and-attaching-disks)
- [Virt Builder](#virt-builder)
## Before you Begin
@@ -27,6 +34,35 @@ Virtual Machine Management
6. Tell virsh to connect to your root system rather than your user: `export LIBVIRT_DEFAULT_URI='qemu:///system'`
7. Export your editor so virsh knows what to use: `export EDITOR=vim`
## Connecting to External Servers via SSH
<https://libvirt.org/uri.html#ssh-transport>
### Configuring Aliases
1. Edit `~/.config/libvirt/libvirt.conf`
2. Add your aliases
```bash
uri_aliases = [
"3dserver=qemu+ssh://3dserver/system",
]
```
3. Export the alias: `export LIBVIRT_DEFAULT_URI=3dserver`
### One-off Connections
```bash
export LIBVIRT_DEFAULT_URI='qemu+ssh://user@server/system'
```
Or, for TrueNAS:
```bash
export LIBVIRT_DEFAULT_URI='qemu+ssh://root@truenas/system?socket=/run/truenas_libvirt/libvirt-sock'
```
## Useful Virsh Commands
```bash
@@ -206,6 +242,44 @@ virt-install \
--import --disk "path=${VM_DISK_PATH},bus=virtio"
```
### Create a Cloud Init Compatible VM
<https://cloudinit.readthedocs.io/en/latest/reference/examples.html>
```bash
# Fedora
# https://fedoraproject.org/cloud/download
export VM_NAME="cloud-init-test-fedora"
export VM_DISK_PATH=/var/lib/libvirt/images/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2
# Rocky
# https://rockylinux.org/download
export VM_NAME="cloud-init-test-rocky"
export VM_DISK_PATH=/var/lib/libvirt/images/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2
# Ubuntu
# https://cloud-images.ubuntu.com/noble/current/
export VM_NAME="cloud-init-test-ubuntu"
export VM_DISK_PATH=/var/lib/libvirt/images/noble-server-cloudimg-amd64.img
# Debian
# https://cloud.debian.org/images/cloud/trixie/20251117-2299/
export VM_NAME="cloud-init-test-debian"
export VM_DISK_PATH=/var/lib/libvirt/images/debian-13-generic-amd64-20251117-2299.qcow2
# Set --cloud-init disable=no to allow cloud-init to run again after first boot
virt-install \
--name "${VM_NAME}" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics none \
--import --disk "path=${VM_DISK_PATH},bus=virtio" \
--cloud-init disable=yes,user-data=active/software_virsh/cloud-init/user-data,meta-data=active/software_virsh/cloud-init/meta-data
```
### Create VM with Graphics using an ISO Installation Disk
```bash
@@ -215,8 +289,8 @@ virt-install \
# `--console pty,target.type=virtio` adds a console connection
# For any command, use `virt-install --arg=?` to see all available options
export VM_NAME="fedora43-kinoite-test"
export VM_ISO_PATH=/var/lib/libvirt/iso/
export VM_DISK_PATH=/var/lib/libvirt/images/
export VM_ISO_PATH=/var/lib/libvirt/iso/fedora43.iso
export VM_DISK_PATH=/var/lib/libvirt/images/fedora43.qcow2
virt-install \
--name "${VM_NAME}" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
@@ -252,6 +326,119 @@ virt-install \
--disk none
```
### Create a Home Assistant VM
```bash
virt-install \
--name haos \
--description "Home Assistant OS" \
--os-variant=generic \
--ram=4096 \
--vcpus=2 \
--disk /var/lib/libvirt/images/haos_ova-16.3.qcow2,bus=scsi \
--controller type=scsi,model=virtio-scsi \
--import \
--graphics none \
--boot uefi
```
## Snapshots
See [qemu qcow2 snapshots](/active/software_qemu/qemu.md#qcow2-snapshots)
## Creating and Attaching Disks
To create and attach one disk:
```bash
export VM_NAME="cloud-init-test-fedora"
export VM_DISK_NAME="test1"
qemu-img create -f qcow2 /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2 1G
virsh attach-disk ${VM_NAME} \
    --source /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2 \
    --target vdb \
    --subdriver qcow2 \
    --persistent \
    --live
```
To create and attach multiple disks (for raid testing)
```bash
export VM_NAME="cloud-init-test-fedora"
# Max supported for this script is 25
export VM_NUM_DISKS=8
export VM_DISK_SIZE=4G
##### Attach #####
# Create the disks and target mounts from our array
letters=($(echo {a..z}))
for disk_num in $(seq 1 $VM_NUM_DISKS); do
    VM_DISK_NAME="test-${disk_num}"
    VM_DISK_TARGET=vd${letters[$disk_num]}
    echo "Creating /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2"
    sudo qemu-img create -f qcow2 /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2 ${VM_DISK_SIZE}
    echo "Attaching ${VM_DISK_TARGET} to ${VM_NAME}"
    virsh attach-disk ${VM_NAME} \
        --source /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2 \
        --target ${VM_DISK_TARGET} \
        --persistent \
        --subdriver qcow2 \
        --live
done
##### Cleanup #####
# Detach the disks from our VMs
letters=($(echo {a..z}))
for disk_num in $(seq 1 $VM_NUM_DISKS); do
    VM_DISK_NAME="test-${disk_num}"
    VM_DISK_TARGET=vd${letters[$disk_num]}
    echo "Detaching ${VM_DISK_TARGET} from ${VM_NAME}"
    virsh detach-disk ${VM_NAME} \
        --target ${VM_DISK_TARGET} \
        --persistent
done
# Optionally delete images
for disk_num in $(seq 1 $VM_NUM_DISKS); do
    VM_DISK_NAME="test-${disk_num}"
    echo "Removing /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2"
    sudo rm /var/lib/libvirt/images/${VM_DISK_NAME}.qcow2
done
```
## Virt Builder
<https://docs.fedoraproject.org/en-US/fedora-server/virtualization/vm-install-diskimg-virtbuilder/#_minimal_effort_customization>
You can use virt-builder to build VM images.
```bash
export VM_NAME=fedora42-vb
export VM_DISK_PATH=/var/lib/libvirt/images/fedora42-vb.qcow2
# Build the image
virt-builder fedora-42 \
--format qcow2 --output ${VM_DISK_PATH} \
--root-password locked:disabled \
--hostname ${VM_NAME} \
--selinux-relabel \
--firstboot-command 'useradd -m -G wheel -p "" ducoterra ; chage -d 0 ducoterra'
# Run the built image
virt-install \
--name "${VM_NAME}" \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics none \
--console pty,target.type=virtio \
--import --disk "path=${VM_DISK_PATH},bus=virtio"
```

View File

@@ -8,15 +8,15 @@ installing the OS onto.
- [Foobar](#foobar)
- [Installation](#installation)
- [Network](#network)
- [Firewalld Configuration](#firewalld-configuration)
- [Firewall](#firewall)
- [Setup SSH](#setup-ssh)
- [Fail2Ban](#fail2ban)
- [Set eui64 on network interface](#set-eui64-on-network-interface)
- [Enable IPv6](#enable-ipv6)
- [Set up Network Bridge](#set-up-network-bridge)
- [Storage](#storage)
- [BTRFS Parent Volumes](#btrfs-parent-volumes)
- [BTRFS Snapshots](#btrfs-snapshots)
- [BTRFS Maintenance](#btrfs-maintenance)
- [Volumes](#volumes)
- [Snapshots](#snapshots)
- [Maintenance](#maintenance)
- [Backups](#backups)
- [TPM2 Luks Decryption](#tpm2-luks-decryption)
- [Users](#users)
- [Change your password](#change-your-password)
@@ -25,16 +25,8 @@ installing the OS onto.
- [Monitoring](#monitoring)
- [Disk Usage](#disk-usage)
- [Disk Wear](#disk-wear)
- [Backups](#backups)
- [Downgrading Kernel](#downgrading-kernel)
- [Apps](#apps)
- [Package Manager](#package-manager)
- [Install and Enable Cockpit](#install-and-enable-cockpit)
- [Install and Enable Virtualization](#install-and-enable-virtualization)
- [Install and Enable Containers](#install-and-enable-containers)
- [Troubleshooting](#troubleshooting)
- [Disable Swap](#disable-swap)
- [Disable Selinux](#disable-selinux)
## Installation
@@ -57,355 +49,85 @@ installing the OS onto.
## Network
### Firewalld Configuration
### Firewall
Set the default firewalld zone to `public`
Allow SSH
```bash
# Note, you probably don't have to do this. Check Cockpit Network -> Firewall
# firewall-cmd --set-default-zone=public
```
Firewalld will be on and blocking by default. You can check the zone and allowed ports with:
```bash
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-services
```
Allow Cockpit with
```bash
firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload
# Your firewall command here
```
### Setup SSH
See [README](/README.md#ssh-key-generation)
### Fail2Ban
On the server:
Install SSH
```bash
# Run tmux session
tmux
dnf install -y fail2ban
# Setup initial rules
cat <<EOF > /etc/fail2ban/jail.local
# Jail configuration additions for local installation
# Adjust the default configuration's default values
[DEFAULT]
# Optionally enter a trusted IP that should never be banned
# ignoreip = 2600:1700:1e6c:a81f::0/64
bantime = 6600
backend = auto
# The main configuration file defines all services but
# deactivates them by default. We have to activate those needed
[sshd]
enabled = true
EOF
systemctl enable fail2ban --now
# OPTIONAL: follow logs
tail -f /var/log/fail2ban.log
# Steps to install SSH server
```
Checking, banning, unbanning
### Enable IPv6
```bash
# See banned clients
fail2ban-client banned
# See jails (sshd should be one of them)
fail2ban-client status
# Unban a client from the sshd jail
fail2ban-client set sshd unbanip <IP address>
```
### Set eui64 on network interface
Ensures consistent mac-based IPv6 address.
```bash
nmcli connection modify Wired\ connection\ 1 ipv6.addr-gen-mode eui64
```
1. Disable privacy
2. Enable eui64 addressing
### Set up Network Bridge
Networking -> Add bridge -> add network interface and save
```bash
nmcli connection modify bridge0 ipv6.addr-gen-mode eui64
```
Create a network bridge for VMs
## Storage
### BTRFS Parent Volumes
### Volumes
In `/etc/fstab`, add the parent volumes for your disks mounted with subvolid=5 at `/btrfs` so you can see
all subvolumes.
Create volumes for `/home`, `/var`, `/var/log`, `/tmp`, etc.
```conf
UUID=64beedac-c0c9-48bf-a3ae-7707df6ebc97 /btrfs/3dserver-root btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=3c76b83f-7547-4c18-b08f-9e7902022b8d /btrfs/3dserver-data btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
```
### Snapshots
```bash
systemctl daemon-reload
mount -a --mkdir
```
If supported, set up automated snapshots to keep a history of your data.
### BTRFS Snapshots
### Maintenance
<https://en.opensuse.org/openSUSE:Snapper_Tutorial>
If supported, set up scrub and check jobs to ensure data integrity.
<http://snapper.io/manpages/snapper-configs.html>
### Backups
We'll be using snapper, a tool for automating and controlling snapshot behavior.
```bash
dnf install snapper dnf-plugin-snapper
# Allow selinux management
semanage permissive -a snapperd_t
# Note, if you mess something up you can run snapper -c root delete-config to delete
# System configs are stored in /etc/sysconfig/snapper as well as /etc/snapper
snapper -c root create-config /
snapper -c data create-config /path/to/other/data
# Enable automatic snapshots
systemctl enable --now snapper-timeline.timer
# Enable automatic cleanup
systemctl enable --now snapper-cleanup.timer
# Enable snapshots on boot
systemctl enable --now snapper-boot.timer
# List snapshots
snapper -c root list
# Create snapshot manually
snapper -c root create --description "test snapshot"
# Delete first snapshot
snapper -c root delete 1
```
Note - you probably don't want to keep yearly snapshots.
Edit `/etc/snapper/configs/root` and change `TIMELINE_LIMIT_YEARLY=` to `0`.
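For reference, the resulting line in `/etc/snapper/configs/root` looks like:

```conf
# Keep no yearly snapshots
TIMELINE_LIMIT_YEARLY="0"
```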
### BTRFS Maintenance
```bash
# Start a scrub in the foreground (-B) at /
btrfs scrub start -B /
```
Set up regular backups via a supported tool like Borg.
### TPM2 Luks Decryption
Mostly taken from here:
<https://gist.github.com/jdoss/777e8b52c8d88eb87467935769c98a95>
PCR reference for `--tpm2-pcrs` args
```text
0: System firmware executable
2: Kernel
4: Bootloader
7: Secure boot state
8: Cmdline
9: Initrd
```
Note, if your threat model is someone recovering data from disks you've thrown
away, you can set `--tpm2-pcrs=""`. Anyone with physical access to the machine
could still reach the encrypted partition by manipulating the boot parameters,
but the disk is guaranteed to keep unlocking across updates and upgrades.
Basic commands:
```bash
# Run tmux session
tmux
# Show tpm2 devices
systemd-cryptenroll --tpm2-device=list
# Show crypto luks block devices
blkid -t TYPE=crypto_LUKS
# Enroll the tpm2 device with systemd-cryptenroll
systemd-cryptenroll /dev/nvme0n1p3 --tpm2-device=auto --tpm2-pcrs=""
####################
##### OPTIONAL #####
####################
# If you have lots of devices to decrypt (like a btrfs raid array), use these commands.
# Get all crypto luks partitions
blkid | grep crypto_LUKS
# List them all space-separated and drop the '/dev'
LUKS_DEVS="nvme0n1p4 nvme1n1p1 nvme2n1p1 nvme3n1p1 nvme5n1p1 nvme4n1p1 nvme6n1p1"
# Check that your list is good
for dev in $LUKS_DEVS; do echo will enroll /dev/$dev; done
# Enroll
for dev in $LUKS_DEVS; do \
echo "Enrolling /dev/$dev"; \
systemd-cryptenroll /dev/$dev --tpm2-device=auto --tpm2-pcrs=""; \
done
########################
##### END OPTIONAL #####
########################
# Append to command line args
echo "add_dracutmodules+=\" tpm2-tss \"" | tee /etc/dracut.conf.d/tpm2.conf
dracut -f
```
Finally, `vim /etc/default/grub` and add `rd.luks.options=tpm2-device=auto` to GRUB_CMDLINE_LINUX
```bash
# Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# Cross your fingers that you don't have to go type in the password manually.
# Yes, 60 full seconds is too long. Go type your password in.
```
If you need to reenroll for some reason:
```bash
# Reenroll
systemd-cryptenroll /dev/nvme0n1p3 --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=""
```
If you want automatic decryption via TPM2, set it up here.
## Users
### Change your password
In Cockpit navigate to Accounts -> user -> Set password
Change the default user's password.
## Updates
Describe what needs to be updated regularly and how to update it.
### Automatic Updates
In Cockpit navigate to software updates -> automatic updates -> install -> security updates only
Set up automatic updates or security patches.
## Monitoring
In Cockpit: Overview -> View metrics and history -> Install PCP Support -> Metrics settings -> Turn on Collect Metrics
Set up a monitoring service for resources or uptime.
### Disk Usage
TODO
Set up disk usage monitoring.
### Disk Wear
TODO
## Backups
See [borg.md](/active/systemd_borg/borg.md)
### Downgrading Kernel
```bash
dnf install koji
# Note: format is kernel-version.fedora-version
cd $(mktemp -d) && koji download-build --arch=x86_64 --arch=noarch kernel-6.11.3-300.fc41 && dnf install ./*
reboot
```
Set up physical disk wear monitoring for SSDs if applicable.
## Apps
### Package Manager
Document any recommended apps here.
Configure dnf to use the fastest mirror:
```bash
echo 'fastestmirror=1' >> /etc/dnf/dnf.conf
dnf clean all
dnf update --refresh -y
# libdnf5 is required for ansible to work
dnf install -y glances tmux vim python3-libdnf5
```
### Install and Enable Cockpit
<https://cockpit-project.org/running>
```bash
dnf install cockpit
systemctl enable --now cockpit.socket
firewall-cmd --add-service=cockpit
firewall-cmd --add-service=cockpit --permanent
```
### Install and Enable Virtualization
Don't forget to add a btrfs subvolume for `/var/lib/libvirt`
```bash
# Since we already created our /btrfs mountpoint, this volume will show up automatically
# at /btrfs/libvirt
btrfs sub create /btrfs/libvirt
```
Now create an fstab entry that mounts the volume at /var/lib/libvirt
```bash
UUID=... /var/lib/libvirt btrfs subvol=libvirt,compress=zstd:1,x-systemd.device-timeout=0 0 0
```
Mount the libvirt volume:
```bash
systemctl daemon-reload
mount -a --mkdir
# Check that the mount was successful. This will print something if our mount worked.
mount | grep -i /var/lib/libvirt
```
Create a snapshot schedule for libvirt.
```bash
snapper -c libvirt create-config /var/lib/libvirt
# Don't forget to edit "YEARLY" at /etc/snapper/configs/libvirt
```
Install and enable the virtualization service.
```bash
dnf group install --with-optional virtualization
systemctl enable --now libvirtd
```
Install the cockpit machines application.
### Install and Enable Containers
If your package manager requires specific configuration, put it here.
## Troubleshooting
### Disable Swap
```bash
swapoff -a
zramctl --reset /dev/zram0
dnf -y remove zram-generator-defaults
```
### Disable Selinux
By default selinux will be enforcing. You can set it to permissive with
```bash
setenforce 0
```
And then make it permanent by editing `/etc/selinux/config` and inserting `SELINUX=permissive`.
Any troubleshooting recommendations for common issues.

View File

@@ -11,7 +11,7 @@
## Setup foobar Project
1. Copy and rename this folder to active/systemd_foobar
1. Copy and rename this folder to active/software_foobar
2. Find and replace foobar with the name of the service
3. Write the foobar.service spec
4. (OPTIONAL) Write the foobar.timer spec
@@ -33,16 +33,16 @@
ansible-playbook \
-i ansible/inventory.yaml \
-l podman \
active/systemd_foobar/install_foobar.yaml \
-e "@active/systemd_foobar/vars.yaml" \
-e "@active/systemd_foobar/secrets/vars.yaml"
active/software_foobar/install_foobar.yaml \
-e "@active/software_foobar/vars.yaml" \
-e "@active/software_foobar/secrets/vars.yaml"
```
## Upgrade foobar
## Backup foobar
Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
Follow the [Borg Backup instructions](/active/software_borg/borg.md#set-up-a-client-for-backup)
## Restore foobar