Compare commits

...

42 Commits

Author SHA1 Message Date
7d2e8b6b7b add stories (checks successful: Podman DDNS Image / build-and-push-ddns in 58s) 2026-02-06 20:28:43 -05:00
3bfa67e605 update ansible inventory 2026-02-06 20:28:21 -05:00
d929ac8888 wyoming debug probability 2026-02-06 20:27:59 -05:00
837ea91a52 add disk tests for virsh 2026-02-06 20:27:45 -05:00
71a27b1b91 add uv notes 2026-02-06 20:27:27 -05:00
8d250318b1 add systemd examples 2026-02-06 20:27:19 -05:00
041fbd0f5f add snapper notes 2026-02-06 20:24:24 -05:00
498e52c134 add samba user creation notes 2026-02-06 20:24:18 -05:00
dd11ef60cb add notes for resizing a qemu images 2026-02-06 20:23:24 -05:00
65208987ea update osbuild with more default packages 2026-02-06 20:23:12 -05:00
f530181e73 add openstack notes 2026-02-06 20:22:58 -05:00
bcd087dc94 never ever ever ever allow force overwrite luks driver 2026-02-06 20:22:47 -05:00
59bdafed6a freeipa install updates 2026-02-06 20:21:55 -05:00
2a7a177cec firewalld notes 2026-02-06 20:21:46 -05:00
fdc0f0f2c5 add podman etc notes 2026-02-06 20:21:33 -05:00
27180d92b1 add more btrfs admin notes 2026-02-06 20:21:11 -05:00
2b62ad0956 clean up borg backup notes 2026-02-06 20:20:56 -05:00
fb19c81d47 add podman speedtest 2026-02-06 20:20:37 -05:00
1d39fb4dc6 add podman ollama notes 2026-02-06 20:20:11 -05:00
03c8e95275 nginx updates 2026-02-06 20:19:52 -05:00
90105f8997 add nextcloud aio docker notes 2026-02-06 20:19:36 -05:00
c1e083ec04 rename minecraft compose 2026-02-06 20:19:24 -05:00
5b79da95fd add local ai podman docs 2026-02-06 20:19:10 -05:00
1737c7a79a start keycloak docs 2026-02-06 20:18:55 -05:00
8ea3fe5fc4 add docker jellyfin 2026-02-06 20:18:31 -05:00
73d78db715 add docker immich docs 2026-02-06 20:18:19 -05:00
e7835970a2 add gitea docker install 2026-02-06 20:18:00 -05:00
f242895b51 update podman ddns to be more efficient 2026-02-06 20:17:46 -05:00
70718f209b add certbot 2026-02-06 20:17:16 -05:00
d089c981c8 add deskwork caddy 2026-02-06 20:16:51 -05:00
495cacfb96 rename connorbricktracker compose 2026-02-06 20:16:31 -05:00
90ba436a47 add ubuntu 25.10 2026-02-06 20:13:55 -05:00
3f2ac15591 add power profiles and koji to fedora server 2026-02-06 20:13:46 -05:00
2559aebd5d move truenas 2026-02-06 20:13:33 -05:00
ddb9720800 home assistant extended ai updates 2026-02-06 20:11:36 -05:00
525e14965d framework desktop offline ai updates 2026-02-06 20:11:19 -05:00
7626cdf998 framework 16 ai updates 2026-02-06 20:10:45 -05:00
40f221376f esphome updates 2026-02-06 20:10:20 -05:00
27e9c42d4c add driveripper notes 2026-02-06 20:09:57 -05:00
43159dca7e add browsing bambu printer with ftps 2026-02-06 20:09:07 -05:00
57dcd6a806 add gitlab container docs 2026-02-06 20:08:50 -05:00
463595af75 update codium extensions 2026-02-06 20:08:34 -05:00
102 changed files with 3493 additions and 469 deletions


@@ -14,4 +14,21 @@ stkb.rewrap
streetsidesoftware.code-spell-checker
tamasfe.even-better-toml
vue.volar
yzhang.markdown-all-in-one
charliermarsh.ruff
eamodio.gitlens
franneck94.vscode-python-config
franneck94.vscode-python-dev-extension-pack
hashicorp.hcl
ms-pyright.pyright
ms-python.debugpy
ms-python.mypy-type-checker
ms-python.python
ms-python.vscode-python-envs
njpwerner.autodocstring
njqdev.vscode-python-typehint
redhat.vscode-yaml
stkb.rewrap
streetsidesoftware.code-spell-checker
tamasfe.even-better-toml
vue.volar
yzhang.markdown-all-in-one

.vscode/vscode.md vendored

@@ -109,13 +109,13 @@ previous positions.
To save a list of installed extensions run:
```bash
code --list-extensions >> .vscode/code_oss_extensions.txt
```
To install that list of extensions run:
```bash
cat .vscode/code_oss_extensions.txt | xargs -L 1 code --install-extension
```
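The `xargs -L 1` pattern runs the install command once per line of the extensions file. A quick illustration with `echo` standing in for `code` (the extension ids here are made up):

```shell
# Each input line becomes one invocation; echo stands in for `code`
printf 'ext.one\next.two\n' | xargs -L 1 echo code --install-extension
```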
### Continue


@@ -0,0 +1,16 @@
services:
  gitlab:
    image: gitlab/gitlab-ce:18.7.3-ce.0
    container_name: gitlab
    restart: always
    hostname: 'gitlab.example.com'
    ports:
      - '80:80'
      - '443:443'
      - '22:22'
    volumes:
      - '$GITLAB_HOME/gitlab.rb:/etc/gitlab/gitlab.rb:ro'
      - '$GITLAB_HOME/config:/etc/gitlab'
      - '$GITLAB_HOME/logs:/var/log/gitlab'
      - '$GITLAB_HOME/data:/var/opt/gitlab'
    shm_size: '256m'


@@ -0,0 +1,66 @@
# Gitlab
## Docker Install
```bash
# Change the server's SSH port in /etc/ssh/sshd_config
Port 2424
# Then tell selinux about it
semanage port -a -t ssh_port_t -p tcp 2424
# And add the firewall rule
firewall-cmd --add-port=2424/tcp --permanent
firewall-cmd --reload
# Reload SSH
systemctl restart sshd
# Make a Gitlab directory
mkdir -p /srv/gitlab
# Add the following to .bashrc (used in the compose file)
export GITLAB_HOME=/srv/gitlab
```
Create your `gitlab-compose.yaml`. See the file in this repo for an example.
Also create the file `secrets/gitlab.rb` with your configuration. It should look something like this:
```ruby
# Add any other gitlab.rb configuration here, each on its own line
external_url 'https://gitlab.reeseapps.com'
nginx['listen_port'] = 80
nginx['listen_https'] = false
nginx['proxy_set_headers'] = {
  "X-Forwarded-Proto" => "https",
  "X-Forwarded-Ssl" => "on",
  "Host" => "gitlab.mydomain.de",
  "X-Real-IP" => "$remote_addr",
  "X-Forwarded-For" => "$proxy_add_x_forwarded_for",
  "Upgrade" => "$http_upgrade",
  "Connection" => "$connection_upgrade"
}
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "email-smtp.us-east-1.amazonaws.com"
gitlab_rails['smtp_port'] = 465
gitlab_rails['smtp_user_name'] = ""
gitlab_rails['smtp_password'] = ""
gitlab_rails['smtp_domain'] = ""
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_ssl'] = true
gitlab_rails['smtp_force_ssl'] = true
```
Copy `gitlab.rb` and `gitlab-compose.yaml` to your server:
```bash
scp active/container_gitlab/gitlab-compose.yaml gitlab:
scp active/container_gitlab/secrets/gitlab.rb gitlab:/srv/gitlab
```
Then docker compose up:
```bash
docker compose -f gitlab-compose.yaml up -d
```
The initial username is root and the password will be at `/srv/gitlab/config/initial_root_password`.


@@ -1,8 +1,6 @@
# Bambu P1S Notes
## WiFi
If you want to set up WiFi without logging in follow this:
@@ -18,13 +16,35 @@ If you want to set up WiFi without logging in follow this:
3. Put the SD card back in the printer. It should connect automatically.
## Connecting via LAN mode
- Make sure you open 1990 and 2021 (tcp + udp) in your firewall application.
## Browsing SD card via LAN mode
| Key      | Value               |
| -------- | ------------------- |
| Port     | 990                 |
| User     | bblp                |
| Password | Printer Access Code |
<https://linuxconfig.org/lftp-tutorial-on-linux-with-examples>
1. Install lftp: `sudo dnf install lftp`
2. Start lftp with `lftp`
3. Bookmark your printer with `bookmark add p1s ftps://bblp:<access-code>@<printer ip>`
4. List your bookmarks with `bookmark list`
5. Connect to your printer via the bookmark with `lftp p1s`
6. Ignore a single certificate error with `set ssl:verify-certificate/39:FD:60:8A:46:3B:96:A6:9A:71:72:6E:EA:9E:5A:D2:3A:DE:9B:55 no`
7. Ignore all certificate errors with `set ssl:verify-certificate no`
8. `ls` and `cd` work as expected. To clear the screen, use `ctrl + L`.
9. Copy files to the host with `get remote-file -o sample.php`. This will copy to the directory you were in before you ran `lftp`.
10. Copy directories to the host with `mirror -P 10 remote_dir local_dir` where `-P` is the number of parallel transfers.
11. Type `exit` to exit.
## Troubleshooting
### Under Extruding
I've had an issue where prints will look "clumpy" or "rough". This was resolved after a complete
hotend swap. I did some testing on the bad nozzle before replacing it and found it could still


@@ -0,0 +1,12 @@
[Unit]
Description=Runs btrfs monitor
After=network-online.target
Wants=network-online.target
[Service]
Restart=no
Type=oneshot
ExecStart=/root/scripts/btrfs-monitor.sh
[Install]
WantedBy=multi-user.target


@@ -0,0 +1,5 @@
#!/bin/bash
bash /usr/local/scripts/email-disk-stats.sh
bash /usr/local/scripts/email-filesystem-stats.sh


@@ -0,0 +1,11 @@
[Unit]
Description=Run btrfs monitoring script every day
[Timer]
OnCalendar=*-*-* 8:00:00
AccuracySec=10min
Persistent=true
Unit=btrfs-monitor.service
[Install]
WantedBy=timers.target


@@ -0,0 +1,219 @@
# Driveripper
- [Driveripper](#driveripper)
- [General Principles](#general-principles)
- [Important Locations](#important-locations)
- [Monitoring Scripts](#monitoring-scripts)
- [Quick Ansible Commands](#quick-ansible-commands)
- [Disk Mounts](#disk-mounts)
- [Disk Performance Testing](#disk-performance-testing)
- [General VM Notes](#general-vm-notes)
- [Best Practices](#best-practices)
- [OSBuild Composer](#osbuild-composer)
- [Retired Disks](#retired-disks)
- [Sending emails](#sending-emails)
## General Principles
1. Ingress: DDNS, HTTP, TLS, TCP/UDP Proxy
2. Backup: Borg, Storage, etc
3. Monitor: Glances, Emails, Scrubs, Defragmentation, PMLogger, etc.
4. Access: SSH, Cockpit
5. Virtualize: Libvirtd
## Important Locations
- `/etc/luks-keys`: luks keys
- `/usr/local/scripts`: admin scripts
## Monitoring Scripts
```bash
scp active/device_driveripper/btrfs-monitor.service driveripper:/etc/systemd/system/
scp active/device_driveripper/btrfs-monitor.timer driveripper:/etc/systemd/system/
systemctl enable --now btrfs-monitor.timer
```
## Quick Ansible Commands
```bash
# Update Caddy
ansible-playbook -i ansible/inventory.yaml -l proxy active/podman_caddy/install_caddy.yaml
# Update DDNS
ansible-playbook -i ansible/inventory.yaml -l proxy active/podman_ddns/install_ddns.yaml
```
## Disk Mounts
1. All btrfs `subvolid=5` volumes should be mounted under `/btrfs`
1. Since `/media` is for removable media, and `/mnt` is for temporary mounts, all
btrfs subvolumes should be mounted in `/srv` for consistency.
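As a sketch, the resulting `/etc/fstab` entries might look like this (the UUID and subvolume name are placeholders, not the real ones on this host):

```conf
# Full pool at subvolid=5, mounted under /btrfs
UUID=<pool0-uuid>  /btrfs/pool0  btrfs  subvolid=5  0 0
# Individual subvolumes mounted under /srv
UUID=<pool0-uuid>  /srv/smb      btrfs  subvol=smb  0 0
```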
## Disk Performance Testing
```bash
# Write
dd if=/dev/zero of=test.dat bs=1M count=10000 conv=fsync status=progress
# Read (drop caches first so you measure the disk, not the page cache)
echo 3 > /proc/sys/vm/drop_caches
dd if=test.dat of=/dev/null bs=1M count=10000 status=progress
```
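To sanity-check the invocation without writing 10 GB, the same flags work scaled down against a temp file:

```shell
# 1 MiB write with the same flags; safe to run anywhere
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=1 conv=fsync status=none
size=$(stat -c %s "$tmp")
echo "$size"
rm -f "$tmp"
```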
## General VM Notes
1. Use ext4 as the filesystem inside the VM to avoid nested copy-on-write (the host's btrfs already does COW)
2. Use raw images (convert from qcow2 where possible) for the same reason
3. Create a folder at `/srv/vm/boot` for boot disks
4. Create a folder at `/srv/vm/data` for data disks
### Best Practices
1. Remove the default bridge: `virsh net-undefine default`
## OSBuild Composer
1. Blueprints are stored in `/srv/smb/ducoterra/images/blueprints`
2. Builds are stored in `/srv/smb/ducoterra/images/builds`
```bash
# Push a blueprint
composer-cli blueprints push /srv/smb/ducoterra/images/blueprints/fedora43-base.toml
# Start a build
composer-cli compose start fedora43-base qcow2
# Watch the build
watch composer-cli compose list running
# Move the old image out of the way
mv /srv/smb/ducoterra/images/builds/fedora-43-base.qcow2 /srv/smb/ducoterra/images/builds/fedora-43-base-old.qcow2
# Extract the build
composer-cli compose image \
--filename /srv/smb/ducoterra/images/builds/fedora-43-base.qcow2 \
92a2f1dd-9238-4d50-ab1d-cd7c8e355cac
# Convert the build to raw
qemu-img convert -f qcow2 -O raw \
/srv/smb/pool0/ducoterra/images/builds/fedora-43-base.qcow2 \
/srv/vm/pool1/fedora-43-base.raw
# Install (Change password for default user ducoterra!)
virt-install \
--name "gitlab" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:bridge0 \
--graphics none \
--console pty,target.type=virtio \
--import --disk "path=/srv/vm/pool1/gitlab.raw,bus=virtio"
# convert a cloud-init image to raw
qemu-img convert -f qcow2 -O raw \
/srv/smb/ducoterra/images/cloud/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2 \
/srv/vm/pool1/fedora-43-cloud.raw
# Resize
qemu-img resize -f raw /srv/vm/pool1/fedora-43-cloud.raw 128G
# Install with cloud-init
virt-install \
--name "freeipa" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:bridge0 \
--graphics none \
--import --disk "path=/srv/vm/pool1/freeipa-boot.raw,bus=virtio" \
--cloud-init disable=yes,user-data="/srv/smb/ducoterra/images/cloud-init/fedora/cloud-init/user-data,meta-data=/srv/smb/ducoterra/images/cloud-init/fedora/cloud-init/meta-data"
# Reattach to console if needed
virsh console fedora-43-base-test
passwd ducoterra
hostnamectl hostname <hostname>
# Attaching an extra drive
fdisk /dev/vdb
> g
> n
> enter enter enter
> w
mkfs.ext4 /dev/vdb1
lsblk -fs # grab UUID
vim /etc/fstab
systemctl daemon-reload
mount -a --mkdir
```
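The fstab entry added in the `vim /etc/fstab` step might look like this (the UUID is a placeholder for the one from `lsblk -fs`, and the mount point is only an example):

```conf
UUID=<uuid-from-lsblk>  /srv/vm/data1  ext4  defaults,nofail  0 2
```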
## Retired Disks
Retired 12-19-2025
![alt text](image.png)
Retired 12-19-2025
![alt text](image-1.png)
### Sending emails
```bash
# s-nail is mailx
dnf install -y msmtp
vim /etc/msmtprc
```
Fill out the configuration like this:
```conf
# A system wide configuration file is optional.
# If it exists, it usually defines a default account.
# This allows msmtp to be used like /usr/sbin/sendmail.
account default
# The SMTP smarthost
host smtp-server.com
# Use TLS on port 465. On this port, TLS starts without STARTTLS.
port 465
tls on
tls_starttls off
# Construct envelope-from addresses of the form "user@oursite.example"
from driveripper@reeselink.com
# Do not allow programs to override this envelope-from address via -f
allow_from_override off
# Always set a From header that matches the envelope-from address
set_from_header on
# Syslog logging with facility LOG_MAIL instead of the default LOG_USER
syslog LOG_MAIL
# User info
auth on
user username
password password
```
Send an email like this:
```bash
# Basic message
printf "Subject: Test\n\nhello there username." | msmtp admin@ducoterra.net
# Or some btrfs stats
printf "Subject: Pool0 Stats\n\n$(btrfs device stats /btrfs/pool0)" | msmtp admin@ducoterra.net
# Or some more btrfs stats
printf "Subject: Pool0 Stats\n\n$(btrfs filesystem show)" | msmtp admin@ducoterra.net
# Or some smartmontools stats
printf "To: admin@ducoterra.net\nFrom: driveripper@reeselink.com\nSubject: Disk Stats\n\n$(smartctl -a /dev/sda | grep "SMART Attributes" -A 18)" | msmtp admin@ducoterra.net
```
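Note the header layout these `printf` calls rely on: each header on its own line separated by a single `\n`, with one blank line before the body. A sketch you can check locally by piping to `cat` instead of `msmtp` (the address is a placeholder):

```shell
# Headers separated by single newlines; the blank line starts the body
msg=$(printf "To: admin@example.com\nSubject: Disk Stats\n\ndisk stats body")
printf '%s\n' "$msg"
```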


@@ -0,0 +1,19 @@
#!/bin/bash
set -x
export EMAIL_FILE=/root/disk-stats-email.txt
export EMAIL_TO=admin@ducoterra.net
printf "To: ${EMAIL_TO}\nSubject: Driveripper Disk Stats\n\n" > ${EMAIL_FILE}
printf "Pool 0\n" >> ${EMAIL_FILE}
btrfs device stats /btrfs/pool0 >> ${EMAIL_FILE}
printf "\nPool 1\n" >> ${EMAIL_FILE}
btrfs device stats /btrfs/pool1 >> ${EMAIL_FILE}
printf "\nBackup 0\n" >> ${EMAIL_FILE}
btrfs device stats /btrfs/backup0 >> ${EMAIL_FILE}
cat ${EMAIL_FILE} | msmtp ${EMAIL_TO}


@@ -0,0 +1,19 @@
#!/bin/bash
set -x
export EMAIL_FILE=/root/filesystem-usage-email.txt
export EMAIL_TO=admin@ducoterra.net
printf "To: ${EMAIL_TO}\nSubject: Driveripper Filesystem Usage\n\n" > ${EMAIL_FILE}
printf "Pool 0\n" >> ${EMAIL_FILE}
btrfs filesystem usage /btrfs/pool0 >> ${EMAIL_FILE}
printf "\nPool 1\n" >> ${EMAIL_FILE}
btrfs filesystem usage /btrfs/pool1 >> ${EMAIL_FILE}
printf "\nBackup 0\n" >> ${EMAIL_FILE}
btrfs filesystem usage /btrfs/backup0 >> ${EMAIL_FILE}
cat ${EMAIL_FILE} | msmtp ${EMAIL_TO}

Binary file not shown (new image, 39 KiB).

Binary file not shown (new image, 38 KiB).

@@ -0,0 +1,67 @@
esphome:
  name: atom-s3-lite
  friendly_name: Atom S3 Lite

esp32:
  board: m5stack-atoms3
  framework:
    type: arduino

# Enable logging
logger:
  level: debug

# Enable Home Assistant API
api:
  encryption:
    key: !secret great_room_atom_echo_key

wifi:
  ssid: !secret wifi_ssid
  password: !secret wifi_password
  domain: .reeselink.com
  fast_connect: true
  enable_btm: true
  # Enable fallback hotspot (captive portal) in case wifi connection fails
  ap:
    ssid: "Great-Room-Atom-Echo"
    password: !secret hotspot_password

ota:
  - platform: esphome
    password: !secret ota_password

captive_portal:

external_components:
  - source:
      type: git
      url: https://github.com/chill-Division/M5Stack-ESPHome/
      ref: main
    components: sonic_i2c

i2c:
  sda: 2
  scl: 1
  scan: true
  id: bus_1

sensor:
  - platform: sonic_i2c
    i2c_id: bus_1
    address: 0x57
    name: "Ultrasonic Sensor 1"
    id: ultrasonic1
    unit_of_measurement: mm
    update_interval: 1s
    filters:
      - filter_out: nan
      - lambda: |-
          if (x == 0) {
            return {}; // This filters out the reading
          } else {
            return x; // This passes the reading through
          }
      - sliding_window_moving_average:
          window_size: 1
          send_every: 1


@@ -58,7 +58,7 @@ sudo usermod -a -G dialout ducoterra
Then "run" your config file
```bash
cd active/device_esphome
uv venv
uv pip install esphome
source .venv/bin/activate


@@ -40,6 +40,39 @@ ota:
captive_portal:
external_components:
  - source:
      type: git
      url: https://github.com/chill-Division/M5Stack-ESPHome/
      ref: main
    components: sonic_i2c

i2c:
  sda: 26
  scl: 32
  scan: true
  id: bus_1

sensor:
  - platform: sonic_i2c
    i2c_id: bus_1
    address: 0x57
    name: "Ultrasonic Sensor 1"
    id: ultrasonic1
    unit_of_measurement: mm
    update_interval: 5s
    filters:
      - filter_out: nan
      - lambda: |-
          if (x == 0) {
            return {}; // This filters out the reading
          } else {
            return x; // This passes the reading through
          }
      - sliding_window_moving_average:
          window_size: 10
          send_every: 20
button:
  - platform: factory_reset
    id: factory_reset_btn


@@ -1,5 +1,102 @@
# Framework Laptop 16
## Local AI
### Setup
#### Create the AI user
```bash
# Create your local ai user. This will be the user you launch podman processes from.
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p /home/ai/.config/containers/systemd/
```
#### Create the models dir
```bash
mkdir -p /home/ai/models/{text,image,video,embedding,tts,stt}
```
#### Install the Hugging Face CLI
<https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started>
```bash
# Install
curl -LsSf https://hf.co/cli/install.sh | bash
# Login
hf auth login
```
### Models
```bash
# nomic-embed-text-v2 (embed)
mkdir /home/ai/models/embedding/nomic-embed-text-v2
hf download --local-dir /home/ai/models/embedding/nomic-embed-text-v2 ggml-org/Nomic-Embed-Text-V2-GGUF
# qwen2.5-coder-3b-fim (completion)
mkdir /home/ai/models/text/qwen2.5-coder-3b-fim
hf download --local-dir /home/ai/models/text/qwen2.5-coder-3b-fim ggml-org/Qwen2.5-Coder-3B-Q8_0-GGUF
```
### Testing
```bash
# Embedding Server
podman run \
--rm \
--name llama-server-embed \
--device=/dev/kfd \
--device=/dev/dri \
-p 8010:8010 \
-v /home/ai/models:/models:z \
localhost/llama-cpp-vulkan:latest \
--port 8010 \
-ngl all \
-ub 2048 \
-b 2048 \
--ctx-size 2048 \
--embeddings \
--models-dir /models \
-m /models/embedding/nomic-embed-text-v2/nomic-embed-text-v2-moe-q8_0.gguf
# Completion Server
podman run \
--rm \
--name llama-server-completion \
-p 8011:8011 \
--device=/dev/kfd \
--device=/dev/dri \
-v /home/ai/models:/models:z \
localhost/llama-cpp-vulkan:latest \
--port 8011 \
-c 0 \
--perf \
--n-gpu-layers all \
--models-dir /models \
-m /models/text/qwen2.5-coder-3b-fim/qwen2.5-coder-3b-q8_0.gguf
```
### Quadlets
```bash
sudo install -C -o ai -g ai active/device_framework_16/quadlets/* /home/ai/.config/containers/systemd/
sudo machinectl shell ai@
systemctl --user daemon-reload
systemctl --user restart llama-completion.service
systemctl --user restart llama-embed.service
```
## Keyboard VIA
Access keyboard configuration at <https://keyboard.frame.work/>


@@ -0,0 +1,33 @@
[Unit]
Description=A llama.cpp server running Qwen2.5 Coder 1.5B Instruct
[Container]
# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest
# Downloaded models volume
Volume=/home/ai/models:/models:z
# Ports
PublishPort=8012:8012
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Server command
Exec=--port 8012 \
-c 0 \
--perf \
--n-gpu-layers all \
--models-dir /models \
-m /models/text/qwen2.5-coder-1.5b-instruct/qwen2.5-coder-1.5b-instruct-q8_0.gguf
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target


@@ -0,0 +1,33 @@
[Unit]
Description=A llama.cpp server running Qwen2.5 Coder 3B (completion)
[Container]
# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest
# Downloaded models volume
Volume=/home/ai/models:/models:z
# Ports
PublishPort=8011:8011
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Server command
Exec=--port 8011 \
-c 0 \
--perf \
--n-gpu-layers all \
--models-dir /models \
-m /models/text/qwen2.5-coder-3b-fim/qwen2.5-coder-3b-q8_0.gguf
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target


@@ -0,0 +1,35 @@
[Unit]
Description=A llama.cpp server running Nomic Embed Text V2 (embedding)
[Container]
# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest
# Downloaded models volume
Volume=/home/ai/models:/models:z
# Ports
PublishPort=8010:8010
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Server command
Exec=--port 8010 \
-m /models/embedding/nomic-embed-text-v2/nomic-embed-text-v2-moe-q8_0.gguf \
-ngl all \
-ub 2048 \
-b 2048 \
--ctx-size 2048 \
--embeddings \
--models-dir /models
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target


@@ -4,21 +4,38 @@
- [BIOS](#bios)
- [References](#references)
- [Notes](#notes)
  - [Firmware and Kernel](#firmware-and-kernel)
  - [Kernel args](#kernel-args)
  - [Volume Locations](#volume-locations)
- [Setup](#setup)
  - [Create the AI user](#create-the-ai-user)
  - [Helper aliases](#helper-aliases)
  - [Create the models dir](#create-the-models-dir)
  - [Install the Hugging Face CLI](#install-the-hugging-face-cli)
  - [Samba Model Storage](#samba-model-storage)
  - [Download models](#download-models)
    - [Text models](#text-models)
      - [GPT-OSS](#gpt-oss)
      - [Mistral](#mistral)
      - [Nemotron](#nemotron)
      - [Qwen](#qwen)
      - [GLM](#glm)
      - [Llama](#llama)
      - [Gemma](#gemma)
      - [Dolphin (Abliterated)](#dolphin-abliterated)
    - [Image models](#image-models)
      - [Z-Image](#z-image)
      - [Flux](#flux)
      - [Qwen Image 2512](#qwen-image-2512)
    - [Embedding Models](#embedding-models)
      - [Nomic](#nomic)
  - [llama.cpp](#llamacpp)
  - [stable-diffusion.cpp](#stable-diffusioncpp)
  - [open-webui](#open-webui)
  - [VLLM](#vllm)
  - [Install the whole thing with quadlets (TM)](#install-the-whole-thing-with-quadlets-tm)
  - [Install the update script](#install-the-update-script)
- [Voice Cloning](#voice-cloning)
## BIOS
@@ -32,6 +49,22 @@
## Notes
### Firmware and Kernel
See: <https://github.com/kyuz0/amd-strix-halo-toolboxes?tab=readme-ov-file#-stable-configuration>
Current stable is kernel 6.18.3-200 with linux-firmware 20251111
### Kernel args
Edit `/etc/default/grub` and append the following to `GRUB_CMDLINE_LINUX`:
```conf
amd_iommu=off amdgpu.gttsize=126976 ttm.pages_limit=32505856
```
Then `grub2-mkconfig -o /boot/grub2/grub.cfg` and `reboot`.
### Volume Locations
`~/.local/share/containers/storage/volumes/`
@@ -45,7 +78,8 @@
useradd -m ai
loginctl enable-linger ai
su -l ai
mkdir -p /home/ai/.config/containers/systemd/
mkdir -p /home/ai/.ssh
```

Models are big. You'll want some tools to help find large files quickly when space runs out.
@@ -75,7 +109,7 @@ alias sd-edit-logs='journalctl --user -xeu stable-diffusion-edit-server'
### Create the models dir
```bash
mkdir -p /home/ai/models/{text,image,video,embedding,tts,stt}
```

### Install the Hugging Face CLI
@@ -90,12 +124,34 @@ curl -LsSf https://hf.co/cli/install.sh | bash
hf auth login
```
### Samba Model Storage
I recommend adding network storage for keeping models offloaded. This mounts a samba share at `/srv/models`.
```bash
# Add this to /etc/fstab
//driveripper.reeselink.com/smb_models /srv/models cifs _netdev,nofail,uid=1001,gid=1001,credentials=/etc/samba/credentials 0 0
# Then mount
systemctl daemon-reload
mount -a --mkdir
```
Here are some sync commands that I use to keep the samba share in sync with the home directory:
```bash
# Sync models from home dir to the samba share
rsync -av --progress /home/ai/models/ /srv/models/
```
### Download models
#### Text models
<https://huggingface.co/ggml-org/collections>
##### GPT-OSS
```bash
# gpt-oss-120b
mkdir /home/ai/models/text/gpt-oss-120b
@@ -104,7 +160,11 @@ hf download --local-dir /home/ai/models/text/gpt-oss-120b ggml-org/gpt-oss-120b-
# gpt-oss-20b
mkdir /home/ai/models/text/gpt-oss-20b
hf download --local-dir /home/ai/models/text/gpt-oss-20b ggml-org/gpt-oss-20b-GGUF
```
##### Mistral
```bash
# devstral-2-123b
mkdir /home/ai/models/text/devstral-2-123b
hf download --local-dir /home/ai/models/text/devstral-2-123b unsloth/Devstral-2-123B-Instruct-2512-GGUF Q4_K_M/Devstral-2-123B-Instruct-2512-Q4_K_M-00001-of-00002.gguf
@@ -121,10 +181,22 @@ hf download --local-dir /home/ai/models/text/ministral-3-14b ggml-org/Ministral-
# ministral-3-3b-instruct
mkdir /home/ai/models/text/ministral-3-3b-instruct
hf download --local-dir /home/ai/models/text/ministral-3-3b-instruct ggml-org/Ministral-3-3B-Instruct-2512-GGUF
```
##### Nemotron
```bash
# nemotron-nano-30b
mkdir /home/ai/models/text/nemotron-nano-30b
hf download --local-dir /home/ai/models/text/nemotron-nano-30b ggml-org/Nemotron-Nano-3-30B-A3B-GGUF Nemotron-Nano-3-30B-A3B-Q4_K_M.gguf
```
##### Qwen
```bash
# qwen3-30b-a3b-thinking
mkdir /home/ai/models/text/qwen3-30b-a3b-thinking
hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-thinking ggml-org/Qwen3-30B-A3B-GGUF Qwen3-30B-A3B-Q4_K_M.gguf
# qwen3-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-30b-a3b-instruct
@@ -133,17 +205,98 @@ hf download --local-dir /home/ai/models/text/qwen3-30b-a3b-instruct ggml-org/Qwe
# qwen3-coder-30b-a3b-instruct
mkdir /home/ai/models/text/qwen3-coder-30b-a3b-instruct
hf download --local-dir /home/ai/models/text/qwen3-coder-30b-a3b-instruct ggml-org/Qwen3-Coder-30B-A3B-Instruct-Q8_0-GGUF
# qwen3-coder-next
mkdir /home/ai/models/text/qwen3-coder-next
hf download --local-dir /home/ai/models/text/qwen3-coder-next unsloth/Qwen3-Coder-Next-GGUF --include "Q5_K_M/*.gguf"
# qwen3-vl-30b-thinking
mkdir /home/ai/models/text/qwen3-vl-30b-thinking
hf download --local-dir /home/ai/models/text/qwen3-vl-30b-thinking unsloth/Qwen3-VL-30B-A3B-Thinking-1M-GGUF Qwen3-VL-30B-A3B-Thinking-1M-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/qwen3-vl-30b-thinking unsloth/Qwen3-VL-30B-A3B-Thinking-1M-GGUF mmproj-F16.gguf
# qwen3-vl-8b-instruct
mkdir /home/ai/models/text/qwen3-vl-8b-instruct
hf download --local-dir /home/ai/models/text/qwen3-vl-8b-instruct Qwen/Qwen3-VL-8B-Instruct-GGUF Qwen3VL-8B-Instruct-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/qwen3-vl-8b-instruct Qwen/Qwen3-VL-8B-Instruct-GGUF mmproj-Qwen3VL-8B-Instruct-Q8_0.gguf
# qwen3-4b-2507-abliterated
mkdir /home/ai/models/text/qwen3-4b-2507-abliterated
hf download --local-dir /home/ai/models/text/qwen3-4b-2507-abliterated prithivMLmods/Qwen3-4B-2507-abliterated-GGUF Qwen3-4B-Thinking-2507-abliterated-GGUF/Qwen3-4B-Thinking-2507-abliterated.Q4_K_M.gguf
```
##### GLM
```bash
# glm-4.7-flash-30b
mkdir /home/ai/models/text/glm-4.7-flash-30b
hf download --local-dir /home/ai/models/text/glm-4.7-flash-30b unsloth/GLM-4.7-Flash-GGUF GLM-4.7-Flash-Q4_K_M.gguf
```
##### Llama
```bash
# llama4-scout
mkdir /home/ai/models/text/llama4-scout
# Remember to move the gguf files into the llama4-scout folder, otherwise they won't be picked up
hf download --local-dir /home/ai/models/text/llama4-scout unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF --include "Q4_K_M/*.gguf"
hf download --local-dir /home/ai/models/text/llama4-scout unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF mmproj-F16.gguf
```
##### Gemma
```bash
# Note "it" vs "pt" suffixes. "it" is instruction following, "pt" is the base model (not as good for out-of-the-box use)
# gemma-3-27b-it
mkdir /home/ai/models/text/gemma-3-27b-it
hf download --local-dir /home/ai/models/text/gemma-3-27b-it unsloth/gemma-3-27b-it-GGUF gemma-3-27b-it-Q4_K_M.gguf
hf download --local-dir /home/ai/models/text/gemma-3-27b-it unsloth/gemma-3-27b-it-GGUF mmproj-F16.gguf
```
##### Dolphin (Abliterated)
```bash
# dolphin-x1-8b
mkdir /home/ai/models/text/dolphin-x1-8b
hf download --local-dir /home/ai/models/text/dolphin-x1-8b dphn/Dolphin-X1-8B-GGUF Dolphin-X1-8B-Q4_K_M.gguf
# dolphin-mistral-24b-venice
mkdir /home/ai/models/text/dolphin-mistral-24b-venice
hf download --local-dir /home/ai/models/text/dolphin-mistral-24b-venice bartowski/cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-GGUF cognitivecomputations_Dolphin-Mistral-24B-Venice-Edition-Q4_K_M.gguf
```
#### Image models
##### Z-Image
```bash
# z-turbo
# Fastest image generation in 8 steps. Great at text and prompt following.
# Lacks variety.
mkdir /home/ai/models/image/z-turbo
hf download --local-dir /home/ai/models/image/z-turbo leejet/Z-Image-Turbo-GGUF z_image_turbo-Q4_K.gguf
hf download --local-dir /home/ai/models/image/z-turbo black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-turbo unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf
# z-image
# Full version of z-turbo. Needs 28-50 steps.
# Note, image quality not as good as z-turbo
mkdir /home/ai/models/image/z-image
hf download --local-dir /home/ai/models/image/z-image unsloth/Z-Image-GGUF z-image-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/z-image black-forest-labs/FLUX.1-schnell ae.safetensors
hf download --local-dir /home/ai/models/image/z-image unsloth/Qwen3-4B-Instruct-2507-GGUF Qwen3-4B-Instruct-2507-Q4_K_M.gguf
```
##### Flux
```bash
# flux2-klein
# Capable of generating images in 4 steps
mkdir /home/ai/models/image/flux2-klein
hf download --local-dir /home/ai/models/image/flux2-klein leejet/FLUX.2-klein-9B-GGUF flux-2-klein-9b-Q4_0.gguf
hf download --local-dir /home/ai/models/image/flux2-klein black-forest-labs/FLUX.2-dev ae.safetensors
hf download --local-dir /home/ai/models/image/flux2-klein unsloth/Qwen3-8B-GGUF Qwen3-8B-Q4_K_M.gguf
# flux-1-kontext
mkdir /home/ai/models/image/flux-1-kontext
hf download --local-dir /home/ai/models/image/flux-1-kontext QuantStack/FLUX.1-Kontext-dev-GGUF flux1-kontext-dev-Q4_K_M.gguf
hf download --local-dir /home/ai/models/image/flux-1-kontext comfyanonymous/flux_text_encoders t5xxl_fp16.safetensors
```
##### Qwen Image 2512
```bash
```
#### Embedding Models
##### Nomic
```bash
# nomic-embed-text-v2
mkdir /home/ai/models/embedding/nomic-embed-text-v2
hf download --local-dir /home/ai/models/embedding/nomic-embed-text-v2 ggml-org/Nomic-Embed-Text-V2-GGUF
```
## llama.cpp
```bash
export BUILD_TAG=$(date +"%Y-%m-%d-%H-%M-%S")
# Vulkan
podman build -f .devops/vulkan.Dockerfile -t llama-cpp-vulkan:${BUILD_TAG} -t llama-cpp-vulkan:latest .
# ROCM
podman build -f .devops/rocm.Dockerfile -t llama-cpp-rocm:${BUILD_TAG} -t llama-cpp-rocm:latest .
# Run llama server (Available on port 8000)
# Add `--n-cpu-moe 32` to gpt-oss-120b to keep a minimal number of experts on the GPU
podman run \
--rm \
--name llama-server-demo \
--device=/dev/kfd \
--device=/dev/dri \
--pod systemd-ai-internal \
-v /home/ai/models/text:/models:z \
localhost/llama-cpp-vulkan:latest \
--port 8000 \
-c 32000 \
--perf \
--n-gpu-layers all \
--jinja \
--models-max 1 \
--models-dir /models
# ROCM
podman run \
--rm \
--name llama-server-demo \
--device=/dev/kfd \
--device=/dev/dri \
--pod systemd-ai-internal \
-v /home/ai/models/text:/models:z \
localhost/llama-cpp-rocm:latest \
--port 8000 \
-c 0 \
--perf \
--n-gpu-layers all \
--jinja \
localhost/stable-diffusion-cpp:latest \
--llm /models/image/z-turbo/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
-v \
--seed -1 \
--steps 8 \
--vae-conv-direct \
-H 1024 \
-W 1024 \
-o /output/output.png \
-p "A photorealistic dragon"
# z-image
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/z-image/z-image-Q4_K_M.gguf \
--vae /models/image/z-image/ae.safetensors \
--llm /models/image/z-image/Qwen3-4B-Instruct-2507-Q4_K_M.gguf \
--cfg-scale 1.0 \
-v \
--seed -1 \
--steps 28 \
--vae-conv-direct \
-H 1024 \
-W 1024 \
-o /output/output.png \
-p "A photorealistic dragon"
# flux2-klein
podman run --rm \
-v /home/ai/models:/models:z \
-v /home/ai/output:/output:z \
--device /dev/kfd \
--device /dev/dri \
localhost/stable-diffusion-cpp:latest \
--diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \
--vae /models/image/flux2-klein/ae.safetensors \
--llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \
--cfg-scale 1.0 \
--steps 4 \
-v \
--seed -1 \
--vae-conv-direct \
-H 1024 \
-W 1024 \
-o /output/output.png \
-p "A photorealistic dragon"
# Edit with flux2 klein
.\bin\Release\sd-cli.exe \
--diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \
--vae /models/image/flux2-klein/ae.safetensors \
--llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \
--cfg-scale 1.0 \
--sampling-method euler \
-v \
--vae-conv-direct \
--steps 4 \
-r .\kontext_input.png \
-p "change 'flux.cpp' to 'klein.cpp'"
# Edit with flux kontext
podman run --rm \
-v /home/ai/models:/models:z \
podman run \
ghcr.io/open-webui/open-webui:main
```
Use the following connections:
| Service | Endpoint |
| -------------------- | ----------------------------------------- |
| llama.cpp | <http://host.containers.internal:8000> |
| stable-diffusion.cpp | <http://host.containers.internal:1234/v1> |
## VLLM
```bash
mkdir -p /home/ai/vllm/.cache/huggingface
podman run --rm \
--device /dev/kfd \
--device /dev/dri \
--group-add=video \
--cap-add=SYS_PTRACE \
--security-opt seccomp=unconfined \
--env "HF_TOKEN=$HF_TOKEN" \
--ipc=host \
-v /home/ai/vllm/.cache/huggingface:/root/.cache/huggingface:z \
-p 8002:8000 \
docker.io/vllm/vllm-openai-rocm:latest \
--model Qwen/Qwen3-0.6B
```
## Install the whole thing with quadlets (TM)
```bash
scp -r active/device_framework_desktop/quadlets/* deskwork-ai:.config/containers/systemd/
ssh deskwork-ai
systemctl --user daemon-reload
systemctl --user restart ai-internal-pod.service
```
Note, all services will be available at `host.containers.internal`. So llama.cpp
will be up at `http://host.containers.internal:8000`.
```bash
# 1. Builds the latest llama.cpp and stable-diffusion.cpp
# 2. Pulls the latest open-webui
# 3. Restarts all services
scp active/device_framework_desktop/update-script.sh deskwork-ai:
ssh deskwork-ai
chmod +x update-script.sh
./update-script.sh
```
## Voice Cloning
View File
[Pod]
# ai-external is the primary network
Network=ai-external.network
Network=ai-internal.network
# open-webui
PublishPort=8080:8080/tcp
View File
[Network]
IPv6=true
Internal=true
View File
[Pod]
Network=ai-internal.network
# llama.cpp
PublishPort=8000:8000/tcp
# stable-diffusion.cpp gen
PublishPort=1234:1234/tcp
# stable-diffusion.cpp edit
PublishPort=1235:1235/tcp
View File
Description=A Llama CPP Server Running GPT OSS 120b
[Container]
# Shared AI internal pod
Pod=ai-internal.pod
# Image is built locally via podman build
Image=localhost/llama-cpp-vulkan:latest
AddDevice=/dev/dri
# Server command
Exec=--port 8000 \
-c 48000 \
--perf \
--n-gpu-layers all \
--jinja \
--models-dir /models
# Health Check
HealthCmd=CMD-SHELL curl --fail http://127.0.0.1:8000/props || exit 1
HealthInterval=10s
HealthRetries=3
HealthStartPeriod=10s
View File
Description=An Open Webui Frontend for Local AI Services
[Container]
# Shared AI external pod
Pod=ai-external.pod
# Open Webui base image
Image=ghcr.io/open-webui/open-webui:main
View File
Description=A Stable Diffusion CPP Server for Editing Images
[Container]
# Shared AI internal pod
Pod=ai-internal.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
Entrypoint=/sd-server
# Server args
Exec=-l 0.0.0.0 \
--listen-port 1235 \
--diffusion-model /models/image/flux2-klein/flux-2-klein-9b-Q4_0.gguf \
--vae /models/image/flux2-klein/ae.safetensors \
--llm /models/image/flux2-klein/Qwen3-8B-Q4_K_M.gguf \
--cfg-scale 1.0 \
--sampling-method euler \
-v \
--steps 4 \
--vae-conv-direct
[Service]
Restart=always
View File
Description=A Stable Diffusion CPP Server for Generating Images
[Container]
# Shared AI internal pod
Pod=ai-internal.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
Exec=-l 0.0.0.0 \
--diffusion-model /models/image/z-turbo/z_image_turbo-Q4_K.gguf \
--vae /models/image/z-turbo/ae.safetensors \
--llm /models/image/z-turbo/qwen_3_4b.safetensors \
--cfg-scale 1.0 \
--vae-conv-direct \
-v \
View File
[Unit]
Description=A Stable Diffusion CPP Server for Editing Images
[Container]
# Shared AI pod
Pod=ai.pod
# Vulkan image for AMD GPU
Image=localhost/stable-diffusion-cpp:latest
# Shared models directory
Volume=/home/ai/models:/models:z
# GPU Device
AddDevice=/dev/kfd
AddDevice=/dev/dri
# Override entrypoint to use server
Entrypoint=/sd-server
# Server args
Exec=-l 0.0.0.0 \
--listen-port 1235 \
--diffusion-model /models/image/flux-1-kontext/flux1-kontext-dev-Q4_K_M.gguf \
--vae /models/image/flux-1-kontext/ae.safetensors \
--clip_l /models/image/flux-1-kontext/clip_l.safetensors \
--t5xxl /models/image/flux-1-kontext/t5xxl_fp16.safetensors \
--cfg-scale 1.0 \
--sampling-method euler \
--vae-conv-direct \
--seed -1 \
--steps 28 \
-v
[Service]
Restart=always
# Extend Timeout to allow time to pull the image
TimeoutStartSec=900
[Install]
# Start by default on boot
WantedBy=multi-user.target default.target
View File
git pull
git submodule update --init --recursive
podman build -f Dockerfile.vulkan -t stable-diffusion-cpp:${BUILD_TAG} -t stable-diffusion-cpp:latest .
systemctl --user restart ai-internal-pod
podman image pull ghcr.io/open-webui/open-webui:main
systemctl --user restart ai-external-pod
View File
- [Philips Hue Switches](#philips-hue-switches)
- [Datetimes](#datetimes)
- [LG TV Switch](#lg-tv-switch)
- [Raspberry Pi Docker](#raspberry-pi-docker)
- [Extended OpenAI Conversation](#extended-openai-conversation)
## Certificates
```yaml
target:
  device_id: "{{device_id('media_player.living_room_lg_tv')}}"
```
## Raspberry Pi Docker
```bash
python3 -m venv venv
docker run -d \
--name homeassistant \
--privileged \
--restart=unless-stopped \
-e TZ=America/New_York \
-v /root/config:/config \
-v /run/dbus:/run/dbus:ro \
--network=host \
ghcr.io/home-assistant/home-assistant:stable
sudo apt update && sudo apt install ffmpeg
source venv/bin/activate
pip install openwakeword
pip install -U openai-whisper
pip install piper-tts
```
## Extended OpenAI Conversation
Setting up a new llama.cpp agent:
| Field | Value |
| ------------------- | ------------------------------------ |
| Name | llama-cpp.reeselink.com |
| API Key | placeholder |
| Base URL | `https://llama-cpp.reeselink.com/v1` |
| API Version | v1 |
| Organization | blank |
| Skip Authentication | Yes |
| API Provider | OpenAI |
Model Name: `ggml-org/gpt-oss-120b-GGUF`
View File
- [Fedora Server](#fedora-server)
- [Installation](#installation)
- [Setup SSH](#setup-ssh)
- [DNF](#dnf)
- [Power Profiles with Tuned](#power-profiles-with-tuned)
- [Fail2Ban](#fail2ban)
- [BTRFS Parent Volumes](#btrfs-parent-volumes)
- [BTRFS Snapshots](#btrfs-snapshots)
- [Chroot into a mounted disk](#chroot-into-a-mounted-disk)
- [Resize Last Partition to Fill Available Space](#resize-last-partition-to-fill-available-space)
- [LUKS performance](#luks-performance)
- [Set up firewall](#set-up-firewall)
- [Dual Boot with Fingerprint Scanner Issues](#dual-boot-with-fingerprint-scanner-issues)
- [Revert Kernel (if needed)](#revert-kernel-if-needed)
- [Reverting linux-firmware](#reverting-linux-firmware)
- [Make DNF Fast](#make-dnf-fast)
- [Install Useful Packages](#install-useful-packages)
- [Set Hostname](#set-hostname)
- [Install updates](#install-updates)
- [Extensions](#extensions)
- [Gnome Tweaks](#gnome-tweaks)
- [Flatpack](#flatpack)
- [Snap](#snap)
- [AppImage Launcher](#appimage-launcher)
- [Backups](#backups-1)
- [Create Encrypted Drive](#create-encrypted-drive)
- [Backup Disks](#backup-disks)
- [Disk Health](#disk-health)
- [Create BTRBK Config](#create-btrbk-config)
- [Create Systemd Timer](#create-systemd-timer)
- [Test, Start and Enable service](#test-start-and-enable-service)
- [Restore](#restore)
- [Firewall CMD](#firewall-cmd)
- [Bluetooth](#bluetooth)
- [Airpods](#airpods)
- [ZRAM](#zram)
- [Automatic Disk Decryption with TPM2](#automatic-disk-decryption-with-tpm2)
- [Firefox GPU Rendering](#firefox-gpu-rendering)
- [Gnome Software Updates (packagekitd and software)](#gnome-software-updates-packagekitd-and-software)
- [Turn Off Fingerprint When Laptop Lid Closed](#turn-off-fingerprint-when-laptop-lid-closed)
- [Power Button Behavior](#power-button-behavior)
- [Discord](#discord)
- [Discord sharing not working](#discord-sharing-not-working)
- [Minecraft](#minecraft)
- [Sound Devices](#sound-devices)
- [Install ffmpegthumbnailer, remove totem](#install-ffmpegthumbnailer-remove-totem)
- [Add compatibility for HEIC to mogrify](#add-compatibility-for-heic-to-mogrify)
<https://docs.fedoraproject.org/en-US/fedora-server/installation/postinstallation-tasks/#_manage_system_updates>
5. Take note of the ipv4 and ipv6 address. Update any DNS records at this time.
6. Install and reboot
## Setup SSH
See [README](/README.md#ssh-setup)
```bash
echo 'fastestmirror=1' >> /etc/dnf/dnf.conf
dnf clean all
dnf update --refresh -y
# libdnf5 is required for ansible to work
dnf install -y git glances tmux vim python3-libdnf5 borgbackup tpm2-tools
```
## Power Profiles with Tuned
1. `dnf install tuned`
2. `systemctl enable --now tuned`
3. `tuned-adm profile virtual-host`
## Fail2Ban
On the server:
We'll be using snapper, a tool for automating and controlling snapshot behavior.
```bash
dnf install -y snapper dnf-plugin-snapper
# Allow selinux management
semanage permissive -a snapperd_t
# Note, if you mess something up you can run snapper -c root delete-config to delete
# System configs are stored in /etc/sysconfig/snapper as well as /etc/snapper
snapper -c boot create-config /boot
snapper -c root create-config /
snapper -c home create-config /home
# Enable automatic snapshots
systemctl enable --now snapper-timeline.timer
```
Which will tell you how fast you can theoretically write/read to encrypted drives.
The default encryption used by most modern operating systems is AES-XTS.
You can see your system's cipher and key with `cryptsetup luksDump /dev/nvme0n1p1 | grep -i cipher`
## Set up firewall
sudo vim /etc/firewalld/firewalld.conf
```conf
LogDenied=all
```
```bash
# Restart and reload the firewalld service
sudo systemctl restart firewalld.service
# List all available firewalld zones and what they do
sudo firewall-cmd --list-all-zones
# Set the default firewall zone to public
sudo firewall-cmd --set-default-zone=public
# Open a port temporarily
sudo firewall-cmd --add-port=25565/tcp
```
Monitor blocked traffic
```bash
sudo journalctl -f | grep -i REJECT
sudo journalctl -f | grep -i DROP
```
## Dual Boot with Fingerprint Scanner Issues
<https://community.frame.work/t/fingerprint-scanner-compatibility-with-linux-ubuntu-fedora-etc/1501/206>
The script below will clear the existing fingerprints from the internal
storage.
```python
#!/usr/bin/python3
import gi
gi.require_version('FPrint', '2.0')
from gi.repository import FPrint

ctx = FPrint.Context()
for dev in ctx.get_devices():
    print(dev)
    print(dev.get_driver())
    print(dev.props.device_id)
    dev.open_sync()
    dev.clear_storage_sync()
    print("All prints deleted.")
    dev.close_sync()
```
## Revert Kernel (if needed)
With koji
```bash
# Install koji (if you don't have it)
sudo dnf install -y koji
# Search for the desired kernel version
koji search build kernel-6.18.3*
# Create a temporary directory to store the downloaded kernel packages
sudo -i
mkdir /root/kernel-download-6.18.3
cd /root/kernel-download-6.18.3
# Download the kernel packages
koji download-build --arch=x86_64 kernel-6.18.3-200.fc43
rm -f *debug*.rpm
rm -f *uki*.rpm
# Install all downloaded rpms
dnf install ./*.rpm
reboot
```
If you need to revert to previously installed kernels:
```bash
# Find the kernels you have installed
rpm -qa kernel
# List available kernels
ls /boot | grep vmlinuz
# Revert to a previous kernel
grubby --set-default /boot/vmlinuz-6.18.3...
```
This might throw errors when updating to newer kernels (file not found).
To restore back to the original kernel:
```bash
# If this directory exists, it's the problem
ls -d "/boot/efi/$(cat /etc/machine-id)"
# Move that directory out of the way
mv "/boot/efi/$(cat /etc/machine-id)" "/boot/efi/$(cat /etc/machine-id)_disabled"
# Reinstall the new kernel
dnf reinstall 'kernel*-0:6.18.8*'
```
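When several kernels are installed, `sort -V` is a handy way to order the package names from `rpm -qa kernel` (the sample names below are illustrative):

```shell
# Version-sort sample kernel package names; the newest sorts last
printf '%s\n' \
  kernel-6.18.3-200.fc43 \
  kernel-6.17.9-300.fc43 \
  kernel-6.18.8-200.fc43 | sort -V | tail -n1
# → kernel-6.18.8-200.fc43
```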
## Reverting linux-firmware
```bash
sudo -i
mkdir -p /root/linux-firmware-downgrade
cd /root/linux-firmware-downgrade
wget -r -np -nd -A '*.rpm' https://kojipkgs.fedoraproject.org/packages/linux-firmware/20251111/1.fc43/noarch/
dnf install ./*.rpm
dracut -f --kver 6.18.3...
reboot
```
## Make DNF Fast
```bash
echo 'fastestmirror=1' | sudo tee -a /etc/dnf/dnf.conf
echo 'max_parallel_downloads=10' | sudo tee -a /etc/dnf/dnf.conf
echo 'deltarpm=true' | sudo tee -a /etc/dnf/dnf.conf
```
/etc/dnf/dnf.conf
```conf
[main]
gpgcheck=1
installonly_limit=3
clean_requirements_on_remove=True
best=False
skip_if_unavailable=True
fastestmirror=1
max_parallel_downloads=10
deltarpm=true
```
## Install Useful Packages
```bash
sudo dnf install vim
```
## Set Hostname
```bash
hostnamectl set-hostname ducolaptop
```
## Install updates
```bash
sudo dnf upgrade --refresh
sudo dnf check
sudo dnf autoremove
sudo fwupdmgr get-devices
sudo fwupdmgr refresh --force
sudo fwupdmgr get-updates
sudo fwupdmgr update
sudo reboot now
```
## Extensions
```bash
sudo dnf install -y gnome-extensions-app gnome-tweaks
sudo dnf install -y gnome-shell-extension-appindicator
```
1. Another Window Session Manager by 5q0Fw
Restores windows on shutdown/reboot. Can be configured to automatically
save the last state before restart. Pair this with "restore session" in
firefox/chrome and you've got yourself a really good mac hibernate
equivalent.
2. Dash to Dock by michele_g
Make the dock behave like macos. Hide when it would cover a window. Show
when the mouse hovers over the bottom of the screen. Add some sane default
shortcuts. Etc.
3. Tactile by lundal
Power-user tiling! Behaves like Windows Power Toys FancyZones.
4. Vitals by corecoding
Adds quick-glance stats about your system to the menu bar. Use to monitor
CPU usage, memory availability, network speed, battery wattage, etc.
## Gnome Tweaks
1. Fonts -> Monospace Text -> Fira Code Regular
2. Keyboard & Mouse -> Acceleration Profile -> Flat
3. Keyboard & Mouse -> Mouse Click Emulation -> Fingers
4. Top Bar -> Activities Overview Hot Corner -> Off
5. Top Bar -> Battery Percentage -> On
6. Top Bar -> Clock -> Weekday -> On
7. Top Bar -> Clock -> Seconds -> On
8. Windows -> Center New Windows -> On
## Flatpack
```bash
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak update
```
## Snap
```bash
sudo dnf install -y snapd
sudo ln -s /var/lib/snapd/snap /snap # for classic snap support
ln -s /var/lib/snapd/desktop/applications ~/.local/share/applications/snap # make apps show up in gnome
sudo reboot now
```
## AppImage Launcher
Download RPM from
<https://github.com/TheAssassin/AppImageLauncher/releases/tag/v2.2.0>
## Backups
### Create Encrypted Drive
```bash
# Create an encrypted drive
sudo cryptsetup luksFormat /dev/sda1
# LUKS disk encryption can use up to 8 key slots to store passwords. We can use these keys to auto mount the LUKS device.
# cryptsetup luksDump /dev/sda1
# Create a lukskeys directory
mkdir -p /home/ducoterra/.lukskeys
# Generate key
dd if=/dev/random bs=32 count=1 of=/home/ducoterra/.lukskeys/btr_backup
# Restrict the key file's permissions
chmod 600 /home/ducoterra/.lukskeys/btr_backup
# Add the key to a LUKS key slot
sudo cryptsetup luksAddKey /dev/sda1 /home/ducoterra/.lukskeys/btr_backup
# Get UUID of disk with
sudo blkid /dev/sda1
# Add key to crypttab
echo 'btr_backup UUID=1d7ce570-e695-47a0-9dda-5f14b5b20e21 /home/ducoterra/.lukskeys/btr_backup luks' | sudo tee -a /etc/crypttab
# Create read-only backup mount point
sudo btrfs sub create /mnt/btr_backup
sudo btrfs property set /mnt/btr_backup ro true
# Add to fstab
echo '/dev/mapper/btr_backup /mnt/btr_backup btrfs x-systemd.device-timeout=0,x-gvfs-show,x-gvfs-name=btr_backup,ssd,nofail,noatime,discard=async,compress=zstd 0 0' | sudo tee -a /etc/fstab
# mount
sudo cryptsetup luksOpen /dev/disk/by-uuid/1d7ce570-e695-47a0-9dda-5f14b5b20e21 btr_backup --key-file=/home/ducoterra/.lukskeys/btr_backup
# close (or fix issues)
sudo cryptsetup luksClose btr_backup
```
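For reference, the `/etc/crypttab` entry appended above has the field order name, device, key file, options; a sketch that assembles it from the example UUID:

```shell
# crypttab fields: mapper name, UUID= device, key file, options
uuid="1d7ce570-e695-47a0-9dda-5f14b5b20e21"
keyfile="/home/ducoterra/.lukskeys/btr_backup"
printf '%s UUID=%s %s luks\n' btr_backup "$uuid" "$keyfile"
```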
### Backup Disks
Backup disks will respect the following naming convention:
brand_size_purpose_year_month
So for a backup drive you would create:
`wd_4tb_backup_2023_01`
Or for an archive drive:
`samsung_1tb_archive_2023_01`
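The naming convention can be scripted so the year/month part is always current (the variable values below are just examples):

```shell
# brand_size_purpose_year_month, with the date filled in automatically
brand=wd size=4tb purpose=backup
echo "${brand}_${size}_${purpose}_$(date +%Y_%m)"
```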
#### Disk Health
`smartctl -a /dev/sda`
### Create BTRBK Config
`sudo vim /etc/btrbk/btrbk.conf`
```conf
snapshot_create ondemand
snapshot_preserve_min 2d
snapshot_preserve 14d
snapshot_dir snapshots
target_preserve_min no
target_preserve 20d 10w *m
volume /mnt/btr_pool
target /mnt/btr_backup
subvolume root
subvolume home
```
### Create Systemd Timer
`sudo vim /etc/systemd/system/btrbk.service`
```conf
[Unit]
Description=Runs btrbk with config file at /etc/btrbk/btrbk.conf
[Service]
ExecStart=btrbk -c /etc/btrbk/btrbk.conf -v run
```
`sudo vim /etc/systemd/system/btrbk.timer`
```conf
[Unit]
Description=Run btrbk every hour
[Timer]
OnCalendar=hourly
AccuracySec=10min
Persistent=true
Unit=btrbk.service
[Install]
WantedBy=timers.target
```
### Test, Start and Enable service
Test your service:
```bash
sudo btrbk -c /etc/btrbk/btrbk.conf -v run
```
Enable your service:
```bash
sudo systemctl start btrbk.timer
sudo systemctl enable btrbk.timer
```
### Restore
In the event you need to restore your system from a disaster do the following:
1. Reinstall fedora via a live image
2. After install, disk should be mounted at /mnt/sysimage
3. Copy the new fstab and crypttab to somewhere safe
4. rsync -av [etc, home, opt, root, usr, var]
5. `mount /dev/Y /mnt/sysimage/boot`
6. `mount /dev/Z /mnt/sysimage/boot/efi`
7. `mount --bind /dev /mnt/sysimage/dev`
8. `mount --bind /proc /mnt/sysimage/proc`
9. `mount --bind /sys /mnt/sysimage/sys`
10. `chroot /mnt/sysimage`
11. Edit fstab and crypttab so they match the new partitions
12. Update /etc/default/grub to match the new luks uuid
13. grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
14. reboot
## Firewall CMD
1. Enable firewall
```bash
systemctl start firewalld
systemctl enable firewalld
```
2. Set default behavior to drop everything
```bash
firewall-cmd --set-default-zone=drop
firewall-cmd --reload
```
## Bluetooth
### Airpods
Edit `/etc/bluetooth/main.conf` and set `ControllerMode = bredr`, then:
1. restart bluetooth service
2. connect airpods
3. comment line out
4. restart bluetooth service again
## ZRAM
Edit /etc/systemd/zram-generator.conf
```conf
[zram0]
zram-size = min(ram / 2, 16384)
compression-algorithm = lzo-rle
options =
writeback-device = /dev/zvol/tarta-zoot/swap-writeback
```
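The `min(ram / 2, 16384)` expression caps the zram device at 16 GiB; a worked example in shell arithmetic (the 64 GiB RAM figure is hypothetical):

```shell
ram_mib=65536                          # hypothetical machine with 64 GiB of RAM
half=$(( ram_mib / 2 ))                # 32768 MiB
zram_mib=$(( half < 16384 ? half : 16384 ))
echo "$zram_mib"
# → 16384
```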
## Automatic Disk Decryption with TPM2
It's a friendlier experience to just encrypt your root partition with tpm and
unlock your remaining drives with key files stored at /etc/lukskeys. This way
you only need to reregister one key with your tpm and the remaining drives will
be unlocked automatically.
Add your luks keys to the tpm module and set up boot parameters:
```bash
# Enroll for the first time
sudo -E systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+4+7 /dev/nvme1n1p3
# Add tpm2 configuration option to /etc/crypttab
# You can get the uuid with lsblk and blkid
luks-d9828faa-2b8c-4184-9e74-9054ae328c6d UUID=d9828faa-2b8c-4184-9e74-9054ae328c6d none tpm2-device=auto,discard
# Add rd.luks.options=tpm2-device=auto to grub
sudo grubby --args="rd.luks.options=tpm2-device=auto" --update-kernel=ALL
# Regenerate the boot parameters
sudo dracut -f
```
We can configure our system to automatically reenroll the tpm device on boot so
you only need to enter your password on kernel updates.
```bash
# Create a new random password for your disk
sudo cryptsetup luksAddKey /dev/nvme0n1p3
```
/etc/systemd/system/tpm_luks_cryptenroll.service
```conf
[Unit]
Description=Automatically runs systemd-cryptenroll on login
[Service]
Type=oneshot
ExecStart=/usr/bin/systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=0+2+4+7 --wipe-slot=tpm2 /dev/nvme0n1p3
# new random password you just created
Environment=PASSWORD=
[Install]
WantedBy=multi-user.target
```
```bash
# Now enable the service
sudo systemctl enable tpm_luks_cryptenroll.service
```
## Firefox GPU Rendering
<https://community.frame.work/t/linux-battery-life-tuning/6665>
```bash
dnf install intel-media-driver intel-gpu-tools
```
Type `about:config` in the address bar and hit enter.
Set `media.rdd-ffmpeg.enabled`, `media.ffmpeg.vaapi.enabled` and
`media.navigator.mediadatadecoder_vpx_enabled` to true.
Close and reopen your browser.
Run `sudo intel_gpu_top`, play a 4k video and check whether the Video
section is above 0.00%.
## Gnome Software Updates (packagekitd and software)
To prevent Gnome Shell from starting Software open Settings->Search and disable
Software from there.
Disable auto-updates
```bash
sudo systemctl disable packagekit
sudo systemctl stop packagekit
dconf write /org/gnome/software/allow-updates false
dconf write /org/gnome/software/download-updates false
```
## Turn Off Fingerprint When Laptop Lid Closed
To disable fingerprint authentication when the laptop lid is closed, and
re-enable it when the lid is reopened, we will use acpid to bind the
button/lid.* event to a custom script that stops and masks the fprintd service
on lid close, and unmasks and starts it on lid open.
We also check that the HDMI cable is connected by testing the contents of
/sys/class/drm/card0-HDMI-A-1/status.
Follow the steps below:
1. Create a .locks file in your home dir: `mkdir ~/.locks`
2. Create file /etc/acpi/laptop-lid.sh with the following contents:
```bash
#!/bin/bash
lock=/home/ducoterra/.locks/fprint-disabled.lock
if grep -Fq closed /proc/acpi/button/lid/LID0/state # &&
# This is used to detect if a display is connected.
# For USB C displayport use:
# grep -Fxq connected /sys/class/drm/card1-DP-2/status
# For hdmi use:
# grep -Fxq connected /sys/class/drm/card0-HDMI-A-1/status
then
touch "$lock"
systemctl stop fprintd
systemctl mask fprintd
elif [ -f "$lock" ]
then
systemctl unmask fprintd
systemctl start fprintd
rm -f "$lock"
fi
```
3. Make the file executable with
`chmod +x /etc/acpi/laptop-lid.sh`
4. Create file /etc/acpi/events/laptop-lid with the following contents:
```bash
event=button/lid.*
action=/etc/acpi/laptop-lid.sh
```
5. Restart the acpid service with:
`sudo service acpid restart`
Now the fingerprint will be used only when the lid is open.
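The lock file is what makes the handler idempotent: fprintd is only unmasked if this script was the one that masked it. The guard can be exercised on its own with stub values (everything here is illustrative: `state` stands in for the lid state file, `echo` for the systemctl calls):

```shell
#!/bin/bash
# Stubbed version of the laptop-lid.sh guard: no acpid or systemctl needed
state=closed                          # stand-in for /proc/acpi/button/lid/LID0/state
lock=$(mktemp -d)/fprint-disabled.lock

if [ "$state" = "closed" ]; then
    touch "$lock"
    echo "would stop and mask fprintd"
elif [ -f "$lock" ]; then
    rm -f "$lock"
    echo "would unmask and start fprintd"
fi

# After a "closed" event the lock marks that we disabled fprintd ourselves
[ -f "$lock" ] && echo "lock present"
```

An "open" event with no lock file falls through both branches, so a lid that was never closed never touches fprintd at all.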
In order to restore the correct state of the fprintd service if you
disconnect/reconnect while the laptop is off, you may call the above script
from a systemd init file. The steps to do this are the following:
1. Create a file named /etc/systemd/system/laptop-lid.service with the
following contents:
```bash
[Unit]
Description=Laptop Lid
After=suspend.target
[Service]
ExecStart=/etc/acpi/laptop-lid.sh
[Install]
WantedBy=multi-user.target
WantedBy=suspend.target
```
2. Reload the systemd config files with
`sudo systemctl daemon-reload`
3. Start the service with
`sudo systemctl start laptop-lid.service`
4. Enable the service so that it starts automatically on boot
`sudo systemctl enable laptop-lid.service`
Now the status should be correct even after connecting/disconnecting when the
computer is off.
## Power Button Behavior
The power button is controlled from 2 locations:
1. DCONF (or gnome settings) at `org.gnome.settings-daemon.plugins.power`
2. ACPI at /etc/acpi/events/powerconf
The powerconf acpi configuration will execute at the same time the gnome settings do.
This can lead to situations where the gnome settings say "suspend" but the acpi settings
say "shutdown": on waking, your laptop will immediately shut down.
The solution is to comment out everything in /etc/acpi/events/powerconf and rely on the
gnome settings **OR** set the gnome settings to "nothing" and edit
`/etc/acpi/actions/power.sh` with the behavior you expect. Either way you should pick
one to control power button behavior.
## Discord
vim ~/.local/share/applications/Discord.desktop
```conf
[Desktop Entry]
Encoding=UTF-8
Name=Discord
Exec=/home/ducoterra/Applications/Discord/Discord
Icon=/home/ducoterra/Applications/Discord/discord.png
Type=Application
Categories=Communication;
```
### Discord sharing not working
This is probably a per-app problem: most likely the app you're trying to share
doesn't support screen sharing, it's not wayland's fault.
If you're trying to share firefox, download the firefox binary and `dnf remove`
the preinstalled one. For whatever reason the preinstalled binary doesn't
like screen share. You can use the following:
~/.local/share/applications/firefox.desktop
```conf
[Desktop Entry]
Encoding=UTF-8
Name=Firefox
Exec=/home/ducoterra/Applications/firefox/firefox-bin
Icon=/home/ducoterra/.icons/firefox.svg
Type=Application
Categories=Browser;
```
## Minecraft
1. You can find extra java versions at /etc/alternatives
2. You need to `dnf install xrandr` to launch any modpacks
3. You can create a desktop icon by putting this at
~/.local/share/applications/*.desktop:
```conf
[Desktop Entry]
Type=Application
Version=1.0
Name=Minecraft
Comment=Minecraft Launcher
Path=/home/ducoterra/Applications
Exec=minecraft-launcher
Icon=/home/ducoterra/Icons/minecraft-launcher.png
Terminal=false
Categories=Games;
```
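If an entry doesn't show up in the launcher, it usually fails to parse. `desktop-file-validate` from desktop-file-utils is the proper checker; as a quick sanity check you can at least confirm the keys the spec requires are present (the throwaway file below is illustrative):

```shell
# Write a minimal entry to a temp file and check the required keys
f=$(mktemp --suffix=.desktop)
cat > "$f" <<'EOF'
[Desktop Entry]
Type=Application
Name=Minecraft
Exec=minecraft-launcher
EOF

for key in Type Name Exec; do
    grep -q "^$key=" "$f" && echo "$key ok"
done
```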
## Sound Devices
If you want to disable a specific device or tell Fedora not to use a specific device
as output or input (looking at you, yeti microphone, you're not a speaker), you can
install PulseAudio Volume Control for much more fine-tuned... control.
Setting your speakers to analog output seems to work best for a USB dac if it has
a separate volume knob since this ties the volume knob on the dac to the internal
volume of your computer.
Setting your mic to analog input works just fine on a yeti usb mic.
```bash
sudo dnf install pavucontrol
```
## Install ffmpegthumbnailer, remove totem
totem-thumbnailer crashes all the time and isn't as good as ffmpeg's thumbnailer.
What's more, totem video player ("Videos" by default on gnome) is not as good as vlc
and doesn't work very well for anything more than basic video playback.
```bash
sudo dnf remove totem
sudo dnf install ffmpegthumbnailer
```
## Add compatibility for HEIC to mogrify
```bash
sudo dnf install libheif-freeworld
```

View File

@@ -1,6 +1,7 @@
 # Truenas
 - [Truenas](#truenas)
+- [DEPRECATION NOTICE](#deprecation-notice)
 - [Bios settings](#bios-settings)
 - [Datasets, Snapshots, and Encryption](#datasets-snapshots-and-encryption)
 - [Periodic Snapshot Recommendations](#periodic-snapshot-recommendations)
@@ -51,6 +52,19 @@
 - [But First, ZFS on RPi](#but-first-zfs-on-rpi)
 - [Pi Setup](#pi-setup)
+## DEPRECATION NOTICE
+I no longer use Truenas for the following reasons
+1. Upgrades breaking VMs
+2. Upgrades breaking app compatibility
+3. Opaque process for backing up and restoring the "ix-*" directories
+4. Opaque process for snapshotting app data
+5. Difficulty decrypting nested datasets with multiple keys
+6. Truenas not exporting the correct JSON key format to decrypt nested datasets
+7. Lack of standard support for libvirt
+8. Incredibly slow loading of snapshots in the UI
 ## Bios settings
 You can check the bios version with `dmidecode -t bios -q`
@@ -261,7 +275,7 @@ your keys will be correct after this process.
 # List all datasets and format them for json keys
 export LIST_DATASET=pool0/dcsi
 echo "{" && \
-for DATASET_PATH in $(sudo zfs list -r $LIST_DATASET -H -o name); do echo " \"$DATASET_PATH\": \"key_here\","; done && \
+for DATASET_PATH in $(zfs list -r $LIST_DATASET -H -o name); do echo " \"$DATASET_PATH\": \"key_here\","; done && \
 echo "}"
 # If the dataset's children have all the encryption keys
@@ -460,7 +474,7 @@ dd if=/tmp/test of=/dev/null bs=1024k
 ```bash
 # HDD
-smartctl -a /dev/ada1 | grep "SMART Attributes" -A 18
+smartctl -a /dev/sda1 | grep "SMART Attributes" -A 18
 # NVME
 smartctl -a /dev/nvme1 | grep "SMART/Health Information" -A 17

View File

@@ -0,0 +1,78 @@
# Ubuntu Server 25.10
- [Ubuntu Server 25.10](#ubuntu-server-2510)
- [Install](#install)
- [Automatic Updates](#automatic-updates)
- [Disable Swap](#disable-swap)
- [Base Software](#base-software)
- [Podman](#podman)
- [Troubleshooting](#troubleshooting)
- [nmcli device unmanaged](#nmcli-device-unmanaged)
Note these instructions differentiate between an `operator` and a `server`. The operator can be
any machine that configures the server: a pipeline, laptop, dedicated server, etc. are all options.
The server can be its own operator, though that's not recommended since servers should be ephemeral
and the operator will store information about each server.
## Install
1. Walk through the normal installation up to disks
2. Select custom
3. Select /dev/vda and select use as boot
4. Select "free space" and create a new gpt ext4 "/boot" partition
5. Select "free space" and create a new gpt btrfs "/" partition
6. Continue
## Automatic Updates
On the server:
```bash
apt install -y unattended-upgrades
systemctl enable --now unattended-upgrades.service
```
## Disable Swap
```bash
swapoff -a
```
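`swapoff -a` only lasts until the next boot. To keep swap off you also need to drop the swap entry Ubuntu writes to /etc/fstab (a `/swap.img` line by default; the fstab contents below are made up for the demo, run the same sed against the real file):

```shell
# Comment out any fstab swap entries so they don't come back on reboot
fstab=$(mktemp)
printf '%s\n' \
    'UUID=abcd1234 / btrfs defaults 0 1' \
    '/swap.img none swap sw 0 0' > "$fstab"

# Match lines containing a whitespace-delimited "swap" field, prefix with '#'
sed -i '/\sswap\s/s/^/#/' "$fstab"
cat "$fstab"
```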
## Base Software
On the server:
```bash
# Install btop for system monitoring
apt install -y btop net-tools vim tmux cockpit python3-venv python3-pip cifs-utils
# Cockpit
systemctl enable --now cockpit
```
## Podman
```bash
apt install -y podman
```
## Troubleshooting
### nmcli device unmanaged
Ubuntu installs a config file that sets most devices unmanaged:
/usr/lib/NetworkManager/conf.d/10-globally-managed-devices.conf:
[keyfile]
unmanaged-devices=*,except:type:wifi,except:type:gsm,except:type:cdma
To disable this, you can create a blank file with the same name in /etc:
sudo touch /etc/NetworkManager/conf.d/10-globally-managed-devices.conf
sudo systemctl restart NetworkManager
Then `nmcli device set <device-name> managed yes`
Then turn on "Connect Automatically" in Cockpit.

View File

@@ -1,16 +0,0 @@
services:
  bricktracker:
    container_name: BrickTracker
    restart: unless-stopped
    image: gitea.baerentsen.space/frederikbaerentsen/bricktracker:1.2.2
    ports:
      - "3333:3333"
    volumes:
      - /home/connorbricktracker/data:/var/lib/bricktracker
      - /home/connorbricktracker/static/instructions:/app/static/instructions
      - /home/connorbricktracker/static/minifigures:/app/static/minifigures
      - /home/connorbricktracker/static/parts:/app/static/parts
      - /home/connorbricktracker/static/sets:/app/static/sets
    env_file: "/home/connorbricktracker/.env"
    security_opt:
      - label=disable

View File

@@ -0,0 +1,16 @@
services:
  bricktracker:
    container_name: BrickTracker
    restart: unless-stopped
    image: gitea.baerentsen.space/frederikbaerentsen/bricktracker:1.2.2
    ports:
      - "3333:3333"
    volumes:
      - /srv/bricktracker/connorbricktracker/data:/var/lib/bricktracker
      - /srv/bricktracker/connorbricktracker/static/instructions:/app/static/instructions
      - /srv/bricktracker/connorbricktracker/static/minifigures:/app/static/minifigures
      - /srv/bricktracker/connorbricktracker/static/parts:/app/static/parts
      - /srv/bricktracker/connorbricktracker/static/sets:/app/static/sets
    env_file: "/srv/bricktracker/connorbricktracker/.env"
    security_opt:
      - label=disable

View File

@@ -37,15 +37,37 @@ https://something-else.reeseapps.com:443 {
 }
 ```
+Make sure to add [your route53 configuration](https://github.com/caddy-dns/route53?tab=readme-ov-file#configuration)
+```conf
+tls {
+  dns route53 {
+    access_key_id "..."
+    secret_access_key "..."
+    region "us-east-1"
+    wait_for_route53_sync true
+    skip_route53_sync_on_delete true
+    route53_max_wait 2m
+    max_retries 5
+  }
+}
+```
 The playbook limits the installer to `hosts: caddy` so make sure you have a caddy
 host in your inventory.
 Now you can install the Caddy service with something like:
 ```bash
+# Base Proxy
 ansible-playbook \
 -i ansible/inventory.yaml \
-active/podman_caddy/install_caddy.yaml
+active/podman_caddy/install_caddy_proxy.yaml
+# Deskwork (AI) Proxy
+ansible-playbook \
+-i ansible/inventory.yaml \
+active/podman_caddy/install_caddy_deskwork.yaml
 ```
 See ansible playbook [install_caddy.yaml](/active/podman_caddy/install_caddy.yaml)

View File

@@ -1,5 +1,5 @@
-- name: Create DDNS Service
-  hosts: caddy
+- name: Create Caddy Proxy
+  hosts: deskwork-root
   tasks:
     - name: Create /etc/caddy dir
       ansible.builtin.file:
@@ -8,7 +8,7 @@
         mode: '0755'
     - name: Copy Caddyfile
       template:
-        src: secrets/Caddyfile
+        src: secrets/deskwork.Caddyfile
         dest: /etc/caddy/Caddyfile
         owner: root
         group: root

View File

@@ -0,0 +1,38 @@
- name: Create Caddy Proxy
  hosts: caddy
  tasks:
    - name: Copy Containerfile for build
      template:
        src: Containerfile
        dest: /etc/caddy/Containerfile
        owner: root
        group: root
        mode: '0644'
    - name: Build Caddy Image
      shell:
        cmd: podman build -t gitea.reeseapps.com/services/caddy:latest -f /etc/caddy/Containerfile
    - name: Create /etc/caddy dir
      ansible.builtin.file:
        path: /etc/caddy
        state: directory
        mode: '0755'
    - name: Copy Caddyfile
      template:
        src: secrets/proxy.Caddyfile
        dest: /etc/caddy/Caddyfile
        owner: root
        group: root
        mode: '0644'
    - name: Template Caddy Container Services
      template:
        src: caddy.container
        dest: /etc/containers/systemd/caddy.container
        owner: root
        group: root
        mode: '0644'
    - name: Reload and start the Caddy service
      ansible.builtin.systemd_service:
        state: restarted
        name: caddy.service
        enabled: true
        daemon_reload: true

View File

@@ -0,0 +1,173 @@
# Podman certbot
- [Podman certbot](#podman-certbot)
- [Setup certbot Project](#setup-certbot-project)
- [Install certbot](#install-certbot)
- [Create the certbot user](#create-the-certbot-user)
- [Write the certbot compose spec](#write-the-certbot-compose-spec)
- [A Note on Volumes](#a-note-on-volumes)
- [Convert certbot compose spec to quadlets](#convert-certbot-compose-spec-to-quadlets)
- [Create any container-mounted directories](#create-any-container-mounted-directories)
- [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
- [Expose certbot](#expose-certbot)
- [firewalld](#firewalld)
- [Backup certbot](#backup-certbot)
- [Upgrade certbot](#upgrade-certbot)
- [Upgrade Quadlets](#upgrade-quadlets)
- [Uninstall](#uninstall)
- [Notes](#notes)
- [SELinux](#selinux)
## Setup certbot Project
- [ ] Copy and rename this folder to active/podman_certbot
- [ ] Find and replace certbot with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer
## Install certbot
### Create the certbot user
```bash
# SSH into your podman server as root
useradd certbot
loginctl enable-linger $(id -u certbot)
systemctl --user --machine=certbot@.host enable podman-restart
systemctl --user --machine=certbot@.host enable --now podman.socket
mkdir -p /home/certbot/.config/containers/systemd
```
### Write the certbot compose spec
```bash
podman run -it --rm --name certbot \
-v "/etc/letsencrypt:/etc/letsencrypt:Z" \
-v "/var/lib/letsencrypt:/var/lib/letsencrypt:Z" \
certbot/certbot certonly -d keycloak.reeseapps.com -d keycloak.reeselink.com
```
#### A Note on Volumes
Named volumes are stored at `/home/certbot/.local/share/containers/storage/volumes/`.
### Convert certbot compose spec to quadlets
Run the following to convert a compose.yaml into the various `.container` files for systemd:
```bash
# Generate the systemd service
podman run \
--security-opt label=disable \
--rm \
-v $(pwd)/active/podman_certbot/:/compose \
-v $(pwd)/active/podman_certbot/quadlets:/quadlets \
quay.io/k9withabone/podlet \
-f /quadlets \
-i \
--overwrite \
compose /compose/compose.yaml
# Copy the files to the server
export PODMAN_SERVER=
scp -r active/podman_certbot/quadlets/. $PODMAN_SERVER:/home/certbot/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R certbot:certbot /home/certbot/.config/containers/systemd/
```
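For reference, the quadlet podlet emits for a one-container compose service is a small unit file along these lines (keys from podman-systemd.unit; the image and volume paths here are placeholders, not output from this repo's compose.yaml):

```conf
[Container]
ContainerName=certbot
Image=docker.io/certbot/certbot:latest
Volume=/etc/letsencrypt:/etc/letsencrypt:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```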
### Create any container-mounted directories
SSH into your podman server as root:
```bash
machinectl shell certbot@
podman unshare
mkdir some_volume
# Chown to the namespaced user with UID 1000
# This will be some really obscure UID outside the namespace
# This will also solve most permission denied errors
chown -R 1000:1000 some_volume
```
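That "really obscure UID" is predictable: rootless podman maps container UID 0 to the user itself and container UIDs 1..N onto the subordinate range from /etc/subuid, so container UID 1000 lands at `start + 1000 - 1`. The start value below is an example; check `grep certbot /etc/subuid` on your server for the real one:

```shell
# Predict the host UID that owns files chowned to 1000 inside the namespace
subuid_start=524288    # example start of the certbot entry in /etc/subuid
container_uid=1000
host_uid=$((subuid_start + container_uid - 1))
echo "container UID $container_uid maps to host UID $host_uid"
```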
### Start and enable your systemd quadlet
SSH into your podman server as root:
```bash
machinectl shell certbot@
systemctl --user daemon-reload
systemctl --user restart certbot
# Enable auto-update service which will pull new container images automatically every day
systemctl --user enable --now podman-auto-update.timer
```
### Expose certbot
1. If you need a domain, follow the [DDNS instructions](/active/podman_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/podman_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.
#### firewalld
```bash
# command to get current active zone and default zone
firewall-cmd --get-active-zones
firewall-cmd --get-default-zone
# command to open 443 on tcp
firewall-cmd --permanent --zone=<zone> --add-port=443/tcp
# command to open 80 and 443 on tcp and udp
firewall-cmd --permanent --zone=<zone> --add-port={80,443}/{tcp,udp}
# command to list available services and then open http and https
firewall-cmd --get-services
firewall-cmd --permanent --zone=<zone> --add-service={http,https}
# --permanent rules only apply to the running config after a reload
firewall-cmd --reload
```
## Backup certbot
Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
## Upgrade certbot
### Upgrade Quadlets
Upgrades should be a repeat of [writing the compose spec](#convert-certbot-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)
```bash
export PODMAN_SERVER=
scp -r quadlets/. $PODMAN_SERVER:/home/certbot/.config/containers/systemd/
ssh certbot systemctl --user daemon-reload
ssh certbot systemctl --user restart certbot
```
## Uninstall
```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer
# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
# userdel: user certbot is currently used by process 591255
# kill those processes and try again
userdel certbot
```
## Notes
### SELinux
<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>
:z allows a container to share a mounted volume with all other containers.
:Z reserves a mounted volume for a single container and prevents any other container from accessing it.

View File

@@ -0,0 +1,3 @@
# Quadlets
Put your quadlets here.

View File

@@ -0,0 +1,10 @@
[Unit]
Description=Runs certbot renew
After=syslog.target network.target auditd.service
Wants=network-online.target
[Service]
ExecStart=/usr/bin/command -with -arguments
[Install]
WantedBy=multi-user.target

View File

@@ -0,0 +1,11 @@
[Unit]
Description=Daily certbot certificate renewal
[Timer]
OnCalendar=daily
AccuracySec=12h
Persistent=true
Unit=certbot.service
[Install]
WantedBy=timers.target

View File

@@ -15,6 +15,9 @@ ENV PATH="/root/.local/bin/:$PATH"
 # Copy the project into the image
 COPY update.py uv.lock pyproject.toml /app/
+# Copy the records file
+COPY records.yaml /etc/ddns/records.yaml
 # Sync the project into a new environment, using the frozen lockfile
 WORKDIR /app
 RUN uv sync --frozen

View File

@@ -4,15 +4,14 @@ After=network-online.target
 Wants=network-online.target
 [Container]
-Environment=ROUTE53_RECORD={{ item.record }}
-Environment=HOSTED_ZONE_ID={{ item.hosted_zone_id }}
+Environment=ROUTE53_RECORDS_FILE=/etc/ddns/records.yaml
 Environment=AWS_ACCESS_KEY_ID={{ aws.access_key_id }}
 Environment=AWS_SECRET_ACCESS_KEY={{ aws.secret_access_key }}
 {% if item.skip_ipv6 | default(false) %}
-Environment=SKIP_IPV6=true
+Environment=GLOBAL_SKIP_IPV6=true
 {% endif %}
 {% if item.skip_ipv4 | default(false) %}
-Environment=SKIP_IPV4=true
+Environment=GLOBAL_SKIP_IPV4=true
 {% endif %}
 Image=gitea.reeseapps.com/services/ddns:latest
 Network=ddns.network

View File

@@ -3,9 +3,7 @@
 - [DDNS for Route53](#ddns-for-route53)
 - [Quickly Update DDNS Records](#quickly-update-ddns-records)
 - [Install a New DDNS Service](#install-a-new-ddns-service)
-- [Ansible 3D Server Records](#ansible-3d-server-records)
-- [Ansible Unifi External Records](#ansible-unifi-external-records)
-- [Ansible Hostname reeselink records](#ansible-hostname-reeselink-records)
+- [Ansible Caddy Records](#ansible-caddy-records)
 - [Development](#development)
 - [Testing](#testing)
 - [Building Container Image](#building-container-image)
@@ -59,13 +57,9 @@ Now you can install the DDNS service with something like:
 ```bash
 ansible-playbook \
-# specify your inventory
 -i ansible/inventory.yaml \
-# -l limits to a particular host
--l 3dserver \
+-l proxy \
-active/podman_ddns/install_ddns.yaml \
+active/podman_ddns/install_ddns.yaml
-# -e brings in our secrets/records.yaml
--e "@active/podman_ddns/secrets/records.yaml"
 ```
 See ansible playbook [install_ddns.yaml](/install_ddns.yaml)
@@ -75,34 +69,14 @@ multiple servers. If you have a podman server, it'll have its own
 `podman-records.yaml`. If you have a docker server, it'll have its own
 `docker-records.yaml`. Etc. etc.
-### Ansible 3D Server Records
+### Ansible Caddy Records
 ```bash
 ansible-playbook \
 -i ansible/inventory.yaml \
--l 3dserver \
+-l caddy \
 active/podman_ddns/install_ddns.yaml \
--e "@active/podman_ddns/secrets/3dserver_records.yaml"
+-e "@active/podman_ddns/secrets/records.yaml"
 ```
-### Ansible Unifi External Records
-```bash
-ansible-playbook \
--i ansible/inventory.yaml \
--l unifi-external \
-active/podman_ddns/install_ddns.yaml \
--e "@active/podman_ddns/secrets/unifi_external_record.yaml"
-```
-### Ansible Hostname reeselink records
-```bash
-export PLAYBOOK_PATH=active/podman_ddns
-ansible-playbook \
--i ansible/inventory.yaml \
-${PLAYBOOK_PATH}/install_ddns.yaml \
--e "@${PLAYBOOK_PATH}/secrets/hostname_reeselink_record.yaml"
-```
 ## Development

View File

@@ -1,11 +1,11 @@
 [Unit]
-Description=Run ddns.{{ item.record }}.service every hour
+Description=Run ddns.service every hour
 [Timer]
 OnCalendar=hourly
 AccuracySec=10min
 Persistent=true
-Unit=ddns.{{ item.record }}.service
+Unit=ddns.service
 [Install]
 WantedBy=timers.target

View File

@@ -3,11 +3,29 @@
   vars_files:
     - secrets/vars.yaml
   tasks:
-    - name: Create /etc/ddns dir
+    - name: Create container build dir
       ansible.builtin.file:
-        path: /etc/ddns
+        path: /tmp/ddns
         state: directory
         mode: '0755'
+    - name: Copy container build files
+      copy:
+        src: "{{ item }}"
+        dest: /tmp/ddns/
+      with_items:
+        - uv.lock
+        - pyproject.toml
+        - update.py
+        - Containerfile
+        - secrets/records.yaml
+    - name: Run container build
+      shell:
+        cmd: podman build -t gitea.reeseapps.com/services/ddns:latest -f ./Containerfile
+        chdir: /tmp/ddns/
+    - name: Remove container build dir
+      ansible.builtin.file:
+        path: /tmp/ddns
+        state: absent
     - name: Copy ddns.network
       template:
         src: ddns.network
@@ -15,31 +33,27 @@
         owner: root
         group: root
         mode: '0644'
-    - name: Template DDNS Container Services
+    - name: Template DDNS Container Service
       template:
         src: ddns.container
-        dest: /etc/containers/systemd/ddns.{{ item.record }}.container
+        dest: /etc/containers/systemd/ddns.container
         owner: root
         group: root
         mode: '0644'
-      loop: "{{ records }}"
-    - name: Template DDNS Container Timers
+    - name: Template DDNS Container Timer
       template:
         src: ddns.timer
-        dest: /etc/systemd/system/ddns.{{ item.record }}.timer
+        dest: /etc/systemd/system/ddns.timer
         owner: root
         group: root
         mode: '0644'
-      loop: "{{ records }}"
     - name: Reload ddns timer
       ansible.builtin.systemd_service:
         state: restarted
-        name: ddns.{{ item.record }}.timer
+        name: ddns.timer
         enabled: true
         daemon_reload: true
-      loop: "{{ records }}"
     - name: Run ddns service
       ansible.builtin.systemd_service:
         state: restarted
-        name: ddns.{{ item.record }}.service
+        name: ddns.service
-      loop: "{{ records }}"

View File

@@ -8,4 +8,6 @@ dependencies = [
"boto3>=1.37.30", "boto3>=1.37.30",
"boto3-stubs[all]>=1.38.23", "boto3-stubs[all]>=1.38.23",
"pytest>=8.3.5", "pytest>=8.3.5",
"pyyaml>=6.0.3",
"types-pyyaml>=6.0.12.20250915",
] ]

View File

@@ -6,10 +6,17 @@ export ROUTE53_RECORD=something.mydomain.com
 import logging
 import os
 import subprocess
-from typing import TYPE_CHECKING
+import yaml
+import sys
+from typing import TYPE_CHECKING, TypedDict
 import boto3
+try:
+    from yaml import CLoader as Loader
+except ImportError:
+    from yaml import Loader  # type: ignore
 if TYPE_CHECKING:
     from mypy_boto3_route53 import Route53Client
@@ -21,10 +28,20 @@ logging.basicConfig(
 logger = logging.getLogger(__name__)
 logger.setLevel(logging.INFO)
-HOSTED_ZONE_ID = os.getenv("HOSTED_ZONE_ID")
-ROUTE53_RECORD = os.getenv("ROUTE53_RECORD")
-SKIP_IPV4 = os.getenv("SKIP_IPV4", "false").lower() == "true"
-SKIP_IPV6 = os.getenv("SKIP_IPV6", "false").lower() == "true"
+ROUTE53_RECORDS_FILE = os.getenv("ROUTE53_RECORDS_FILE")
+GLOBAL_SKIP_IPV4 = os.getenv("GLOBAL_SKIP_IPV4", "false").lower() == "true"
+GLOBAL_SKIP_IPV6 = os.getenv("GLOBAL_SKIP_IPV6", "false").lower() == "true"
+class RecordType(TypedDict):
+    record: str
+    hosted_zone_id: str
+    skip_ipv4: bool | None
+    skip_ipv6: bool | None
+class RecordYamlStruct(TypedDict):
+    records: list[RecordType]
 def get_ipv4() -> str:
@@ -92,18 +109,20 @@ def update_ipv6(hosted_zone_id: str, record: str, public_ipv6: str):
 def main():
-    if not HOSTED_ZONE_ID:
-        logger.error("HOSTED_ZONE_ID env var not found!")
-        exit(1)
-    if not ROUTE53_RECORD:
-        logger.error("ROUTE53_RECORD env var not found!")
-        exit(1)
-    logger.info(f"Attempting to update {ROUTE53_RECORD} from {HOSTED_ZONE_ID}.")
-    if SKIP_IPV4:
-        logger.warning("Skipping IPv4.")
+    if not ROUTE53_RECORDS_FILE:
+        logger.error("ROUTE53_RECORDS_FILE env var not found!")
+        exit(1)
+    try:
+        with open(ROUTE53_RECORDS_FILE) as f:
+            records_file_contents: RecordYamlStruct = yaml.load(f, Loader)
+    except FileNotFoundError as e:
+        logger.error(e)
+        sys.exit(1)
+    if GLOBAL_SKIP_IPV4:
+        public_ipv4 = None
+        logger.warning("Globally skipping IPv4.")
     else:
         logger.info("Getting IPv4 address from ifconfig.me")
         public_ipv4 = get_ipv4()
@@ -111,14 +130,10 @@ def main():
             logger.error("Public IPv4 not found.")
             exit(1)
         logger.info(f"Public IPv4 is {public_ipv4}")
-    update_ipv4(
-        hosted_zone_id=HOSTED_ZONE_ID,
-        record=ROUTE53_RECORD,
-        public_ipv4=public_ipv4,
-    )
-    if SKIP_IPV6:
-        logger.warning("Skipping IPv6")
+    if GLOBAL_SKIP_IPV6:
+        public_ipv6 = None
+        logger.warning("Globally Skipping IPv6")
     else:
         logger.info("Getting IPv6 address from ifconfig.me")
         public_ipv6 = get_ipv6()
@@ -126,11 +141,32 @@
             logger.error("Public IPv6 not found.")
             exit(1)
         logger.info(f"Public IPv6 is {public_ipv6}")
-    update_ipv6(
-        hosted_zone_id=HOSTED_ZONE_ID,
-        record=ROUTE53_RECORD,
-        public_ipv6=public_ipv6,
-    )
+    for record in records_file_contents["records"]:
+        logger.info(f"Attempting to update {record['record']} from {record['hosted_zone_id']}.")
+        if record.get("skip_ipv4"):
+            logger.info(f"{record['record']} requested to skip IPv4")
+        elif GLOBAL_SKIP_IPV4 or not public_ipv4:
+            logger.info("Globally skipping IPv4")
+        else:
+            update_ipv4(
+                hosted_zone_id=record["hosted_zone_id"],
+                record=record["record"],
+                public_ipv4=public_ipv4,
+            )
+        if record.get("skip_ipv6"):
+            logger.info(f"{record['record']} requested to skip IPv6")
+        elif GLOBAL_SKIP_IPV6 or not public_ipv6:
+            logger.info("Globally skipping IPv6")
+        else:
+            update_ipv6(
+                hosted_zone_id=record["hosted_zone_id"],
+                record=record["record"],
+                public_ipv6=public_ipv6,
+            )
 if __name__ == "__main__":

View File

@@ -480,6 +480,8 @@ dependencies = [
{ name = "boto3" }, { name = "boto3" },
{ name = "boto3-stubs", extra = ["all"] }, { name = "boto3-stubs", extra = ["all"] },
{ name = "pytest" }, { name = "pytest" },
{ name = "pyyaml" },
{ name = "types-pyyaml" },
] ]
[package.metadata] [package.metadata]
@@ -487,6 +489,8 @@ requires-dist = [
{ name = "boto3", specifier = ">=1.37.30" }, { name = "boto3", specifier = ">=1.37.30" },
{ name = "boto3-stubs", extras = ["all"], specifier = ">=1.38.23" }, { name = "boto3-stubs", extras = ["all"], specifier = ">=1.38.23" },
{ name = "pytest", specifier = ">=8.3.5" }, { name = "pytest", specifier = ">=8.3.5" },
{ name = "pyyaml", specifier = ">=6.0.3" },
{ name = "types-pyyaml", specifier = ">=6.0.12.20250915" },
] ]
[[package]] [[package]]
@@ -5399,6 +5403,70 @@ wheels = [
 { url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892, upload-time = "2024-03-01T18:36:18.57Z" },
 ]
[[package]]
name = "pyyaml"
version = "6.0.3"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/05/8e/961c0007c59b8dd7729d542c61a4d537767a59645b82a0b521206e1e25c2/pyyaml-6.0.3.tar.gz", hash = "sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f", size = 130960, upload-time = "2025-09-25T21:33:16.546Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/a0/39350dd17dd6d6c6507025c0e53aef67a9293a6d37d3511f23ea510d5800/pyyaml-6.0.3-cp310-cp310-macosx_10_13_x86_64.whl", hash = "sha256:214ed4befebe12df36bcc8bc2b64b396ca31be9304b8f59e25c11cf94a4c033b", size = 184227, upload-time = "2025-09-25T21:31:46.04Z" },
{ url = "https://files.pythonhosted.org/packages/05/14/52d505b5c59ce73244f59c7a50ecf47093ce4765f116cdb98286a71eeca2/pyyaml-6.0.3-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:02ea2dfa234451bbb8772601d7b8e426c2bfa197136796224e50e35a78777956", size = 174019, upload-time = "2025-09-25T21:31:47.706Z" },
{ url = "https://files.pythonhosted.org/packages/43/f7/0e6a5ae5599c838c696adb4e6330a59f463265bfa1e116cfd1fbb0abaaae/pyyaml-6.0.3-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:b30236e45cf30d2b8e7b3e85881719e98507abed1011bf463a8fa23e9c3e98a8", size = 740646, upload-time = "2025-09-25T21:31:49.21Z" },
{ url = "https://files.pythonhosted.org/packages/2f/3a/61b9db1d28f00f8fd0ae760459a5c4bf1b941baf714e207b6eb0657d2578/pyyaml-6.0.3-cp310-cp310-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:66291b10affd76d76f54fad28e22e51719ef9ba22b29e1d7d03d6777a9174198", size = 840793, upload-time = "2025-09-25T21:31:50.735Z" },
{ url = "https://files.pythonhosted.org/packages/7a/1e/7acc4f0e74c4b3d9531e24739e0ab832a5edf40e64fbae1a9c01941cabd7/pyyaml-6.0.3-cp310-cp310-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:9c7708761fccb9397fe64bbc0395abcae8c4bf7b0eac081e12b809bf47700d0b", size = 770293, upload-time = "2025-09-25T21:31:51.828Z" },
{ url = "https://files.pythonhosted.org/packages/8b/ef/abd085f06853af0cd59fa5f913d61a8eab65d7639ff2a658d18a25d6a89d/pyyaml-6.0.3-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:418cf3f2111bc80e0933b2cd8cd04f286338bb88bdc7bc8e6dd775ebde60b5e0", size = 732872, upload-time = "2025-09-25T21:31:53.282Z" },
{ url = "https://files.pythonhosted.org/packages/1f/15/2bc9c8faf6450a8b3c9fc5448ed869c599c0a74ba2669772b1f3a0040180/pyyaml-6.0.3-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:5e0b74767e5f8c593e8c9b5912019159ed0533c70051e9cce3e8b6aa699fcd69", size = 758828, upload-time = "2025-09-25T21:31:54.807Z" },
{ url = "https://files.pythonhosted.org/packages/a3/00/531e92e88c00f4333ce359e50c19b8d1de9fe8d581b1534e35ccfbc5f393/pyyaml-6.0.3-cp310-cp310-win32.whl", hash = "sha256:28c8d926f98f432f88adc23edf2e6d4921ac26fb084b028c733d01868d19007e", size = 142415, upload-time = "2025-09-25T21:31:55.885Z" },
{ url = "https://files.pythonhosted.org/packages/2a/fa/926c003379b19fca39dd4634818b00dec6c62d87faf628d1394e137354d4/pyyaml-6.0.3-cp310-cp310-win_amd64.whl", hash = "sha256:bdb2c67c6c1390b63c6ff89f210c8fd09d9a1217a465701eac7316313c915e4c", size = 158561, upload-time = "2025-09-25T21:31:57.406Z" },
{ url = "https://files.pythonhosted.org/packages/6d/16/a95b6757765b7b031c9374925bb718d55e0a9ba8a1b6a12d25962ea44347/pyyaml-6.0.3-cp311-cp311-macosx_10_13_x86_64.whl", hash = "sha256:44edc647873928551a01e7a563d7452ccdebee747728c1080d881d68af7b997e", size = 185826, upload-time = "2025-09-25T21:31:58.655Z" },
{ url = "https://files.pythonhosted.org/packages/16/19/13de8e4377ed53079ee996e1ab0a9c33ec2faf808a4647b7b4c0d46dd239/pyyaml-6.0.3-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:652cb6edd41e718550aad172851962662ff2681490a8a711af6a4d288dd96824", size = 175577, upload-time = "2025-09-25T21:32:00.088Z" },
{ url = "https://files.pythonhosted.org/packages/0c/62/d2eb46264d4b157dae1275b573017abec435397aa59cbcdab6fc978a8af4/pyyaml-6.0.3-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:10892704fc220243f5305762e276552a0395f7beb4dbf9b14ec8fd43b57f126c", size = 775556, upload-time = "2025-09-25T21:32:01.31Z" },
{ url = "https://files.pythonhosted.org/packages/10/cb/16c3f2cf3266edd25aaa00d6c4350381c8b012ed6f5276675b9eba8d9ff4/pyyaml-6.0.3-cp311-cp311-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:850774a7879607d3a6f50d36d04f00ee69e7fc816450e5f7e58d7f17f1ae5c00", size = 882114, upload-time = "2025-09-25T21:32:03.376Z" },
{ url = "https://files.pythonhosted.org/packages/71/60/917329f640924b18ff085ab889a11c763e0b573da888e8404ff486657602/pyyaml-6.0.3-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:b8bb0864c5a28024fac8a632c443c87c5aa6f215c0b126c449ae1a150412f31d", size = 806638, upload-time = "2025-09-25T21:32:04.553Z" },
{ url = "https://files.pythonhosted.org/packages/dd/6f/529b0f316a9fd167281a6c3826b5583e6192dba792dd55e3203d3f8e655a/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:1d37d57ad971609cf3c53ba6a7e365e40660e3be0e5175fa9f2365a379d6095a", size = 767463, upload-time = "2025-09-25T21:32:06.152Z" },
{ url = "https://files.pythonhosted.org/packages/f2/6a/b627b4e0c1dd03718543519ffb2f1deea4a1e6d42fbab8021936a4d22589/pyyaml-6.0.3-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:37503bfbfc9d2c40b344d06b2199cf0e96e97957ab1c1b546fd4f87e53e5d3e4", size = 794986, upload-time = "2025-09-25T21:32:07.367Z" },
{ url = "https://files.pythonhosted.org/packages/45/91/47a6e1c42d9ee337c4839208f30d9f09caa9f720ec7582917b264defc875/pyyaml-6.0.3-cp311-cp311-win32.whl", hash = "sha256:8098f252adfa6c80ab48096053f512f2321f0b998f98150cea9bd23d83e1467b", size = 142543, upload-time = "2025-09-25T21:32:08.95Z" },
{ url = "https://files.pythonhosted.org/packages/da/e3/ea007450a105ae919a72393cb06f122f288ef60bba2dc64b26e2646fa315/pyyaml-6.0.3-cp311-cp311-win_amd64.whl", hash = "sha256:9f3bfb4965eb874431221a3ff3fdcddc7e74e3b07799e0e84ca4a0f867d449bf", size = 158763, upload-time = "2025-09-25T21:32:09.96Z" },
{ url = "https://files.pythonhosted.org/packages/d1/33/422b98d2195232ca1826284a76852ad5a86fe23e31b009c9886b2d0fb8b2/pyyaml-6.0.3-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:7f047e29dcae44602496db43be01ad42fc6f1cc0d8cd6c83d342306c32270196", size = 182063, upload-time = "2025-09-25T21:32:11.445Z" },
{ url = "https://files.pythonhosted.org/packages/89/a0/6cf41a19a1f2f3feab0e9c0b74134aa2ce6849093d5517a0c550fe37a648/pyyaml-6.0.3-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:fc09d0aa354569bc501d4e787133afc08552722d3ab34836a80547331bb5d4a0", size = 173973, upload-time = "2025-09-25T21:32:12.492Z" },
{ url = "https://files.pythonhosted.org/packages/ed/23/7a778b6bd0b9a8039df8b1b1d80e2e2ad78aa04171592c8a5c43a56a6af4/pyyaml-6.0.3-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:9149cad251584d5fb4981be1ecde53a1ca46c891a79788c0df828d2f166bda28", size = 775116, upload-time = "2025-09-25T21:32:13.652Z" },
{ url = "https://files.pythonhosted.org/packages/65/30/d7353c338e12baef4ecc1b09e877c1970bd3382789c159b4f89d6a70dc09/pyyaml-6.0.3-cp312-cp312-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:5fdec68f91a0c6739b380c83b951e2c72ac0197ace422360e6d5a959d8d97b2c", size = 844011, upload-time = "2025-09-25T21:32:15.21Z" },
{ url = "https://files.pythonhosted.org/packages/8b/9d/b3589d3877982d4f2329302ef98a8026e7f4443c765c46cfecc8858c6b4b/pyyaml-6.0.3-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:ba1cc08a7ccde2d2ec775841541641e4548226580ab850948cbfda66a1befcdc", size = 807870, upload-time = "2025-09-25T21:32:16.431Z" },
{ url = "https://files.pythonhosted.org/packages/05/c0/b3be26a015601b822b97d9149ff8cb5ead58c66f981e04fedf4e762f4bd4/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:8dc52c23056b9ddd46818a57b78404882310fb473d63f17b07d5c40421e47f8e", size = 761089, upload-time = "2025-09-25T21:32:17.56Z" },
{ url = "https://files.pythonhosted.org/packages/be/8e/98435a21d1d4b46590d5459a22d88128103f8da4c2d4cb8f14f2a96504e1/pyyaml-6.0.3-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:41715c910c881bc081f1e8872880d3c650acf13dfa8214bad49ed4cede7c34ea", size = 790181, upload-time = "2025-09-25T21:32:18.834Z" },
{ url = "https://files.pythonhosted.org/packages/74/93/7baea19427dcfbe1e5a372d81473250b379f04b1bd3c4c5ff825e2327202/pyyaml-6.0.3-cp312-cp312-win32.whl", hash = "sha256:96b533f0e99f6579b3d4d4995707cf36df9100d67e0c8303a0c55b27b5f99bc5", size = 137658, upload-time = "2025-09-25T21:32:20.209Z" },
{ url = "https://files.pythonhosted.org/packages/86/bf/899e81e4cce32febab4fb42bb97dcdf66bc135272882d1987881a4b519e9/pyyaml-6.0.3-cp312-cp312-win_amd64.whl", hash = "sha256:5fcd34e47f6e0b794d17de1b4ff496c00986e1c83f7ab2fb8fcfe9616ff7477b", size = 154003, upload-time = "2025-09-25T21:32:21.167Z" },
{ url = "https://files.pythonhosted.org/packages/1a/08/67bd04656199bbb51dbed1439b7f27601dfb576fb864099c7ef0c3e55531/pyyaml-6.0.3-cp312-cp312-win_arm64.whl", hash = "sha256:64386e5e707d03a7e172c0701abfb7e10f0fb753ee1d773128192742712a98fd", size = 140344, upload-time = "2025-09-25T21:32:22.617Z" },
{ url = "https://files.pythonhosted.org/packages/d1/11/0fd08f8192109f7169db964b5707a2f1e8b745d4e239b784a5a1dd80d1db/pyyaml-6.0.3-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8", size = 181669, upload-time = "2025-09-25T21:32:23.673Z" },
{ url = "https://files.pythonhosted.org/packages/b1/16/95309993f1d3748cd644e02e38b75d50cbc0d9561d21f390a76242ce073f/pyyaml-6.0.3-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1", size = 173252, upload-time = "2025-09-25T21:32:25.149Z" },
{ url = "https://files.pythonhosted.org/packages/50/31/b20f376d3f810b9b2371e72ef5adb33879b25edb7a6d072cb7ca0c486398/pyyaml-6.0.3-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c", size = 767081, upload-time = "2025-09-25T21:32:26.575Z" },
{ url = "https://files.pythonhosted.org/packages/49/1e/a55ca81e949270d5d4432fbbd19dfea5321eda7c41a849d443dc92fd1ff7/pyyaml-6.0.3-cp313-cp313-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5", size = 841159, upload-time = "2025-09-25T21:32:27.727Z" },
{ url = "https://files.pythonhosted.org/packages/74/27/e5b8f34d02d9995b80abcef563ea1f8b56d20134d8f4e5e81733b1feceb2/pyyaml-6.0.3-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6", size = 801626, upload-time = "2025-09-25T21:32:28.878Z" },
{ url = "https://files.pythonhosted.org/packages/f9/11/ba845c23988798f40e52ba45f34849aa8a1f2d4af4b798588010792ebad6/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6", size = 753613, upload-time = "2025-09-25T21:32:30.178Z" },
{ url = "https://files.pythonhosted.org/packages/3d/e0/7966e1a7bfc0a45bf0a7fb6b98ea03fc9b8d84fa7f2229e9659680b69ee3/pyyaml-6.0.3-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be", size = 794115, upload-time = "2025-09-25T21:32:31.353Z" },
{ url = "https://files.pythonhosted.org/packages/de/94/980b50a6531b3019e45ddeada0626d45fa85cbe22300844a7983285bed3b/pyyaml-6.0.3-cp313-cp313-win32.whl", hash = "sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26", size = 137427, upload-time = "2025-09-25T21:32:32.58Z" },
{ url = "https://files.pythonhosted.org/packages/97/c9/39d5b874e8b28845e4ec2202b5da735d0199dbe5b8fb85f91398814a9a46/pyyaml-6.0.3-cp313-cp313-win_amd64.whl", hash = "sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c", size = 154090, upload-time = "2025-09-25T21:32:33.659Z" },
{ url = "https://files.pythonhosted.org/packages/73/e8/2bdf3ca2090f68bb3d75b44da7bbc71843b19c9f2b9cb9b0f4ab7a5a4329/pyyaml-6.0.3-cp313-cp313-win_arm64.whl", hash = "sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb", size = 140246, upload-time = "2025-09-25T21:32:34.663Z" },
{ url = "https://files.pythonhosted.org/packages/9d/8c/f4bd7f6465179953d3ac9bc44ac1a8a3e6122cf8ada906b4f96c60172d43/pyyaml-6.0.3-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac", size = 181814, upload-time = "2025-09-25T21:32:35.712Z" },
{ url = "https://files.pythonhosted.org/packages/bd/9c/4d95bb87eb2063d20db7b60faa3840c1b18025517ae857371c4dd55a6b3a/pyyaml-6.0.3-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310", size = 173809, upload-time = "2025-09-25T21:32:36.789Z" },
{ url = "https://files.pythonhosted.org/packages/92/b5/47e807c2623074914e29dabd16cbbdd4bf5e9b2db9f8090fa64411fc5382/pyyaml-6.0.3-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7", size = 766454, upload-time = "2025-09-25T21:32:37.966Z" },
{ url = "https://files.pythonhosted.org/packages/02/9e/e5e9b168be58564121efb3de6859c452fccde0ab093d8438905899a3a483/pyyaml-6.0.3-cp314-cp314-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788", size = 836355, upload-time = "2025-09-25T21:32:39.178Z" },
{ url = "https://files.pythonhosted.org/packages/88/f9/16491d7ed2a919954993e48aa941b200f38040928474c9e85ea9e64222c3/pyyaml-6.0.3-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5", size = 794175, upload-time = "2025-09-25T21:32:40.865Z" },
{ url = "https://files.pythonhosted.org/packages/dd/3f/5989debef34dc6397317802b527dbbafb2b4760878a53d4166579111411e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764", size = 755228, upload-time = "2025-09-25T21:32:42.084Z" },
{ url = "https://files.pythonhosted.org/packages/d7/ce/af88a49043cd2e265be63d083fc75b27b6ed062f5f9fd6cdc223ad62f03e/pyyaml-6.0.3-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35", size = 789194, upload-time = "2025-09-25T21:32:43.362Z" },
{ url = "https://files.pythonhosted.org/packages/23/20/bb6982b26a40bb43951265ba29d4c246ef0ff59c9fdcdf0ed04e0687de4d/pyyaml-6.0.3-cp314-cp314-win_amd64.whl", hash = "sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac", size = 156429, upload-time = "2025-09-25T21:32:57.844Z" },
{ url = "https://files.pythonhosted.org/packages/f4/f4/a4541072bb9422c8a883ab55255f918fa378ecf083f5b85e87fc2b4eda1b/pyyaml-6.0.3-cp314-cp314-win_arm64.whl", hash = "sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3", size = 143912, upload-time = "2025-09-25T21:32:59.247Z" },
{ url = "https://files.pythonhosted.org/packages/7c/f9/07dd09ae774e4616edf6cda684ee78f97777bdd15847253637a6f052a62f/pyyaml-6.0.3-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3", size = 189108, upload-time = "2025-09-25T21:32:44.377Z" },
{ url = "https://files.pythonhosted.org/packages/4e/78/8d08c9fb7ce09ad8c38ad533c1191cf27f7ae1effe5bb9400a46d9437fcf/pyyaml-6.0.3-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba", size = 183641, upload-time = "2025-09-25T21:32:45.407Z" },
{ url = "https://files.pythonhosted.org/packages/7b/5b/3babb19104a46945cf816d047db2788bcaf8c94527a805610b0289a01c6b/pyyaml-6.0.3-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c", size = 831901, upload-time = "2025-09-25T21:32:48.83Z" },
{ url = "https://files.pythonhosted.org/packages/8b/cc/dff0684d8dc44da4d22a13f35f073d558c268780ce3c6ba1b87055bb0b87/pyyaml-6.0.3-cp314-cp314t-manylinux2014_s390x.manylinux_2_17_s390x.manylinux_2_28_s390x.whl", hash = "sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702", size = 861132, upload-time = "2025-09-25T21:32:50.149Z" },
{ url = "https://files.pythonhosted.org/packages/b1/5e/f77dc6b9036943e285ba76b49e118d9ea929885becb0a29ba8a7c75e29fe/pyyaml-6.0.3-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c", size = 839261, upload-time = "2025-09-25T21:32:51.808Z" },
{ url = "https://files.pythonhosted.org/packages/ce/88/a9db1376aa2a228197c58b37302f284b5617f56a5d959fd1763fb1675ce6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065", size = 805272, upload-time = "2025-09-25T21:32:52.941Z" },
{ url = "https://files.pythonhosted.org/packages/da/92/1446574745d74df0c92e6aa4a7b0b3130706a4142b2d1a5869f2eaa423c6/pyyaml-6.0.3-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65", size = 829923, upload-time = "2025-09-25T21:32:54.537Z" },
{ url = "https://files.pythonhosted.org/packages/f0/7a/1c7270340330e575b92f397352af856a8c06f230aa3e76f86b39d01b416a/pyyaml-6.0.3-cp314-cp314t-win_amd64.whl", hash = "sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9", size = 174062, upload-time = "2025-09-25T21:32:55.767Z" },
{ url = "https://files.pythonhosted.org/packages/f1/12/de94a39c2ef588c7e6455cfbe7343d3b2dc9d6b6b2f40c4c6565744c873d/pyyaml-6.0.3-cp314-cp314t-win_arm64.whl", hash = "sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b", size = 149341, upload-time = "2025-09-25T21:32:56.828Z" },
]
[[package]]
name = "s3transfer"
version = "0.11.4"
@@ -5468,6 +5536,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/4c/82/1ee2e5c9d28deac086ab3a6ff07c8bc393ef013a083f546c623699881715/types_awscrt-0.27.2-py3-none-any.whl", hash = "sha256:49a045f25bbd5ad2865f314512afced933aed35ddbafc252e2268efa8a787e4e", size = 37761, upload-time = "2025-05-16T03:10:07.466Z" },
]
[[package]]
name = "types-pyyaml"
version = "6.0.12.20250915"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/7e/69/3c51b36d04da19b92f9e815be12753125bd8bc247ba0470a982e6979e71c/types_pyyaml-6.0.12.20250915.tar.gz", hash = "sha256:0f8b54a528c303f0e6f7165687dd33fafa81c807fcac23f632b63aa624ced1d3", size = 17522, upload-time = "2025-09-15T03:01:00.728Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/bd/e0/1eed384f02555dde685fff1a1ac805c1c7dcb6dd019c916fe659b1c1f9ec/types_pyyaml-6.0.12.20250915-py3-none-any.whl", hash = "sha256:e7d4d9e064e89a3b3cae120b4990cd370874d2bf12fa5f46c97018dd5d3c9ab6", size = 20338, upload-time = "2025-09-15T03:00:59.218Z" },
]
[[package]]
name = "types-s3transfer"
version = "0.12.0"


@@ -0,0 +1,41 @@
networks:
gitea:
external: false
services:
server:
image: docker.gitea.com/gitea:1.25.3
container_name: gitea
environment:
- USER_UID=1001
- USER_GID=1001
- GITEA__database__DB_TYPE=postgres
- GITEA__database__HOST=db:5432
- GITEA__database__NAME=gitea
- GITEA__database__USER=gitea
- GITEA__database__PASSWD=gitea
restart: always
networks:
- gitea
volumes:
- /srv/gitea-data/data:/data
- /srv/gitea-data/custom:/var/lib/gitea/custom
- /etc/timezone:/etc/timezone:ro
- /etc/localtime:/etc/localtime:ro
ports:
- "3000:3000"
- "2222:22"
depends_on:
- db
db:
image: docker.io/library/postgres:15
restart: always
environment:
- POSTGRES_USER=gitea
- POSTGRES_PASSWORD=gitea
- POSTGRES_DB=gitea
networks:
- gitea
volumes:
- /srv/gitea-db/postgres:/var/lib/postgresql/data


@@ -1,6 +1,7 @@
# Gitea
- [Gitea](#gitea)
- [Gitea on Docker](#gitea-on-docker)
- [Gitea on Rootless Podman](#gitea-on-rootless-podman)
  - [A note on directories](#a-note-on-directories)
  - [Create the gitea user](#create-the-gitea-user)
@@ -14,6 +15,20 @@
- [Cache Cleanup](#cache-cleanup)
- [Email Notifications](#email-notifications)
## Gitea on Docker
<https://docs.gitea.com/installation/install-with-docker>
Prereqs
1. Mount data dirs at `/srv/gitea-data` and `/srv/gitea-db`
2. Create a gitea user and update gitea-compose.yaml with the correct UID
```bash
scp active/podman_gitea/gitea-compose.yaml gitea:
docker compose -f gitea-compose.yaml up -d
```
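The `GITEA__section__KEY` variables in the compose file follow Gitea's environment-to-ini convention, so the database settings above are equivalent to this `app.ini` fragment (shown for reference only — Gitea writes the real file itself on first run):

```ini
; app.ini equivalent of the GITEA__database__* environment variables
[database]
DB_TYPE = postgres
HOST    = db:5432
NAME    = gitea
USER    = gitea
PASSWD  = gitea
```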
## Gitea on Rootless Podman
### A note on directories
@@ -141,8 +156,6 @@ you have.
### Install
```bash
touch config.yaml
export GITEA_TOKEN=
docker run \
-v /var/run/docker.sock:/var/run/docker.sock \
@@ -197,6 +210,10 @@ docker builder prune -a
To run it every day at noon: `crontab -e`
```bash
dnf install cronie cronie-anacron
```
```bash
0 12 * * * yes | docker builder prune -a
0 12 * * * docker image prune -a -f


@@ -2,7 +2,8 @@
- [Podman immich](#podman-immich)
  - [Setup immich Project](#setup-immich-project)
  - [Install immich with Docker](#install-immich-with-docker)
- [Install immich with Rootless Podman](#install-immich-with-rootless-podman)
  - [Create the immich user](#create-the-immich-user)
  - [Write the immich compose spec](#write-the-immich-compose-spec)
  - [A Note on Volumes](#a-note-on-volumes)
@@ -30,7 +31,19 @@
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer
## Install immich with Docker
<https://docs.immich.app/install/docker-compose/>
```bash
scp active/podman_immich/release-compose.yaml immich:
scp active/podman_immich/release-env immich:.env
mkdir /srv/immich
docker compose -f release-compose.yaml up -d
```
## Install immich with Rootless Podman
### Create the immich user
@@ -157,7 +170,7 @@ ssh immich systemctl --user restart immich
npm i -g @immich/cli
# immich login [url] [key]
immich login http://192.168.1.216:2283/api <key here>
# Check the upload
immich upload --dry-run --recursive directory/


@@ -21,7 +21,7 @@ services:
- ${UPLOAD_LOCATION}:/data
- /etc/localtime:/etc/localtime:ro
env_file:
- /root/.env
ports:
- '2283:2283'
depends_on:
@@ -42,14 +42,14 @@ services:
volumes:
- model-cache:/cache
env_file:
- /root/.env
restart: always
healthcheck:
disable: false
redis:
container_name: immich_redis
image: docker.io/valkey/valkey:9@sha256:fb8d272e529ea567b9bf1302245796f21a2672b8368ca3fcb938ac334e613c8f
healthcheck:
test: redis-cli ping || exit 1
restart: always


@@ -1,10 +1,10 @@
# You can find documentation for all the supported env variables at https://docs.immich.app/install/environment-variables
# The location where your uploaded files are stored
UPLOAD_LOCATION=/srv/immich-data/library
# The location where your database files are stored. Network shares are not supported for the database
DB_DATA_LOCATION=/srv/immich-db/postgres
# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
# TZ=Etc/UTC


@@ -0,0 +1,17 @@
services:
jellyfin:
image: jellyfin/jellyfin
container_name: jellyfin
ports:
- 8096:8096/tcp
- 7359:7359/udp
volumes:
- /srv/jellyfin/config:/config
- /srv/jellyfin/cache:/cache
- type: bind
source: /mnt/media
target: /media
read_only: true
restart: 'always'
environment:
- JELLYFIN_PublishedServerUrl=https://jellyfin.reeseapps.com


@@ -1,7 +1,8 @@
# Jellyfin
- [Jellyfin](#jellyfin)
  - [Docker Install](#docker-install)
- [Rootless Podman Install](#rootless-podman-install)
  - [Upgrade](#upgrade)
  - [Mounting Media Directory](#mounting-media-directory)
@@ -9,7 +10,11 @@ They have podman rootless instructions!
<https://jellyfin.org/docs/general/installation/container/#managing-via-systemd>
## Docker Install
<https://jellyfin.org/docs/general/installation/container>
## Rootless Podman Install
1. Create the jellyfin user


@@ -0,0 +1,3 @@
# Compose
Put your compose.yaml here.


@@ -0,0 +1,19 @@
services:
keycloak:
container_name: keycloak
restart: always
image: quay.io/keycloak/keycloak:26.4.5
ports:
- "9443:443"
- "8443:8443"
volumes:
- /home/foobar/data:/var/app/data
security_opt:
- label=disable
userns_mode: keep-id
command:
- "start"
- "--hostname"
- "https://keycloak.reeseapps.com"
- "--hostname-admin"
- "https://keycloak.reeselink.com:8443"


@@ -0,0 +1,186 @@
# Podman keycloak
- [Podman keycloak](#podman-keycloak)
- [Setup keycloak Project](#setup-keycloak-project)
- [Install Keycloak with Docker](#install-keycloak-with-docker)
- [Install Keycloak with Podman](#install-keycloak-with-podman)
- [Create the keycloak user](#create-the-keycloak-user)
- [Write the keycloak compose spec](#write-the-keycloak-compose-spec)
- [A Note on Volumes](#a-note-on-volumes)
- [Convert keycloak compose spec to quadlets](#convert-keycloak-compose-spec-to-quadlets)
- [Create any container-mounted directories](#create-any-container-mounted-directories)
- [Start and enable your systemd quadlet](#start-and-enable-your-systemd-quadlet)
- [Expose keycloak](#expose-keycloak)
- [firewalld](#firewalld)
- [Backup keycloak](#backup-keycloak)
- [Upgrade keycloak](#upgrade-keycloak)
- [Upgrade Quadlets](#upgrade-quadlets)
- [Uninstall](#uninstall)
- [Notes](#notes)
- [SELinux](#selinux)
## Setup keycloak Project
- [ ] Copy and rename this folder to active/podman_keycloak
- [ ] Find and replace keycloak with the name of the service.
- [ ] Create the rootless user to run the podman containers
- [ ] Write the compose.yaml spec for your service
- [ ] Convert the compose.yaml spec to a quadlet
- [ ] Install the quadlet on the podman server
- [ ] Expose the quadlet service
- [ ] Install a backup service and timer
## Install Keycloak with Docker
<https://www.keycloak.org/getting-started/getting-started-docker>
```bash
# Test in dev mode
docker run -p 8080:8080 -e KC_BOOTSTRAP_ADMIN_USERNAME=admin -e KC_BOOTSTRAP_ADMIN_PASSWORD=admin quay.io/keycloak/keycloak:26.4.7 start-dev
```
## Install Keycloak with Podman
### Create the keycloak user
```bash
# SSH into your podman server as root
useradd keycloak
loginctl enable-linger $(id -u keycloak)
systemctl --user --machine=keycloak@.host enable podman-restart
systemctl --user --machine=keycloak@.host enable --now podman.socket
mkdir -p /home/keycloak/.config/containers/systemd
```
### Write the keycloak compose spec
<https://www.keycloak.org/getting-started/getting-started-podman>
```bash
# Based on the example
podman run \
-p 127.0.0.1:8080:8080 \
-e KC_BOOTSTRAP_ADMIN_USERNAME=admin \
-e KC_BOOTSTRAP_ADMIN_PASSWORD=admin \
quay.io/keycloak/keycloak:26.4.5 start-dev
```
#### A Note on Volumes
Named volumes are stored at `/home/keycloak/.local/share/containers/storage/volumes/`.
### Convert keycloak compose spec to quadlets
Run the following to convert a compose.yaml into the various `.container` files for systemd:
```bash
# Generate the systemd service
podman run \
--security-opt label=disable \
--rm \
-v $(pwd)/active/podman_keycloak/:/compose \
-v $(pwd)/active/podman_keycloak/quadlets:/quadlets \
quay.io/k9withabone/podlet \
-f /quadlets \
-i \
--overwrite \
compose /compose/compose.yaml
# Copy the files to the server
export PODMAN_SERVER=
scp -r active/podman_keycloak/quadlets/. $PODMAN_SERVER:/home/keycloak/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R keycloak:keycloak /home/keycloak/.config/containers/systemd/
```
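For reference, podlet emits one `.container` unit per compose service; for the compose spec above the generated quadlet looks roughly like this (illustrative — the exact keys depend on the compose fields, so treat the file podlet actually writes as authoritative):

```ini
# keycloak.container (illustrative sketch of podlet output)
[Container]
ContainerName=keycloak
Image=quay.io/keycloak/keycloak:26.4.5
PublishPort=9443:443
PublishPort=8443:8443
Volume=/home/foobar/data:/var/app/data
SecurityLabelDisable=true
UserNS=keep-id
Exec=start --hostname https://keycloak.reeseapps.com --hostname-admin https://keycloak.reeselink.com:8443

[Service]
Restart=always

[Install]
WantedBy=default.target
```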
### Create any container-mounted directories
SSH into your podman server as root:
```bash
machinectl shell keycloak@
podman unshare
mkdir some_volume
# Chown to the namespaced user with UID 1000
# This will be some really obscure UID outside the namespace
# This will also solve most permission denied errors
chown -R 1000:1000 some_volume
```
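The "really obscure UID outside the namespace" comes from the subuid mapping: container UID 0 is the rootless user itself, and container UIDs 1..N map onto the range assigned in `/etc/subuid`. A quick sanity check of the arithmetic (the start value 100000 below is an example — read the real one from `/etc/subuid`):

```bash
SUBUID_START=100000   # example: an /etc/subuid entry like "keycloak:100000:65536"
CONTAINER_UID=1000
# Container UID 1 maps to SUBUID_START, so container UID 1000
# maps to SUBUID_START + 1000 - 1 on the host.
HOST_UID=$((SUBUID_START + CONTAINER_UID - 1))
echo "$HOST_UID"   # prints 100999 with the example values above
```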
### Start and enable your systemd quadlet
SSH into your podman server as root:
```bash
machinectl shell keycloak@
systemctl --user daemon-reload
systemctl --user restart keycloak
# Enable auto-update service which will pull new container images automatically every day
systemctl --user enable --now podman-auto-update.timer
```
### Expose keycloak
1. If you need a domain, follow the [DDNS instructions](/active/podman_ddns/ddns.md#install-a-new-ddns-service)
2. For a web service, follow the [Caddy instructions](/active/podman_caddy/caddy.md#adding-a-new-caddy-record)
3. Finally, follow your OS's guide for opening ports via its firewall service.
#### firewalld
```bash
# command to get current active zone and default zone
firewall-cmd --get-active-zones
firewall-cmd --get-default-zone
# command to open 443 on tcp
firewall-cmd --permanent --zone=<zone> --add-port=443/tcp
# command to open 80 and 443 on tcp and udp
firewall-cmd --permanent --zone=<zone> --add-port={80,443}/{tcp,udp}
# command to list available services and then open http and https
firewall-cmd --get-services
firewall-cmd --permanent --zone=<zone> --add-service={http,https}
# permanent rules only take effect after a reload
firewall-cmd --reload
```
## Backup keycloak
Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
## Upgrade keycloak
### Upgrade Quadlets
Upgrades should be a repeat of [writing the compose spec](#convert-keycloak-compose-spec-to-quadlets) and [installing the quadlets](#start-and-enable-your-systemd-quadlet)
```bash
export PODMAN_SERVER=
scp -r quadlets/. $PODMAN_SERVER:/home/keycloak/.config/containers/systemd/
ssh keycloak systemctl --user daemon-reload
ssh keycloak systemctl --user restart keycloak
```
## Uninstall
```bash
# Stop the user's services
systemctl --user disable podman-restart
podman container stop --all
systemctl --user disable --now podman.socket
systemctl --user disable --now podman-auto-update.timer
# Delete the user (this won't delete their home directory)
# userdel might spit out an error like:
# userdel: user keycloak is currently used by process 591255
# kill those processes and try again
userdel keycloak
```
## Notes
### SELinux
<https://blog.christophersmart.com/2021/01/31/podman-volumes-and-selinux/>
`:z` allows a container to share a mounted volume with all other containers.
`:Z` lets a container reserve a mounted volume and prevents any other container from accessing it.
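As a hypothetical example of the difference (host paths here are made up):

```bash
# :z relabels with a shared label; other containers can also mount /srv/shared
podman run --rm -v /srv/shared:/data:z docker.io/library/alpine ls /data
# :Z relabels with a private label; only this container can use /srv/private
podman run --rm -v /srv/private:/data:Z docker.io/library/alpine ls /data
```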

View File

@@ -0,0 +1,3 @@
# Quadlets
Put your quadlets here.

View File

@@ -2,6 +2,7 @@
- [Local AI with Anything LLM](#local-ai-with-anything-llm)
- [Useful links I keep losing](#useful-links-i-keep-losing)
- [Podman](#podman)
- [Running Local AI on Ubuntu 24.04 with Nvidia GPU](#running-local-ai-on-ubuntu-2404-with-nvidia-gpu)
- [Running Local AI on Arch with AMD GPU](#running-local-ai-on-arch-with-amd-gpu)
- [Running Anything LLM](#running-anything-llm)
@@ -32,6 +33,12 @@
- [Example model config files from gallery](https://github.com/mudler/LocalAI/tree/master/gallery)
- [List of all available models](https://github.com/mudler/LocalAI/blob/master/gallery/index.yaml)
## Podman
```bash
podman run -ti --name local-ai -p 8081:8080 --device=/dev/kfd --device=/dev/dri --group-add=video --replace localai/localai:latest-gpu-vulkan
```
## Running Local AI on Ubuntu 24.04 with Nvidia GPU
```bash
@@ -124,7 +131,7 @@ pipx install "huggingface_hub[cli]"
podman network create --ipv6 --label local-ai local-ai
# You might want to mount an external drive here.
export MODEL_DIR=/srv/models
mkdir -p $MODEL_DIR
# LOCALAI_SINGLE_ACTIVE_BACKEND will unload the previous model before loading the next one
@@ -136,14 +143,13 @@ mkdir -p $MODEL_DIR
podman run \
-d \
-p 8080:8080 \
-e LOCALAI_SINGLE_ACTIVE_BACKEND=true \
--device /dev/dri \
--device /dev/kfd \
--name local-ai \
--replace \
-v $MODEL_DIR:/build/models:z \
-v localai-tmp:/tmp/generated:z \
quay.io/go-skynet/local-ai:master-hipblas-ffmpeg
# The second (8081) will be our frontend. We'll protect it with basic auth.
@@ -153,9 +159,9 @@ podman run \
-d \
-p 8081:8080 \
--name local-ai-webui \
--replace \
-v $MODEL_DIR:/build/models:z \
-v localai-tmp:/tmp/generated:z \
quay.io/go-skynet/local-ai:master-hipblas-ffmpeg
```

View File

@@ -1,31 +0,0 @@
services:
testcraft:
image: gitea.reeseapps.com/services/minecraft:c1ca80b09b4645888e550efb0a2700b2ec1f1645
stdin_open: true
tty: true
volumes:
- /home/minecraft/testcraft:/mc_data
ports:
- 25565:25565
environment:
- MAX_RAM=4
- MIN_RAM=1
security_opt:
- "label=disable"
userns_mode: keep-id
restart: always
nimcraft:
image: gitea.reeseapps.com/services/minecraft:c1ca80b09b4645888e550efb0a2700b2ec1f1645
stdin_open: true
tty: true
volumes:
- /home/minecraft/nimcraft:/mc_data
ports:
- 25566:25565
environment:
- MAX_RAM=4
- MIN_RAM=1
security_opt:
- "label=disable"
userns_mode: keep-id
restart: always

View File

@@ -0,0 +1,25 @@
services:
testcraft:
image: gitea.reeseapps.com/services/minecraft:latest
stdin_open: true
tty: true
volumes:
- /srv/minecraft/testcraft:/mc_data
ports:
- 25565:25565
environment:
- MAX_RAM=4
- MIN_RAM=1
restart: always
nimcraft:
image: gitea.reeseapps.com/services/minecraft:latest
stdin_open: true
tty: true
volumes:
- /srv/minecraft/nimcraft:/mc_data
ports:
- 25566:25565
environment:
- MAX_RAM=4
- MIN_RAM=1
restart: always

View File

@@ -1,12 +1,14 @@
# Nextcloud AIO
- [Nextcloud AIO](#nextcloud-aio)
- [Recommended Install](#recommended-install)
- [Install with Rootless Podman](#install-with-rootless-podman)
- [Create the nextcloud user](#create-the-nextcloud-user)
- [Create the container autostart service](#create-the-container-autostart-service)
- [Install Nextcloud](#install-nextcloud)
- [Install Caddy](#install-caddy)
- [Firewall](#firewall)
- [Install with Docker](#install-with-docker)
- [Backups](#backups)
- [Manual Backups](#manual-backups)
- [Maintenance Mode](#maintenance-mode)
@@ -27,6 +29,16 @@
<https://github.com/nextcloud/all-in-one>
## Recommended Install
<https://github.com/nextcloud/all-in-one#nextcloud-all-in-one>
1. Create Fedora VM
2. [Install Docker](https://docs.docker.com/engine/install/fedora/)
3. Create and mount a directory at `/srv/nextcloud-data`
4. `scp active/podman_nextcloud/nextcloud-compose.yaml nextcloud:`
5. `docker compose -f nextcloud-compose.yaml up -d`
## Install with Rootless Podman
Roughly taken from <https://github.com/nextcloud/all-in-one/discussions/3487>
@@ -158,6 +170,25 @@ systemctl start caddy
Allow traffic to 11000 from your reverse proxy
## Install with Docker
```bash
# For Linux and without a web server or reverse proxy already in place:
sudo docker run \
--init \
--sig-proxy=false \
--name nextcloud-aio-mastercontainer \
--restart always \
--publish 8080:8080 \
--env APACHE_PORT=11000 \
--env APACHE_IP_BINDING=0.0.0.0 \
--env APACHE_ADDITIONAL_NETWORK="" \
--env SKIP_DOMAIN_VALIDATION=false \
--volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
--volume /var/run/docker.sock:/var/run/docker.sock:ro \
ghcr.io/nextcloud-releases/all-in-one:latest
```
## Backups
IMPORTANT: you will need both KEY AND PASSPHRASE to access this repo!

View File

@@ -0,0 +1,42 @@
name: nextcloud-aio # Add the container to the same compose project like all the sibling containers are added to automatically.
services:
nextcloud-aio-mastercontainer:
image: ghcr.io/nextcloud-releases/all-in-one:latest # This is the container image used. You can switch to ghcr.io/nextcloud-releases/all-in-one:beta if you want to help testing new releases. See https://github.com/nextcloud/all-in-one#how-to-switch-the-channel
init: true # This setting makes sure that signals from main process inside the container are correctly forwarded to children. See https://docs.docker.com/reference/compose-file/services/#init
restart: always # This makes sure that the container starts always together with the host OS. See https://docs.docker.com/reference/compose-file/services/#restart
container_name: nextcloud-aio-mastercontainer # This line is not allowed to be changed as otherwise AIO will not work correctly
volumes:
- nextcloud_aio_mastercontainer:/mnt/docker-aio-config # This line is not allowed to be changed as otherwise the built-in backup solution will not work
- /var/run/docker.sock:/var/run/docker.sock:ro # May be changed on macOS, Windows or docker rootless. See the applicable documentation. If adjusting, don't forget to also set 'WATCHTOWER_DOCKER_SOCKET_PATH'!
network_mode: bridge # This adds the container to the same network as docker run would do. Comment this line and uncomment the line below and the networks section at the end of the file if you want to define a custom MTU size for the docker network
ports:
- 8080:8080 # This is the AIO interface, served via https and self-signed certificate. See https://github.com/nextcloud/all-in-one#explanation-of-used-ports
security_opt: ["label:disable"] # Is needed when using SELinux. See https://github.com/nextcloud/all-in-one#are-there-known-problems-when-selinux-is-enabled
environment: # Is needed when using any of the options below
# AIO_DISABLE_BACKUP_SECTION: false # Setting this to true allows to hide the backup section in the AIO interface. See https://github.com/nextcloud/all-in-one#how-to-disable-the-backup-section
APACHE_PORT: 11000 # Is needed when running behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else). See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
APACHE_IP_BINDING: 0.0.0.0 # Should be set when running behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else) that is running on the same host. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
# APACHE_ADDITIONAL_NETWORK: frontend_net # (Optional) Connect the apache container to an additional docker network. Needed when behind a web server or reverse proxy (like Apache, Nginx, Caddy, Cloudflare Tunnel and else) running in a different docker network on same server. See https://github.com/nextcloud/all-in-one/blob/main/reverse-proxy.md
# BORG_RETENTION_POLICY: --keep-within=7d --keep-weekly=4 --keep-monthly=6 # Allows to adjust borgs retention policy. See https://github.com/nextcloud/all-in-one#how-to-adjust-borgs-retention-policy
# COLLABORA_SECCOMP_DISABLED: false # Setting this to true allows to disable Collabora's Seccomp feature. See https://github.com/nextcloud/all-in-one#how-to-disable-collaboras-seccomp-feature
# DOCKER_API_VERSION: 1.44 # You can adjust the internally used docker api version with this variable. ⚠️⚠️⚠️ Warning: please note that only the default api version (unset this variable) is supported and tested by the maintainers of Nextcloud AIO. So use this on your own risk and things might break without warning. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-internally-used-docker-api-version
# FULLTEXTSEARCH_JAVA_OPTIONS: "-Xms1024M -Xmx1024M" # Allows to adjust the fulltextsearch java options. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-fulltextsearch-java-options
NEXTCLOUD_DATADIR: /srv/nextcloud-data # Allows to set the host directory for Nextcloud's datadir. ⚠️⚠️⚠️ Warning: do not set or adjust this value after the initial Nextcloud installation is done! See https://github.com/nextcloud/all-in-one#how-to-change-the-default-location-of-nextclouds-datadir
# NEXTCLOUD_MOUNT: /mnt/ # Allows the Nextcloud container to access the chosen directory on the host. See https://github.com/nextcloud/all-in-one#how-to-allow-the-nextcloud-container-to-access-directories-on-the-host
NEXTCLOUD_UPLOAD_LIMIT: 128G # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-upload-limit-for-nextcloud
NEXTCLOUD_MAX_TIME: 3600 # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-max-execution-time-for-nextcloud
NEXTCLOUD_MEMORY_LIMIT: 1024M # Can be adjusted if you need more. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-php-memory-limit-for-nextcloud
# NEXTCLOUD_TRUSTED_CACERTS_DIR: /path/to/my/cacerts # CA certificates in this directory will be trusted by the OS of the nextcloud container (Useful e.g. for LDAPS) See https://github.com/nextcloud/all-in-one#how-to-trust-user-defined-certification-authorities-ca
# NEXTCLOUD_STARTUP_APPS: deck twofactor_totp tasks calendar contacts notes # Allows to modify the Nextcloud apps that are installed on starting AIO the first time. See https://github.com/nextcloud/all-in-one#how-to-change-the-nextcloud-apps-that-are-installed-on-the-first-startup
# NEXTCLOUD_ADDITIONAL_APKS: imagemagick # This allows to add additional packages to the Nextcloud container permanently. Default is imagemagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-os-packages-permanently-to-the-nextcloud-container
# NEXTCLOUD_ADDITIONAL_PHP_EXTENSIONS: imagick # This allows to add additional php extensions to the Nextcloud container permanently. Default is imagick but can be overwritten by modifying this value. See https://github.com/nextcloud/all-in-one#how-to-add-php-extensions-permanently-to-the-nextcloud-container
# NEXTCLOUD_ENABLE_DRI_DEVICE: true # This allows to enable the /dev/dri device for containers that profit from it. ⚠️⚠️⚠️ Warning: this only works if the '/dev/dri' device is present on the host! If it should not exist on your host, don't set this to true as otherwise the Nextcloud container will fail to start! See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-acceleration-for-nextcloud
# NEXTCLOUD_ENABLE_NVIDIA_GPU: true # This allows to enable the NVIDIA runtime and GPU access for containers that profit from it. ⚠️⚠️⚠️ Warning: this only works if an NVIDIA gpu is installed on the server. See https://github.com/nextcloud/all-in-one#how-to-enable-hardware-acceleration-for-nextcloud.
# NEXTCLOUD_KEEP_DISABLED_APPS: false # Setting this to true will keep Nextcloud apps that are disabled in the AIO interface and not uninstall them if they should be installed. See https://github.com/nextcloud/all-in-one#how-to-keep-disabled-apps
# SKIP_DOMAIN_VALIDATION: false # This should only be set to true if things are correctly configured. See https://github.com/nextcloud/all-in-one#how-to-skip-the-domain-validation
# TALK_PORT: 3478 # This allows to adjust the port that the talk container is using which is exposed on the host. See https://github.com/nextcloud/all-in-one#how-to-adjust-the-talk-port
# WATCHTOWER_DOCKER_SOCKET_PATH: /var/run/docker.sock # Needs to be specified if the docker socket on the host is not located in the default '/var/run/docker.sock'. Otherwise mastercontainer updates will fail. For macos it needs to be '/var/run/docker.sock'
volumes: # If you want to store the data on a different drive, see https://github.com/nextcloud/all-in-one#how-to-store-the-filesinstallation-on-a-separate-drive
nextcloud_aio_mastercontainer:
name: nextcloud_aio_mastercontainer # This line is not allowed to be changed as otherwise the built-in backup solution will not work

View File

@@ -1,20 +1,41 @@
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;
events {
    worker_connections 1024;
}
stream {
    log_format stream_logs '$remote_addr [$time_local] $protocol $status $bytes_sent $bytes_received $session_time "$upstream_addr"';
    access_log /dev/stdout stream_logs;
    error_log stderr info;
    server {
        listen 3478;
        listen [::]:3478;
        proxy_pass nextcloud.reeselink.com:3478;
    }
    server {
        listen 2222;
        listen [::]:2222;
        proxy_pass gitea.reeselink.com:2222;
    }
    server {
        listen 25565;
        listen [::]:25565;
        proxy_pass minecraft.reeselink.com:25565;
    }
    server {
        listen 25566;
        listen [::]:25566;
        proxy_pass minecraft.reeselink.com:25566;
    }
}

View File

@@ -4,8 +4,7 @@
```bash
# Get the initial configuration
scp active/podman_nginx/nginx.conf proxy:/etc/nginx/nginx.conf
```
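Before reloading, it's worth validating the copied config on the proxy host; this assumes nginx runs there as a regular host service:

```bash
# Check the config for syntax errors, then reload without dropping connections
nginx -t
systemctl reload nginx
```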
```bash

View File

@@ -84,11 +84,12 @@ pgrep ollama | xargs -I '%' sh -c 'kill %'
## Run Anything LLM Interface
```bash
docker run \
-d \
-p 3001:3001 \
--name anything-llm \
--cap-add SYS_ADMIN \
--restart always \
-v anything-llm:/app/server \
-e STORAGE_DIR="/app/server/storage" \
docker.io/mintplexlabs/anythingllm
```

View File

@@ -0,0 +1,9 @@
# Speedtest
<https://github.com/librespeed/speedtest/blob/master/doc_docker.md>
## Install with Podman
```bash
podman run -p 8081:8080 -d --name speedtest ghcr.io/librespeed/speedtest
```
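A quick check that the container came up and is serving; the port matches the `-p 8081:8080` mapping above:

```bash
podman ps --filter name=speedtest
# Expect an HTTP status line back from the frontend
curl -sI http://localhost:8081 | head -n 1
```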

View File

@@ -1,12 +1,12 @@
[Unit]
Description=Runs the Borg backup script at /usr/local/script/borg-backup.sh
After=network-online.target
Wants=network-online.target
[Service]
Restart=no
Type=oneshot
ExecStart=/usr/local/script/borg-backup.sh
[Install]
WantedBy=multi-user.target

View File

@@ -1,5 +1,7 @@
#!/bin/sh
set -x
{% for service in stop_services %}
systemctl stop {{ service }}
{% endfor %}
@@ -8,8 +10,12 @@ systemctl stop {{ service }}
systemctl --user --machine={{ systemd_user }}@.host stop {{ service }}
{% endfor %}
{% for compose in docker_compose_down %}
docker compose -f {{compose}} down
{% endfor %}
# Setting this, so the repo does not need to be given on the commandline:
export BORG_REPO={{ borg_user }}@{{ borg_host }}:.
# See the section "Passphrase notes" for more info.
export BORG_PASSPHRASE={{ borg_passphrase }}
@@ -34,6 +40,7 @@ borg create \
--exclude 'home/*/.cache/*' \
--exclude 'var/tmp/*' \
--exclude 'home/*/.snapshots/*' \
--exclude 'home/.snapshots/*' \
{% for dir in exclude_dirs %}
--exclude '{{ dir }}' \
{% endfor %}
@@ -53,6 +60,10 @@ systemctl start {{ service }}
systemctl --user --machine={{ systemd_user }}@.host start {{ service }}
{% endfor %}
{% for compose in docker_compose_down %}
docker compose -f {{compose}} up -d
{% endfor %}
info "Pruning repository"
# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly

View File

@@ -1,11 +1,11 @@
[Unit]
Description=Run borg-backup.service every day
[Timer]
OnCalendar=*-*-* 2:00:00
AccuracySec=10min
Persistent=true
Unit=borg-backup.service
[Install]
WantedBy=timers.target

View File

@@ -2,9 +2,7 @@
- [Borg Backup](#borg-backup)
- [Install Borg](#install-borg)
- [Set up a new client](#set-up-a-new-client)
- [Check backup service logs](#check-backup-service-logs)
- [Run a Manual Backup](#run-a-manual-backup)
- [Back up an Entire System](#back-up-and-entire-system)
@@ -31,11 +29,22 @@ touch /home/backup/.ssh/authorized_keys
chown -R backup:backup /home/backup/.ssh
```
## Set up a new client
Clients will either use a backup app (Pika) or the `backup.sh.j2` script in this directory.
We'll be using ssh authorized keys to control where the repo gets created. This keeps client configuration simple, since clients will just need to reference their repo at `.` rather than an absolute path.
For pika, you can set the backup location to something like `borg-backup:.` (assuming you have an entry matching `borg-backup` in your `.ssh/config`).
Backups run as the root user, so generate root an SSH key to connect to the borg server.
1. On the client server, install borg. On Fedora this is `dnf install borgbackup`.
2. On the client server as root (or as the laptop user, for pika), create an SSH key for borg
```bash
export BACKUP_HOST="borg.reeselink.com"
@@ -49,125 +58,58 @@ For backing up your laptop or personal account.
User backup
Port 22
EOF
```
3. On borg.reeselink.com as the `backup` user, allow that SSH key to access the repo
```bash
# Fill these out
export CLIENT_HOSTNAME="fqdn.reeseapps.com"
export SSH_PUBKEY="ssh-rsa abcd1234 backup@fqdn.something.com"
# Just copy and paste these
export BORG_COMMAND="cd /home/backup/repos/${CLIENT_HOSTNAME}; borg serve --restrict-to-path /home/backup/repos/${CLIENT_HOSTNAME}"
export AUTHKEY_ENTRY="command=\"${BORG_COMMAND}\",restrict ${SSH_PUBKEY}"
# Inspect the authkey entry
echo $AUTHKEY_ENTRY
# Add the authkey entry to authorized_keys and create the repo dir
echo $AUTHKEY_ENTRY >> ~/.ssh/authorized_keys
mkdir /home/backup/repos/${CLIENT_HOSTNAME}
```
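To sanity-check the generated entry before appending it to `authorized_keys`, you can build it with throwaway values; the hostname and key below are made up:

```bash
# Hypothetical values, just to show the shape of the generated entry
CLIENT_HOSTNAME="client.example.com"
SSH_PUBKEY="ssh-ed25519 AAAAexample backup@client.example.com"
BORG_COMMAND="cd /home/backup/repos/${CLIENT_HOSTNAME}; borg serve --restrict-to-path /home/backup/repos/${CLIENT_HOSTNAME}"
AUTHKEY_ENTRY="command=\"${BORG_COMMAND}\",restrict ${SSH_PUBKEY}"
echo "$AUTHKEY_ENTRY"
```

The `command=` forces every connection from that key into `borg serve`, and `restrict` disables port/agent forwarding and PTY allocation.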
4. On the client server, run `ssh borg.reeselink.com` once to accept the server's host key
5. Create the vars file for your client in `secrets/host_vars.yaml` like the following
```yaml
repo_name: my_repo
borg_user: backup
borg_host: borg.reeselink.com
borg_passphrase: ""
backup_dirs:
- /home/foobar
exclude_dirs: []
keep_daily: 7
keep_weekly: 4
keep_monthly: 1
stop_services: []
stop_user_services:
- foobar
docker_compose_down:
- /root/release-compose.yaml
```
6. Create the backup task with ansible
```bash
ansible-playbook \
-i ansible/inventory.yaml \
-l deskwork-root \
active/software_borg/install_backup.yaml \
-e "@active/software_borg/secrets/ai.deskwork_vars.yaml"
```
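Once the play has run, you can confirm the timer is loaded and when it will fire; `systemd-analyze calendar` will also evaluate the `OnCalendar` expression used by `backup.timer`:

```bash
systemctl list-timers borg-backup.timer
# Evaluate the OnCalendar expression from the timer unit
systemd-analyze calendar '*-*-* 2:00:00'
```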
## Check backup service logs
```bash

View File

@@ -8,32 +8,32 @@
    path: /usr/local/script
    state: directory
    mode: '0755'
- name: Copy borg-backup.service
  template:
    src: backup.service
    dest: /etc/systemd/system/borg-backup.service
    owner: root
    group: root
    mode: '0644'
- name: Copy borg-backup.timer
  template:
    src: backup.timer
    dest: /etc/systemd/system/borg-backup.timer
    owner: root
    group: root
    mode: '0644'
- name: Template borg-backup.sh
  template:
    src: backup.sh.j2
    dest: /usr/local/script/borg-backup.sh
    owner: root
    group: root
    mode: '0744'
- name: Initialize Repo {{ borg_user }}@{{ borg_host }}:.
  script: /usr/bin/borg init -e none {{ borg_user }}@{{ borg_host }}:.
  ignore_errors: yes
- name: Reload backup timer borg-backup.timer
  ansible.builtin.systemd_service:
    name: borg-backup.timer
    enabled: true
    daemon_reload: true

View File

@@ -1,21 +1,29 @@
# BTRFS
- [BTRFS](#btrfs)
- [Naming Conventions](#naming-conventions)
- [Creating an Array](#creating-an-array)
- [Converting an Array Between RAID Versions](#converting-an-array-between-raid-versions)
- [Mounting the Array](#mounting-the-array)
- [Adding Disks](#adding-disks)
- [Replacing a Disk](#replacing-a-disk)
- [Scrubbing the Array](#scrubbing-the-array)
- [Creating Subvolumes](#creating-subvolumes)
- [Monitoring Usage](#monitoring-usage)
- [Encrypting BTRFS with LUKS](#encrypting-btrfs-with-luks)
- [Monitoring Disk Health](#monitoring-disk-health)
- [Defragmenting and Compressing](#defragmenting-and-compressing)
- [Converting ext4 to btrfs](#converting-ext4-to-btrfs)
Oracle [has decent docs here](https://docs.oracle.com/en/operating-systems/oracle-linux/8/btrfs/btrfs-ResizingaBtrfsFileSystem.html)
You'll also want to [read about btrfs compression](https://thelinuxcode.com/enable-btrfs-filesystem-compression/)
## Naming Conventions
`poolX` is my naming convention for data pools. `pool0` is the first pool you create.
`backupX` is my naming convention for backup pools. `backup0` is the first backup pool you create.
## Creating an Array
```bash
@@ -39,11 +47,21 @@ btrfs device scan
# Raid10
mkfs.btrfs --data raid10 --metadata raid10 /dev/vdb /dev/vdc /dev/vdd /dev/vde
btrfs device scan
```
Label your arrays for easier identification in btrfs filesystem information commands
```bash
btrfs filesystem label /btrfs/pool0 pool0
```
## Converting an Array Between RAID Versions
```bash
# Convert to raid1
# -dconvert == "data convert"
# -mconvert == "metadata convert"
btrfs balance start -dconvert=raid1 -mconvert=raid1 /btrfs/pool0
btrfs balance status /btrfs/pool0
```
@@ -53,48 +71,59 @@ One off
```bash
# Create a mount point
mkdir /btrfs/pool0
# List the filesystem UUID
lsblk --fs
# Mount the top level subvolume
mount UUID=xxxxx-xxxxx-xxxxx /btrfs/pool0 -o subvolid=5
# Mount with better SSD support
mount UUID=xxxxx-xxxxx-xxxxx /btrfs/pool0 -o subvolid=5,ssd
# Mount with auto defragmentation for HDD support
mount UUID=xxxxx-xxxxx-xxxxx /btrfs/pool0 -o subvolid=5,autodefrag
# Mount a subvolume
mount UUID=xxxxx-xxxxx-xxxxx /btrfs/pool0 -o subvol=home
# Inspect
btrfs filesystem show /btrfs/pool0
```
In fstab
```conf
UUID=btrfs_uuid /btrfs/pool0 btrfs defaults 0 0
```
## Adding Disks
```bash
# Add a disk
btrfs device add /dev/vdd /btrfs/pool0
# Balance the array
btrfs balance start /btrfs/pool0
# Watch the expansion
btrfs filesystem usage /btrfs/pool0
```
## Replacing a Disk
```bash
# Remove a disk from the array
# This may take a while, as btrfs will rebalance the array during this process
btrfs device remove /dev/vdb /btrfs/pool0
# You can watch the device "used" data drain with
watch btrfs filesystem show /btrfs/pool0/
# Add the new device
# Again, this may take a while as btrfs rebalances.
btrfs device add /dev/vdg /btrfs/pool0
```
## Scrubbing the Array
```bash
# Start a scrub to check for errors
# -B prevents the process from going to the background
# -d prints stats for each device
btrfs scrub start -Bd /btrfs/pool0
# Check the status of a scrub
btrfs scrub status /btrfs/pool0
# Watch for disk failures
dmesg | grep btrfs
```
## Subvolumes
```bash
# Create a new subvolume (make sure to mount /btrfs/pool0 with subvolid=5)
btrfs subvolume create /btrfs/pool0/foo
# List the subvolumes below a path
btrfs subvolume list -o /btrfs/pool0
# Delete a subvolume
btrfs subvolume delete /btrfs/pool0/foo
```
## Monitoring Usage
```bash
btrfs filesystem show
# Show usage for a specific array
btrfs filesystem usage /btrfs/pool0
# Quick command to filter for data used
btrfs filesystem usage /btrfs/pool0 | grep 'Data.*Used'
```
## Encrypting BTRFS with LUKS
```bash
export KEYFILE_PATH=/root/btrfs.keyfile
export LUKS_DEVS="sdb sdc sdd sde sdf sdg sdh"
# Create a key file
dd if=/dev/urandom of=${KEYFILE_PATH} bs=128 count=1
chmod 400 ${KEYFILE_PATH}
# Create partitions
for luks_dev in $LUKS_DEVS; do
echo Creating partition for /dev/$luks_dev
parted -s -a optimal -- /dev/$luks_dev mklabel gpt mkpart primary 1MiB 100%
done
# Check that your list is good
for luks_dev in $LUKS_DEVS; do
echo will encrypt /dev/${luks_dev}1 and create /dev/mapper/luks-$(lsblk -n -o PARTUUID /dev/${luks_dev}1)
done
# Create the luks partitions
# --iter-time 10000 is how long, in milliseconds, unlocking the key should take
# Longer makes it harder to brute-force
# -v is verbose
# -q is "batch mode", don't ask for confirmation
for luks_dev in $LUKS_DEVS; do
LUKS_UUID=$(lsblk -n -o PARTUUID /dev/${luks_dev}1)
LUKS_NAME=luks-${LUKS_UUID}
echo "Encrypting /dev/${luks_dev}1"
cryptsetup luksFormat -v -q --iter-time 10000 --key-file ${KEYFILE_PATH} /dev/${luks_dev}1
echo "Unlocking /dev/${luks_dev}1 as ${LUKS_NAME}"
cryptsetup open /dev/${luks_dev}1 ${LUKS_NAME} --key-file=${KEYFILE_PATH}
echo "Adding ${LUKS_NAME} UUID=${LUKS_UUID} none discard to crypttab"
echo "${LUKS_NAME} UUID=${LUKS_UUID} none discard" >> /etc/crypttab
done
# List filesystems with UUID
lsblk --fs
# Now create the array using the /dev/mapper/luks-* entries created above
mkfs.btrfs --data raid1 --metadata raid1 /dev/mapper/luks-xxxx /dev/mapper/luks-yyyy...
btrfs device scan
```
## Monitoring Disk Health
```bash
# btrfs device stats shows any errors
# Grep for any line not ending in "0"
btrfs device stats /btrfs/pool0 | grep -vE ' 0$'
# Show the device IDs for the mounted filesystem
btrfs filesystem show /btrfs/pool0
# Delete a device (with ID 8, for example)
btrfs device delete 8 /btrfs/pool0
# Add a device to the array
btrfs device add /dev/vdi1 /btrfs/pool0
# Rebalance the array
btrfs balance start --background --full-balance /btrfs/pool0
# Check the status
btrfs balance status /btrfs/pool0
```
## Defragmenting and Compressing
```bash
# Defrag a filesystem
btrfs filesystem defragment /btrfs/pool0
# Defrag and apply compression
# zstd currently offers the best compression ratio
btrfs filesystem defragment -czstd /btrfs/pool0
```
## Converting ext4 to btrfs
```bash
# Unmount and then run btrfs-convert
umount /path/to/mount
btrfs-convert /dev/sdX1
``` ```

# etcd
- [etcd](#etcd)
- [Setup etcd Project](#setup-etcd-project)
- [Service Variables](#service-variables)
- [Install etcd](#install-etcd)
- [Fedora](#fedora)
- [Upgrade etcd](#upgrade-etcd)
- [Backup etcd](#backup-etcd)
- [Restore etcd](#restore-etcd)
- [Uninstall etcd](#uninstall-etcd)
## Setup etcd Project
1. Copy and rename this folder to active/software_etcd
2. Find and replace etcd with the name of the service
3. Write the etcd.service spec
4. (OPTIONAL) Write the etcd.timer spec
5. (OPTIONAL) Write the etcd.sh.j2 template
6. Write the install_etcd.yaml ansible template
7. Install the service via ansible
8. Expose the service
9. Install a backup service and timer
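Steps 1-2 above can be sketched in shell (the new service name `foo` and the source folder are hypothetical):

```shell
# Copy the etcd project as the starting point for a new "foo" service
cp -r active/software_etcd active/software_foo
# Rewrite every occurrence of "etcd" to "foo" in the copied files
grep -rl etcd active/software_foo | xargs sed -i 's/etcd/foo/g'
```

Note that this rewrites file contents only; files with etcd in their name (e.g. install_etcd.yaml) still need renaming.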
## Service Variables
1. For most vars, populate `vars.yaml`
2. For secret vars, create a new folder called `secrets/` and put a `vars.yaml` there.
## Install etcd
<https://etcd.io/docs/v3.6/install/>
### Fedora
```bash
sudo dnf install etcd
```
```bash
# Run the playbook
ansible-playbook \
-i ansible/inventory.yaml \
-l podman \
active/software_etcd/install_etcd.yaml \
-e "@active/software_etcd/vars.yaml" \
-e "@active/software_etcd/secrets/vars.yaml"
```
## Upgrade etcd
## Backup etcd
Follow the [Borg Backup instructions](/active/software_borg/borg.md#set-up-a-client-for-backup)
## Restore etcd
## Uninstall etcd

[Unit]
Description=Runs foobar
After=network-online.target
Wants=network-online.target
[Service]
Restart=no
Type=oneshot
ExecStart=/usr/local/script/foobar.sh
[Install]
WantedBy=multi-user.target

#!/bin/sh
echo "hello foobar!"

[Unit]
Description=Run Foobar every day
[Timer]
OnCalendar=*-*-* 2:00:00
AccuracySec=10min
Persistent=true
Unit=foobar.service
[Install]
WantedBy=timers.target

- name: Create Backup Service
  hosts: all
  vars_files:
    - secrets/vars.yaml
  tasks:
    - name: Create /usr/local/script dir
      ansible.builtin.file:
        path: /usr/local/script
        state: directory
        mode: '0755'
    - name: Copy foobar.service
      ansible.builtin.template:
        src: backup.service
        dest: /etc/systemd/system/foobar.service
        owner: root
        group: root
        mode: '0644'
    - name: Copy foobar.timer
      ansible.builtin.template:
        src: backup.timer
        dest: /etc/systemd/system/foobar.timer
        owner: root
        group: root
        mode: '0644'
    - name: Template foobar.sh
      ansible.builtin.template:
        src: backup.sh.j2
        dest: /usr/local/script/foobar.sh
        owner: root
        group: root
        mode: '0744'
    - name: Something foobar related that might fail
      ansible.builtin.command: echo 'hello foobar!'
      ignore_errors: true
    - name: Reload foobar.timer
      ansible.builtin.systemd_service:
        name: foobar.timer
        enabled: true
        daemon_reload: true

name: foobar

# Firewalld
```bash
# Set the default zone
firewall-cmd --set-default-zone=drop
# Set zone for a specific subnet
firewall-cmd --permanent --zone=drop --add-source=10.244.0.0/16
# Set zone for a specific interface
firewall-cmd --zone=drop --change-interface=
# Get info about a service
firewall-cmd --info-service=samba
```

# FreeIPA
- [FreeIPA](#freeipa)
- [Install](#install)
- [Backup and Restore](#backup-and-restore)
- [Add a Fedora Client](#add-a-fedora-client)
- [Notes](#notes)
- [Quickstart Debugging Setup](#quickstart-debugging-setup)
- [Quickstart Production Setup](#quickstart-production-setup)
This guide assumes Fedora 40+.
## Install
<https://www.freeipa.org/page/Quick_Start_Guide>
```bash
# Add firewall rules
firewall-cmd \
--permanent \
--add-service=http \
--add-service=https \
--add-service=ldap \
--add-service=ldaps
firewall-cmd --reload
# Download freeipa deps
dnf install freeipa-server
# Install and setup freeipa
# Make sure your directory password is less than 64 characters long.
# Any longer and you will get an OpenSSL error when reading /root/ca-agent.p12.
ipa-server-install
# Login as admin
kinit admin
# Add a user
ipa user-add
ipa passwd ducoterra
# Uninstall
ipa-server-install --uninstall
```
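Since the directory password must stay under 64 characters (per the note above), one way to generate a compliant random password is a coreutils one-liner (a sketch; adjust the length to taste):

```shell
# 48 random alphanumeric characters, safely below the 64-character limit
tr -dc 'A-Za-z0-9' </dev/urandom | head -c 48; echo
```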
## Backup and Restore
When restoring a backup the server must have the same:
1. Hostname
2. IP address
3. FreeIPA version
```bash
# Create a backup
ipa-backup
```
```bash
# Restore a backup
ipa-restore /path/to/backup
```
## Add a Fedora Client
```bash
sudo dnf install ipa-client sssd oddjob oddjob-mkhomedir adcli
sudo ipa-client-install --mkhomedir
```
## Notes
For the Yubikey Manager AppImage, extend the app-not-responding timeout to 60 seconds.

# LUKS
```bash
dnf install cryptsetup
# Create an encryption key
mkdir /etc/luks-keys
chmod 700 /etc/luks-keys
mkdir /etc/luks-backups
chmod 700 /etc/luks-backups
dd if=/dev/urandom bs=128 count=1 of=/etc/luks-keys/data0.key
# Create an encrypted partition
# -v means verbose
cryptsetup -v luksFormat /dev/nvme6n1p1 /etc/luks-keys/data0.key
# Unlock
cryptsetup -v luksOpen --key-file /etc/luks-keys/data0.key /dev/nvme6n1p1 luks-$(cryptsetup luksUUID /dev/nvme6n1p1)
# List keys
cryptsetup luksDump /dev/nvme6n1p1
```

# Openstack
## Compute Setup
```bash
export VM_NAME="openstack-test"
export VM_DISK_PATH=/var/lib/libvirt/images/openstack-test.qcow2
# Download the fedora server image
sudo curl \
-o $VM_DISK_PATH \
https://download.fedoraproject.org/pub/fedora/linux/releases/43/Cloud/x86_64/images/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2
# Install
virt-install \
--name "${VM_NAME}" \
--boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
--cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
--ram=8192 \
--os-variant=fedora41 \
--network bridge:virbr0 \
--graphics none \
--import --disk "path=${VM_DISK_PATH},bus=virtio" \
  --cloud-init disable=yes,user-data=active/software_virsh/cloud-init/user-data,meta-data=active/software_virsh/cloud-init/meta-data
```
SSH into the server
```bash
sudo dnf update
sudo dnf install mariadb-server
sudo systemctl enable --now mariadb
sudo mysql_secure_installation
sudo mysql -u root -p
# Note, I'm using 'keystone' as the password here. Change this.
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' \
IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'keystone';
exit
sudo dnf install openstack-keystone httpd uwsgi-plugin-python3
```

name = "fedora43-base"
description = "Fedora Base Installation"
version = "0.2.2"
distro = "fedora-43"
modules = []
groups = []
[customizations]
hostname = "fedora43-base"
[customizations.timezone]
timezone = "America/New_York"
[[customizations.disk.partitions]]
type = "plain"
mountpoint = "/"
fs_type = "ext4"
minsize = "128 GiB"
[customizations.firewall.services]
enabled = ["ssh", "cockpit"]
[customizations.services]
enabled = ["sshd", "cockpit.socket"]
[[packages]]
name = "bash-completion"
version = "*"
[[packages]]
name = "git"
version = "*"
[[packages]]
name = "cockpit"
version = "*"
[[packages]]
name = "borgbackup"
version = "*"
[[packages]]
name = "policycoreutils-python-utils"
version = "*"
[[customizations.files]]
path = "/root/.inputrc"
mode = "0644"
data = """
"\\C-h": backward-kill-word
"""
[[customizations.files]]
path = "/root/.ssh/authorized_keys"
mode = "0600"
user = "root"
group = "root"
data = """
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net
"""
[[customizations.user]]
name = "ducoterra"
password = "$6$QqOw6ktp6aiPy5kX$cpN.oar4CiofH0PpxyveJgkjsRFGnZ5ykOX/50DcJyU3hZFxc5R3SASemNW6m3jceLGgZrQHyALQl8SgtcNO90"
key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net"
home = "/home/ducoterra/"
shell = "/usr/bin/bash"
groups = ["wheel"]
uid = 1000
[[customizations.files]]
path = "/home/ducoterra/.inputrc"
mode = "0644"

```bash
composer-cli compose list finished
# Download the image
composer-cli compose image --filename active/software_osbuild/secrets/fedora43-base.qcow2 1884d6a3-0b13-40e1-980d-1941f9a4c4f9
# Test with qemu
virt-install \
```

- [Convert bootable drive to qcow](#convert-bootable-drive-to-qcow)
- [ISO](#iso)
- [ISO Access Denied on Boot](#iso-access-denied-on-boot)
- [Resize an image](#resize-an-image)
## QCOW2
## ISO Access Denied on Boot
Turn off Secure Boot in the VM: BIOS -> Device Manager -> Secure Boot Configuration -> uncheck Attempt Secure Boot
## Resize an image
```bash
# Add more disk space
qemu-img resize <my_vm>.raw +10G
# Grow the partition and filesystem. For ext4:
growpart /dev/vda 1
resize2fs /dev/vda1
```
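For a raw image, the grow step is equivalent to extending the file with coreutils `truncate` (a sketch; `my_vm.raw` is a placeholder and must already exist, and the partition/filesystem still need growing afterwards as shown above):

```shell
# Grow a raw disk image by 10 GiB without qemu-img
truncate -s +10G my_vm.raw
```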

- [Create SMB User](#create-smb-user)
- [Create a SMB Share](#create-a-smb-share)
- [Create a SMB Share with Many Users](#create-a-smb-share-with-many-users)
- [Mount SMB Share at Boot](#mount-smb-share-at-boot)
## Install SMB
## Create SMB User
```bash
# Add a linux user
sudo adduser --no-create-home --disabled-password --disabled-login sambauser
# Add the samba user with password
sudo smbpasswd -a sambauser
```
## Create a SMB Share
```bash
# Create share
mkdir /srv/smb/sambauser
# Set proper selinux labels for samba
sudo semanage fcontext --add --type "samba_share_t" "/srv/smb(/.*)?"
# Run restorecon at the root of the share path
sudo restorecon -R /srv
```
Edit /etc/samba/smb.conf
```conf
[smb_sambauser]
comment = My Share
path = /srv/smb/sambauser
writeable = yes
browseable = yes
public = no
```
```bash
sudo groupadd myfamily
sudo useradd -G myfamily jack
sudo useradd -G myfamily maria
sudo smbpasswd -a jack
sudo smbpasswd -a maria
```
Don't forget to restart smb
```bash
systemctl restart smb
```
## Mount SMB Share at Boot
Make sure you install cifs-utils: `dnf install cifs-utils`
Create a password file at `/etc/samba/credentials` with contents like
```text
username=USERNAME
password=PASSWORD
```
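Creating the credentials file can be scripted; this sketch uses placeholder values and restricts the file to root, since it holds a plaintext password:

```shell
# Write the credentials file with root-only permissions (USERNAME/PASSWORD are placeholders)
sudo sh -c 'umask 077; printf "username=USERNAME\npassword=PASSWORD\n" > /etc/samba/credentials'
sudo chmod 600 /etc/samba/credentials
```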
Then edit `/etc/fstab` to mount the share
```conf
# With login
//server_name/share_name /local_mount_point cifs _netdev,nofail,uid=1001,gid=1001,credentials=/etc/samba/credentials 0 0
# As guest
//server_name/share_name /local_mount_point cifs _netdev,nofail,guest 0 0
```

# Snapper
## Snapper Installation
We'll be using snapper, a tool for automating and controlling snapshot behavior.
```bash
dnf install snapper dnf-plugin-snapper
# Allow selinux management
semanage permissive -a snapperd_t
# Note, if you mess something up you can run snapper -c root delete-config to delete
# System configs are stored in /etc/sysconfig/snapper as well as /etc/snapper
snapper -c root create-config /
snapper -c data create-config /path/to/other/data
# Enable automatic snapshots
systemctl enable --now snapper-timeline.timer
# Enable automatic cleanup
systemctl enable --now snapper-cleanup.timer
# Enable snapshots on boot
systemctl enable --now snapper-boot.timer
```
## Snapper Cleanup
```bash
# List snapshots
snapper -c root list
# Create snapshot manually
snapper -c root create --description "test snapshot"
# Delete first snapshot
snapper -c root delete 1
# Delete snapshots between 655-857
snapper -c root delete 655-857
```
Note - you probably don't want to keep yearly snapshots.
Edit `/etc/snapper/configs/root` and change `TIMELINE_LIMIT_YEARLY=` to `0`.
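That edit can also be done non-interactively (a sketch; it assumes the default config path and an existing `TIMELINE_LIMIT_YEARLY` line):

```shell
# Set TIMELINE_LIMIT_YEARLY to "0" in the snapper root config
sudo sed -i 's/^TIMELINE_LIMIT_YEARLY=.*/TIMELINE_LIMIT_YEARLY="0"/' /etc/snapper/configs/root
```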

# Systemd
## Timers
Basic timer:
my_service.timer
```conf
[Unit]
Description=Run $my_service every hour
[Timer]
OnCalendar=hourly
AccuracySec=10min
Persistent=true
Unit=$my_service.service
[Install]
WantedBy=timers.target
```
my_service.service
```conf
[Unit]
Description=Runs some command
After=syslog.target network.target network-online.target auditd.service
Wants=network-online.target
[Service]
ExecStart=/usr/bin/command -with -arguments
[Install]
WantedBy=multi-user.target
```
Create an on-the-fly timer (will not survive reboots)
```bash
systemd-run --user --on-calendar '*:0/1' /bin/sh -c "date >> ~/log.txt"
```

# UV
## Notes
```bash
# Install a specific Python version
uv python install 3.14
# Initialize a package
uv init --package .
```

users:
lock_passwd: false
sudo: ALL=(ALL) NOPASSWD:ALL
ssh_authorized_keys:
- ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net


Virtual Machine Management
- [Create a Home Assistant VM](#create-a-home-assistant-vm)
- [Snapshots](#snapshots)
- [Creating and Attaching Disks](#creating-and-attaching-disks)
- [Disk Testing and Scripts](#disk-testing-and-scripts)
- [Virt Builder](#virt-builder)
## Before you Begin
See [qemu qcow2 snapshots](/active/software_qemu/qemu.md#qcow2-snapshots)
To create and attach one disk:
```bash
# QCOW
qemu-img create -f qcow2 /var/lib/libvirt/images/my-disk.qcow2 1G
# Raw (better performance, no snapshots or copy on write)
qemu-img create -f raw /var/lib/libvirt/images/my-disk.raw 1G
# Query disk info (like actual size on disk)
qemu-img info /var/lib/libvirt/images/my-disk.raw

virsh attach-disk ${VM_NAME} \
  --source /var/lib/libvirt/images/my-disk.qcow2 \
  --live
```
### Disk Testing and Scripts
To create and attach multiple disks (for raid testing)

```bash
script/run \
  --uri 'tcp://127.0.0.1:10400' \
  --threshold '0.8' \
  --preload-model 'jarvis_v2' \
  --debug-probability
```
## Install Wyoming
```bash
script/run \
  --uri 'tcp://0.0.0.0:10700' \
  --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw' \
  --snd-command 'aplay -r 22050 -c 1 -f S16_LE -t raw' \
  --awake-wav /root/wyoming-satellite/sounds/listening.wav \
  --done-wav /root/wyoming-satellite/sounds/finished.wav \
  --synthesize-command tee \
  --transcript-command tee \
  --wake-uri 'tcp://127.0.0.1:10400' \
  --wake-word-name 'hey dick head' \
  --wake-refractory-seconds 1
# Allow through firewall
```
