moving closer to podman systemd services for everything

2025-04-19 12:52:56 -04:00
parent 9acff25d43
commit 6e393d90ee
47 changed files with 1455 additions and 433 deletions

.gitignore

@@ -2,4 +2,5 @@ secrets/
venv/
tmp/
Unsorted/
volumes/
__pycache__/


@@ -13,6 +13,7 @@ Just here for the Arch distoolbox?
- [Homelab](#homelab)
- [Table of Contents](#table-of-contents)
- [Fun Facts](#fun-facts)
- [Important Dates and Times](#important-dates-and-times)
- [Project Lifecycle](#project-lifecycle)
- [Supported Projects](#supported-projects)
- [Graduation Requirements](#graduation-requirements)
@@ -25,6 +26,12 @@ On linux, <kbd>ctrl</kbd>+<kbd>shift</kbd>+<kbd>u</kbd>, then, while holding
<kbd>ctrl</kbd>+<kbd>shift</kbd>, typing <kbd>b</kbd>+<kbd>0</kbd> will type a ° (degree) symbol. Also you
can enter any unicode symbol this way.
## Important Dates and Times
- Machine updates happen at 4am on Saturday
- VM updates happen at 5am on Saturday
- Backups happen at 6am every day
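A quick way to confirm those schedules on a given machine is to list the systemd timers; a minimal sketch, assuming the updates and backups are driven by timer units:
```bash
# Show every timer with its last and next trigger time
systemctl list-timers --all
```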
## Project Lifecycle
Projects will fall into one of the three following categories:


@@ -1,4 +1,16 @@
fedora:
  hosts:
    3dserver:
    podman:
    kubernetes:
    docker:
    yellow:
    borg:
ubuntu:
  hosts:
    unifi-external:
caddy:
  hosts:
    3dserver:
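After editing the inventory, it's worth sanity-checking that the new groups resolve; a minimal sketch, assuming ansible is run from the repo root:
```bash
# List the hosts ansible resolves for the new caddy group
ansible -i ansible/inventory.yaml caddy --list-hosts
```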


@@ -2,7 +2,12 @@
Note: this requires the AWS CLI. See [AWS CLI](/cloud/graduated/aws_cli/aws_cli.md)
- [AWS Credentials](#aws-credentials)
- [Route53 Credential Generation](#route53-credential-generation)
- [AWS Certbot Route53 Policies](#aws-certbot-route53-policies)
- [Email Credentials](#email-credentials)
## Route53 Credential Generation
```bash
export AWS_USERNAME=
@@ -14,9 +19,6 @@ aws iam attach-user-policy --user-name $AWS_USERNAME --policy-arn $(cat cloud/gr
# Allow updating reeselink
aws iam attach-user-policy --user-name $AWS_USERNAME --policy-arn $(cat cloud/graduated/aws_iam/secrets/update-reeselink-iam-policy-arn)
# Allow sending emails
aws iam attach-user-policy --user-name $AWS_USERNAME --policy-arn $(cat cloud/graduated/aws_iam/secrets/ses-send-email-policy-arn)
# Create credentials (run aws configure on the machine that needs these to input them manually)
aws iam create-access-key --user-name $AWS_USERNAME
```
@@ -61,7 +63,17 @@ aws iam create-policy --policy-name update-reeselink --policy-document file://cl
# Allow updating route53 records for reeseapps.com
aws iam create-policy --policy-name update-reeseapps --policy-document file://cloud/graduated/aws_iam/secrets/route53_reeseapps_policy.json
# Allow sending emails
aws iam create-policy --policy-name send-email --policy-document file://cloud/graduated/aws_iam/secrets/ses_allow_send_policy.json
```
## Email Credentials
<https://docs.aws.amazon.com/ses/latest/dg/smtp-credentials.html>
You can technically do this through the CLI; see the link above.
1. Log into the AWS console
2. Navigate to SES
3. Click "SMTP Settings"
4. Click "Create SMTP Credentials"
5. Name it "ses-smtp-user.something"
6. Copy the username and password
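If you'd rather stay on the CLI, the SMTP password is just the IAM secret access key run through the SigV4 derivation described in the AWS doc linked above. A hedged sketch (not the console flow; assumes `openssl` and `xxd` are installed and SES is in us-east-1):
```bash
SECRET="your-iam-secret-access-key"  # placeholder
REGION="us-east-1"
# HMAC-SHA256 with a hex key, emitting hex on one line
hmac_hex() { printf '%s' "$2" | openssl dgst -sha256 -mac HMAC -macopt "hexkey:$1" -binary | xxd -p -c 256; }
KEY=$(printf '%s' "AWS4${SECRET}" | xxd -p -c 256)
# Chain per the AWS algorithm: date, region, service, terminal, message
for MSG in "11111111" "$REGION" "ses" "aws4_request" "SendRawEmail"; do
  KEY=$(hmac_hex "$KEY" "$MSG")
done
# Prepend the 0x04 version byte and base64-encode to get the SMTP password
printf '04%s' "$KEY" | xxd -r -p | base64
```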


@@ -1 +1 @@
# Framework Laptop 16


@@ -13,3 +13,36 @@ qmk compile -kb <keyboard> -km default
## udev rules
Make sure the udev rules are copied to your host machine if you're in a distrobox or toolbox.
## VIA or Keychron Launcher Support
In your rules.mk (at the closest level to your keymap) add:
```make
VIA_ENABLE = yes
```
Compile and flash.
## Macros
### Code Homelab
```js
// menu
{+KC_LGUI}{-KC_LGUI}{100}
// type "terminal"
{+KC_T}{-KC_T}{100}{+KC_E}{-KC_E}{100}{+KC_R}{-KC_R}{100}{+KC_M}{-KC_M}{100}{+KC_I}{-KC_I}{100}{+KC_N}{-KC_N}{100}{+KC_A}{-KC_A}{100}{+KC_L}{-KC_L}{100}
// open terminal
{+KC_ENT}{-KC_ENT}{1000}
// type "tbox"
{+KC_T}{-KC_T}{100}{+KC_B}{-KC_B}{100}{+KC_O}{-KC_O}{100}{+KC_X}{-KC_X}{100}
// enter
{+KC_ENT}{-KC_ENT}{5000}
// type "code Homelab"
{+KC_C}{-KC_C}{100}{+KC_O}{-KC_O}{100}{+KC_D}{-KC_D}{100}{+KC_E}{-KC_E}{100}{+KC_SPC}{-KC_SPC}{100}{+KC_LSFT}{+KC_H}{-KC_H}{-KC_LSFT}{100}{+KC_O}{-KC_O}{100}{+KC_M}{-KC_M}{100}{+KC_E}{-KC_E}{100}{+KC_L}{-KC_L}{100}{+KC_A}{-KC_A}{100}{+KC_B}{-KC_B}{100}
// enter
{+KC_ENT}{-KC_ENT}{1000}
// full screen
{+KC_LGUI}{+KC_PGUP}{-KC_PGUP}{-KC_LGUI}
```


@@ -2,7 +2,7 @@
- [Framework 16 Fixes](#framework-16-fixes)
- [Wake from Sleep](#wake-from-sleep)
- [Wrong keys pressed in the browser, keyboard occasionally freezes after clicking a field](#wrong-keys-pressed-in-the-browser-keyboard-occasionally-freezes-after-clicking-a-field)
## Wake from Sleep
@@ -28,6 +28,6 @@ If you want to specifically disable the framework keyboard you can use this:
ACTION=="add", SUBSYSTEM=="usb", ATTRS{idVendor}=="32ac", ATTRS{idProduct}=="0012", RUN+="/bin/sh -c 'echo disabled > /sys$env{DEVPATH}/power/wakeup'"
```
## Wrong keys pressed in the browser, keyboard occasionally freezes after clicking a field
This was caused by a Keychron M3 mini mouse.


@@ -20,8 +20,13 @@
- [QEMU Images](#qemu-images)
- [Firewalld](#firewalld)
- [Backups](#backups)
- [Connect to the ISCSI Backup Target](#connect-to-the-iscsi-backup-target)
- [Connect to Backup Target with Cockpit](#connect-to-backup-target-with-cockpit)
- [Connect to Backup Target with iscsiadm](#connect-to-backup-target-with-iscsiadm)
- [Format backup disk](#format-backup-disk)
- [Troubleshooting Backup ISCSI Connection](#troubleshooting-backup-iscsi-connection)
- [Quick Backup](#quick-backup)
- [Regular Backups with Borg](#regular-backups-with-borg)
- [Optional Steps](#optional-steps)
- [Docker with Podman as Runtime](#docker-with-podman-as-runtime)
- [Vanilla Docker](#vanilla-docker)
@@ -156,6 +161,17 @@ systemctl enable fail2ban --now
tail -f /var/log/fail2ban.log
```
Checking, banning, unbanning
```bash
# See banned clients
fail2ban-client banned
# See jails (sshd should be one of them)
fail2ban-client status
# Unban a client from the sshd jail
fail2ban-client set sshd unbanip <IP address>
```
## BTRFS Parent Volumes
In `/etc/fstab`, add the parent volumes for your disks mounted with subvolid=5 at `/btrfs` so you can see
@@ -414,44 +430,94 @@ firewall-cmd --reload
## Backups
Note: this assumes you've set up [an iscsi backup disk](/infrastructure/graduated/truenas/truenas.md#iscsi-backup-volumes)
### Connect to the ISCSI Backup Target
#### Connect to Backup Target with Cockpit
1. Storage -> Hamburger menu -> Add iSCSI portal
2. Type your portal address, username, and password
#### Connect to Backup Target with iscsiadm
<https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/5/html/online_storage_reconfiguration_guide/iscsi-replacements_timeout#iscsi-replacements_timeout>
```bash
# Set username and password for discovered nodes
# Optionally you can add "-T <target name>" to specify which target has the username/password
iscsiadm -m node \
-o update \
-n node.session.auth.username -v username \
-n node.session.auth.password -v password
# Set replacement_timeout to 10 minutes in case server reboots
iscsiadm -m node \
-o update \
-n node.session.timeo.replacement_timeout -v 600
systemctl restart iscsid
# Discover targets
iscsiadm -m discovery -t st -p driveripper.reeselink.com
# Login to all nodes
iscsiadm -m node -l
```
#### Format backup disk
```bash
# list disks
lsblk
# Create partition
fdisk /dev/sdx
# Format partition with btrfs
mkfs.btrfs /dev/sdx1
# Get the UUID
blkid /dev/sdx1
```
Update /etc/fstab with the iscsi disk details.
Note:
- `x-systemd.automount` mounts the device only when it's accessed
- `x-systemd.mount-timeout=30` allows a 30 second mount timeout
- `_netdev` ensures the device won't be mounted until the network is available
```conf
UUID=... /btrfs/some-name btrfs subvolid=5,compress=zstd:1,x-systemd.automount,x-systemd.mount-timeout=30,_netdev 0 0
```
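After editing fstab, a daemon-reload plus a simple access is enough to confirm the automount fires; a minimal sketch using the mount point above:
```bash
systemctl daemon-reload
# Touching the path triggers the automount
ls /btrfs/some-name
# Confirm it actually mounted
findmnt /btrfs/some-name
```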
#### Troubleshooting Backup ISCSI Connection
```bash
# List targets
iscsiadm -m node
# Delete node
iscsiadm -m node -o delete -T iqn.2022-01.com.reeselink:driveripper
# List discovered targets
iscsiadm -m discovery
# Delete from discovery db
iscsiadm -m discoverydb -t sendtargets -p driveripper.reeselink.com -o delete
```
### Quick Backup
```bash
rsync -av --progress --exclude '.snapshots' /btrfs/yellow/root /btrfs/backup-yellow --dry-run
```
### Regular Backups with Borg
See [borg.md](/systemd/graduated/borg/borg.md)
### Regular Backup to an NFS Share
1. Create a new dataset called "<server>_backup"
2. Remove "other" read/exec permissions from the dataset
3. Create a new NFS share for that dataset with maproot user and group set to root
4. Mount the NFS share to your server at `/backup`
5. Copy the following script into /root/backup.sh
```bash
#!/bin/bash
BACKUP_PATH="/backup"
EXCLUDE_DIR='{"/dev/*","/proc/*","/sys/*","/tmp/*","/run/*","/mnt/*","/media/*","/lost+found"}'
SOURCE_DIR="/"
rsync -aAXv ${SOURCE_DIR} --exclude=${EXCLUDE_DIR} ${BACKUP_PATH}
if [ $? -eq 0 ]; then
    echo "Backup completed successfully"
else
    echo "Some error occurred during backup"
fi
```
6. `chmod +x /root/backup.sh`
7. `crontab -e`
```cron
0 2 * * * bash /root/backup.sh >> /root/backup.log
```
## Optional Steps ## Optional Steps


@@ -4,6 +4,7 @@
- [Framework 16 Fixes](#framework-16-fixes)
- [Wake from Sleep](#wake-from-sleep)
- [Wrong keys pressed in the browser](#wrong-keys-pressed-in-the-browser)
- [Wifi Powersave](#wifi-powersave)
- [Podman](#podman)
- [Autostarting services with quadlets](#autostarting-services-with-quadlets)
- [Network](#network)
@@ -38,6 +39,30 @@ sudo udevadm control --reload-rules && sudo udevadm trigger
Sometimes keys will stop working when using search bars or do strange things like move the page around. This seems to be caused by some "alt" keypress combination. Pressing "alt" twice fixes it.
### Wifi Powersave
**NOTE: THIS DOESN'T WORK. IT CAUSES WIFI DISCONNECT AND RECONNECT ISSUES.**
<https://www.networkmanager.dev/docs/api/latest/settings-802-11-wireless.html>
<https://gist.github.com/jcberthon/ea8cfe278998968ba7c5a95344bc8b55>
<https://askubuntu.com/questions/1230525/ubuntu-20-04-network-performance-extremely-slow>
```bash
vim /etc/NetworkManager/conf.d/wifi-powersave-off.conf
```
```conf
[connection]
# Values are 0 (use default), 1 (ignore/don't touch), 2 (disable) or 3 (enable).
wifi.powersave = 2
```
```bash
systemctl restart NetworkManager
```
## Podman
Since you'll be using podman for most container-based services, you'll want to set the


@@ -2,19 +2,24 @@
- [Truenas](#truenas)
- [Bios settings](#bios-settings)
- [Datasets, Snapshots, and Encryption](#datasets-snapshots-and-encryption)
- [Periodic Snapshot Recommendations](#periodic-snapshot-recommendations)
- [Hourly Snapshots](#hourly-snapshots)
- [Daily Snapshots](#daily-snapshots)
- [Replication Tasks](#replication-tasks)
- [Source](#source)
- [Destination](#destination)
- [Manually Create Named Snapshots](#manually-create-named-snapshots)
- [Migrating encrypted pools](#migrating-encrypted-pools)
- [Migrating Properties](#migrating-properties)
- [Backup Task Settings](#backup-task-settings)
- [Create and Destroy zfs Datasets](#create-and-destroy-zfs-datasets)
- [Create and send snapshots](#create-and-send-snapshots)
- [Cleaning up old snapshots](#cleaning-up-old-snapshots)
- [Creating and restoring snapshots](#creating-and-restoring-snapshots)
- [Filesystem ACLs](#filesystem-acls)
- [ISCSI Backup Volumes](#iscsi-backup-volumes)
- [Create Backup ZVOL](#create-backup-zvol)
- [Create Backup ISCSI Target](#create-backup-iscsi-target)
- [VMs](#vms)
- [Converting zvol to qcow2](#converting-zvol-to-qcow2)
- [Converting qcow2 to zvol](#converting-qcow2-to-zvol)
@@ -37,7 +42,11 @@
- [ZFS Size Data](#zfs-size-data)
- [ZFS Rename](#zfs-rename)
- [ISCSI](#iscsi)
- [ISCSI Base Name](#iscsi-base-name)
- [Archiving](#archiving)
- [Deleting snapshots](#deleting-snapshots)
- [But First, ZFS on RPi](#but-first-zfs-on-rpi)
- [Pi Setup](#pi-setup)
## Bios settings
@@ -48,120 +57,65 @@ You can check the bios version with `dmidecode -t bios -q`
2. Turn off boosting
3. Enable XMP
## Datasets, Snapshots, and Encryption
### Periodic Snapshot Recommendations
#### Hourly Snapshots
- Lifetime: `1 day`
- Naming Schema: `hourly-%Y-%m-%d_%H-%M`
- Schedule: `Hourly`
- Begin: `00:00:00`
- End: `23:59:00`
- Disallow taking empty snapshots
- Enabled
- Recursive
Assuming 100 datasets: 100 datasets x 24 hours = 2400 snapshots
Disallowing empty snapshots will help keep that number down.
#### Daily Snapshots
- Lifetime: `1 week`
- Naming Schema: `daily-%Y-%m-%d_%H-%M`
- Schedule: `Daily`
- Allow taking empty snapshots
- Enabled
- Recursive
Assuming 100 datasets: 100 datasets x 7 days = 700 snapshots
### Replication Tasks
Before configuring, create a dataset that you'll be replicating to.
Use advanced settings.
- Transport `LOCAL`
#### Source
- Recursive
- Include Dataset Properties
- Periodic Snapshot Tasks: Select your `daily` task
- Run automatically
#### Destination
- Read-only Policy: `SET`
- Snapshot Retention Policy: `Custom`
- Lifetime: `1 month`
- Naming Schema: `daily-%Y-%m-%d_%H-%M`
Assuming 100 datasets: 100 datasets x 30 days = 3000 snapshots
#### Manually Create Named Snapshots
1. Datasets -> Select dataset -> Create Snapshot -> Naming Schema (daily)
2. Start replication from Data Protection
### Migrating encrypted pools
Since you can't use `-R` to send encrypted datasets recursively you'll need to use more creative tactics. Here's my recommendation:
@@ -201,16 +155,6 @@ If you need to migrate your dataset comments you can use the following bash to a
for i in $(zfs list -H -d 1 -o name backup/nvme/k3os-private); do read -r name desc < <(zfs list -H -o name,org.freenas:description $i) && pvc=$(echo "$name" | awk -F "/" '{print $NF}') && zfs set org.freenas:description=$desc enc1/k3os-private/$pvc; done
```
### Backup Task Settings
| Key | Value |
| ------------------------------------ | --------------------- |
| Destination Dataset Read-only Policy | SET |
| Recursive | true |
| Snapshot Retention Policy | Same as Source |
| Include Dataset Properties | true |
| Periodic Snapshot Tasks | <daily-snapshot-task> |
### Create and Destroy zfs Datasets
```bash
@@ -269,7 +213,7 @@ zfs list -t snap -r $POOL | wc -l
zfs list -t snap -r -H -o name $POOL > $SNAPSHOTS_FILE
# Check the file
cat $SNAPSHOTS_FILE | less
# Dry run
for SNAPSHOT in $(cat $SNAPSHOTS_FILE); do echo "zfs destroy -v $SNAPSHOT"; done | less
@@ -303,6 +247,35 @@ Dataset -> Dataset details (edit) -> Advanced Options -> ACL Type (inherit)
setfacl -b -R /mnt/enc0/smb/media
```
## ISCSI Backup Volumes
### Create Backup ZVOL
1. Create a new dataset called "iscsi" and then a dataset under that called "backups"
   1. Set sync to always
   2. Disable compression
   3. Enable Sparse
2. Create a new dataset under backups with the same name as your server hostname
3. Set the size to something reasonable (Note you may need to "force size")
### Create Backup ISCSI Target
1. In System -> Services -> ISCSI set the Base Name following [these rules](#iscsi-base-name)
2. In Shared -> ISCSI -> Authorized Access -> Create a new authorized access
   1. Group ID arbitrary - just pick a number you haven't used
   2. User: The connecting machine's ISCSI Base Name
   3. Secret: A 16 character password with no special characters
3. Wizard -> Create New
   1. Extent Name: `backup-<hostname>`
   2. Extent Type: `Device`
   3. Extent Device: The ZVOL you just created
   4. Extent Sharing Platform: `Modern OS`
   5. Protocol Options Portal: Either create new (0.0.0.0 and ::) or select your existing portal
   6. Protocol Options Initiators: The base name of the connecting machine following [these rules](#iscsi-base-name)
4. Targets -> Select the backup-<hostname> target -> Edit
   1. Authentication Method: `CHAP`
   2. Authentication Group Number: The group number you created above
## VMs
1. Force UEFI installation
@@ -568,10 +541,123 @@ zfs rename enc0/something enc0/something_else
## ISCSI
### ISCSI Base Name
<https://datatracker.ietf.org/doc/html/rfc3721.html#section-1.1>
| iqn | . | year-month of domain registration | . | reversed domain | : | unique string |
iqn.2022-01.com.reeselink:driveripper
## Archiving
1. Create a recursive snapshot called "archive_pool_year_month_day"
2. Create a replication task called "archive_pool_year_month_day"
- select all datasets you want to backup
- fill in enc0/archives/archive-year-month-day_hour-minute
- full filesystem replication
- select "Matching naming schema"
- Use `archive-%Y-%m-%d_%H-%M`
- Deselect run automatically
- Save and run
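For the snapshot in step 1, a hedged command-line equivalent (assuming the pool is `enc0` and using this commit's date as an example):
```bash
# Recursive snapshot of every dataset in the pool, named per the convention above
zfs snapshot -r enc0@archive_enc0_2025_04_19
```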
## Deleting snapshots
Sometimes you need to delete many snapshots from a certain dataset. The UI is terrible for this, so
we need to use `zfs destroy`. xargs is the best way to do this since it allows parallel processing.
```bash
# zfs list snapshots with:
# -o name: only print the name
# -S creation: sort by creation time
# -H: don't display headers
# -r: recurse through every child dataset
zfs list -t snapshot enc0/archives -o name -S creation -H -r
# pipe it through xargs with:
# -n 1: take only 1 argument from the pipe per command
# -P 8: eight parallel processes
# Also pass to zfs destroy:
# -v: verbose
# -n: dryrun
zfs list -t snapshot enc0/archives -o name -S creation -H -r | xargs -n 1 -P 8 zfs destroy -v -n
# if that looks good you can remove the "-n"
zfs list -t snapshot enc0/archives -o name -S creation -H -r | xargs -n 1 -P 8 zfs destroy -v
```
## But First, ZFS on RPi
A really good backup server is an RPi running openzfs. See [the openzfs docs](https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS%20for%20Raspberry%20Pi.html#step-2-setup-zfs) for more info.
### Pi Setup
Add the vault ssh CA key to your pi.
```bash
curl -o /etc/ssh/trusted-user-ca-keys.pem https://vault.ducoterra.net/v1/ssh-client-signer/public_key
echo "TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem" >> /etc/ssh/sshd_config
service ssh restart
```
Create a pi user.
```bash
adduser pi
usermod -a -G sudo pi
```
SSH to the pi as the "pi" user. Delete the ubuntu user.
```bash
killall -u ubuntu
userdel -r ubuntu
```
Disable SSH password authentication
```bash
sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
service ssh restart
```
Change the hostname.
```bash
echo pi-nas > /etc/hostname
```
Upgrade and restart the pi.
```bash
apt update && apt upgrade -y && apt autoremove -y
reboot
```
Install ZFS.
```bash
apt install -y pv zfs-initramfs
```
Find the disks you want to use to create your pool
```bash
fdisk -l
```
Create a pool.
```bash
mkdir -p /mnt/backup
zpool create \
-o ashift=12 \
-O acltype=posixacl -O canmount=off -O compression=lz4 \
-O dnodesize=auto -O normalization=formD -O relatime=on \
-O xattr=sa -O mountpoint=/mnt/backup \
backup ${DISK}
```


@@ -117,31 +117,7 @@ On the server:
```bash
# Install glances for system monitoring
apt install -y glances net-tools vim tmux
# Install zsh with autocomplete and suggestions
apt install -y zsh zsh-autosuggestions zsh-syntax-highlighting
cat <<EOF > ~/.zshrc
# Basic settings
autoload bashcompinit && bashcompinit
autoload -U compinit; compinit
zstyle ':completion:*' menu select
# Prompt settings
autoload -Uz promptinit
promptinit
prompt redhat
PROMPT_EOL_MARK=
# Syntax Highlighting
source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
source /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh
### Custom Commands and Aliases ###
EOF
chsh -s $(which zsh) && chsh -s $(which zsh) ducoterra
# Cockpit
apt install -y cockpit
@@ -163,3 +139,7 @@ To disable this, You can create a blank file with the same name in /etc:
sudo touch /etc/NetworkManager/conf.d/10-globally-managed-devices.conf
sudo systemctl restart NetworkManager
Then run `nmcli device set <device-name> managed yes`.
Finally, turn on "Connect Automatically" in Cockpit.
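To confirm NetworkManager picked the device up, a quick status check helps; a minimal sketch:
```bash
# The device should now show as connected/managed rather than unmanaged
nmcli device status
```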


@@ -0,0 +1,18 @@
[Unit]
Description=Caddy
[Container]
AddCapability=NET_ADMIN
ContainerName=caddy
Image=docker.io/caddy:2
Network=host
SecurityLabelDisable=true
Volume=/etc/caddy:/etc/caddy
Volume=caddy_data:/data
Volume=caddy_config:/config
[Service]
Restart=always
[Install]
WantedBy=default.target
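Quadlet units like this are generated into regular systemd services at daemon-reload time. A sketch for checking and starting the generated unit (the quadlet binary path can vary by distro):
```bash
# Print what quadlet would generate from /etc/containers/systemd
/usr/libexec/podman/quadlet -dryrun
systemctl daemon-reload
systemctl start caddy.service
```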


@@ -1,7 +1,54 @@
# Caddy Reverse Proxy
- [Caddy Reverse Proxy](#caddy-reverse-proxy)
- [DNS Records](#dns-records)
- [Install Caddy](#install-caddy)
- [Ansible](#ansible)
- [Manual](#manual)
## DNS Records
Before you can create a Caddyfile you need records that point to your server.
You can either create them manually in your DNS provider of choice or use the provided
ddns service:
1. Update the [ddns caddy records](/podman/graduated/ddns/secrets/caddy_records.yaml)
2. Run the [caddy ansible playbook](/podman/graduated/ddns/ddns.md#ansible-caddy-records)
## Install Caddy ## Install Caddy
### Ansible
You'll need a secrets/Caddyfile with your caddy config.
`secrets/Caddyfile` example:
```conf
https://something.reeseapps.com:443 {
    reverse_proxy internal.reeselink.com:8000
}
https://something-else.reeseapps.com:443 {
    reverse_proxy internal-other.reeselink.com:8080
}
```
The playbook limits the installer to `hosts: caddy` so make sure you have a caddy
host in your inventory.
Now you can install the Caddy service with something like:
```bash
ansible-playbook \
-i ansible/inventory.yaml \
podman/graduated/caddy/install_caddy.yaml
```
See ansible playbook [install_caddy.yaml](/podman/graduated/caddy/install_caddy.yaml)
### Manual
As root
```bash
@@ -12,24 +59,18 @@ vim /etc/caddy/Caddyfile
Caddy will automatically provision certificates if the server DNS points to the correct IP
and is accessible on the ports specified. All you need to do is put `https` in the caddy conf.
Example:
```conf
# Nextcloud
https://nextcloud.reeseapps.com:443 {
    reverse_proxy podman.reeselink.com:11000
}
https://nextcloud.reeseapps.com:8443 {
    reverse_proxy podman.reeselink.com:11001 {
        transport http {
            tls_insecure_skip_verify
        }
    }
}
# Gitea
https://gitea.reeseapps.com:443 {
    reverse_proxy podman.reeselink.com:3000
}
# Jellyfin
https://jellyfin.reeseapps.com:443 {
    reverse_proxy podman.reeselink.com:8096
}
```
```bash
@@ -59,5 +100,5 @@ WantedBy=default.target
```bash
systemctl daemon-reload
systemctl restart caddy
```


@@ -0,0 +1,28 @@
- name: Create Caddy Service
  hosts: caddy
  tasks:
    - name: Create /etc/caddy dir
      ansible.builtin.file:
        path: /etc/caddy
        state: directory
        mode: '0755'
    - name: Copy Caddyfile
      template:
        src: secrets/Caddyfile
        dest: /etc/caddy/Caddyfile
        owner: root
        group: root
        mode: '0644'
    - name: Template Caddy Container Services
      template:
        src: caddy.container
        dest: /etc/containers/systemd/caddy.container
        owner: root
        group: root
        mode: '0644'
    - name: Reload and start the Caddy service
      ansible.builtin.systemd_service:
        state: restarted
        name: caddy.service
        daemon_reload: true
      # Note: quadlet-generated units can't be enabled with systemctl; startup
      # comes from the [Install] section in caddy.container


@@ -0,0 +1 @@
3.13


@@ -0,0 +1,7 @@
{
"python.testing.pytestArgs": [
"."
],
"python.testing.unittestEnabled": false,
"python.testing.pytestEnabled": true
}


@@ -0,0 +1,23 @@
FROM python:3.12-slim-bookworm
# The installer requires curl (and certificates) to download the release archive
RUN apt-get update && apt-get install -y --no-install-recommends curl ca-certificates
# Download the latest installer
ADD https://astral.sh/uv/install.sh /uv-installer.sh
# Run the installer then remove it
RUN sh /uv-installer.sh && rm /uv-installer.sh
# Ensure the installed binary is on the `PATH`
ENV PATH="/root/.local/bin/:$PATH"
# Copy the project into the image
ADD update.py uv.lock pyproject.toml /app/
# Sync the project into a new environment, using the frozen lockfile
WORKDIR /app
RUN uv sync --frozen
# Run the update script in the project environment
CMD ["uv", "run", "update.py"]


@@ -0,0 +1,21 @@
[Unit]
Description=DDNS
After=network-online.target
Wants=network-online.target
[Container]
Environment=ROUTE53_RECORD={{ item.record }}
Environment=HOSTED_ZONE_ID={{ item.hosted_zone_id }}
Environment=AWS_ACCESS_KEY_ID={{ aws.access_key_id }}
Environment=AWS_SECRET_ACCESS_KEY={{ aws.secret_access_key }}
{% if item.skip_ipv6 | default(false) %}
Environment=SKIP_IPV6=true
{% endif %}
{% if item.skip_ipv4 | default(false) %}
Environment=SKIP_IPV4=true
{% endif %}
Image=gitea.reeseapps.com/services/ddns:latest
Network=ddns.network
[Install]
WantedBy=default.target


@@ -0,0 +1,130 @@
# DDNS for Route53
- [DDNS for Route53](#ddns-for-route53)
- [Install](#install)
- [As a Systemd Service](#as-a-systemd-service)
- [Ansible Caddy Records](#ansible-caddy-records)
- [Ansible Git Record](#ansible-git-record)
- [Ansible Unifi External Records](#ansible-unifi-external-records)
- [Ansible Hostname reeselink records](#ansible-hostname-reeselink-records)
- [Development](#development)
- [Testing](#testing)
- [Building Container Image](#building-container-image)
This service will automatically keep ipv4 and ipv6 records updated in AWS Route53.
**NOTE**: This requires the aws cli to be installed on each node with
credentials that can modify records in route53. See
[aws_iam](/cloud/graduated/aws_iam/aws_iam.md) and
[aws_cli](/cloud/graduated/aws_cli/aws_cli.md)
## Install
### As a Systemd Service
You need two files:
1. secrets/vars.yaml (with aws credentials)
2. secrets/records.yaml (with AWS records)
`secrets/vars.yaml` example:
```yaml
aws:
access_key_id: key_here
secret_access_key: secret_here
```
`secrets/records.yaml` example:
```yaml
records:
- record: some.domain.com
hosted_zone_id: ABC123456789
- record: someother.domain.com
hosted_zone_id: ABC123456789
```
Then you can install the ddns service with something like
```bash
# -i specifies your inventory
# -l limits the run to a particular host
# -e brings in our secrets/records.yaml
ansible-playbook \
-i ansible/inventory.yaml \
-l 3dserver \
podman/graduated/ddns/install_ddns.yaml \
-e "@podman/graduated/ddns/secrets/records.yaml"
```
See ansible playbook [install_ddns.yaml](/podman/graduated/ddns/install_ddns.yaml)
#### Ansible Caddy Records
```bash
ansible-playbook \
-i ansible/inventory.yaml \
-l 3dserver \
podman/graduated/ddns/install_ddns.yaml \
-e "@podman/graduated/ddns/secrets/caddy_records.yaml"
```
#### Ansible Git Record
```bash
ansible-playbook \
-i ansible/inventory.yaml \
-l podman \
podman/graduated/ddns/install_ddns.yaml \
-e "@podman/graduated/ddns/secrets/git_record.yaml"
```
#### Ansible Unifi External Records
```bash
ansible-playbook \
-i ansible/inventory.yaml \
-l unifi-external \
podman/graduated/ddns/install_ddns.yaml \
-e "@podman/graduated/ddns/secrets/unifi_external_record.yaml"
```
#### Ansible Hostname reeselink records
```bash
export PLAYBOOK_PATH=podman/graduated/ddns
ansible-playbook \
-i ansible/inventory.yaml \
${PLAYBOOK_PATH}/install_ddns.yaml \
-e "@${PLAYBOOK_PATH}/secrets/hostname_reeselink_record.yaml"
```
## Development
### Testing
```bash
export ROUTE53_RECORD=test-ddns.reeseapps.com
export HOSTED_ZONE_ID=$(cat secrets/secret_vars.yaml | yq -r '.reeseapps_zone_id')
uv run update.py
```
### Building Container Image
```bash
# Build
podman build -t gitea.reeseapps.com/services/ddns:latest -f ./Containerfile
podman push gitea.reeseapps.com/services/ddns:latest
# Run
export ROUTE53_RECORD=test-ddns.reeseapps.com
export HOSTED_ZONE_ID=$(cat secrets/secret_vars.yaml | yq -r '.reeseapps_zone_id')
podman run \
-e ROUTE53_RECORD=$ROUTE53_RECORD \
-e HOSTED_ZONE_ID=$HOSTED_ZONE_ID \
-e AWS_PROFILE=prod \
-v $HOME/.aws:/root/.aws:Z \
-it --rm \
gitea.reeseapps.com/services/ddns:latest
```


@@ -0,0 +1,8 @@
[Unit]
Description=DDNS
[Network]
IPv6=true
[Install]
WantedBy=default.target


@@ -1,11 +1,11 @@
[Unit]
Description=Run ddns.{{ item.record }}.service every hour
[Timer]
OnCalendar=hourly
AccuracySec=10min
Persistent=true
Unit=ddns.{{ item.record }}.service
[Install]
WantedBy=timers.target


@@ -0,0 +1,45 @@
- name: Create DDNS Service
  hosts: all
  vars_files:
    - secrets/vars.yaml
  tasks:
    - name: Create /etc/ddns dir
      ansible.builtin.file:
        path: /etc/ddns
        state: directory
        mode: '0755'
    - name: Copy ddns.network
      template:
        src: ddns.network
        dest: /etc/containers/systemd/ddns.network
        owner: root
        group: root
        mode: '0644'
    - name: Template DDNS Container Services
      template:
        src: ddns.container
        dest: /etc/containers/systemd/ddns.{{ item.record }}.container
        owner: root
        group: root
        mode: '0644'
      loop: "{{ records }}"
    - name: Template DDNS Container Timers
      template:
        src: ddns.timer
        dest: /etc/systemd/system/ddns.{{ item.record }}.timer
        owner: root
        group: root
        mode: '0644'
      loop: "{{ records }}"
    - name: Reload ddns timer
      ansible.builtin.systemd_service:
        state: restarted
        name: ddns.{{ item.record }}.timer
        enabled: true
        daemon_reload: true
      loop: "{{ records }}"
    - name: Run ddns service
      ansible.builtin.systemd_service:
        state: restarted
        name: ddns.{{ item.record }}.service
      loop: "{{ records }}"
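After the playbook runs, the per-record units can be inspected on the host; a sketch using a hypothetical record `some.domain.com`:
```bash
# Each record gets its own timer and service
systemctl list-timers 'ddns.*'
journalctl -u ddns.some.domain.com.service -n 20
```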


@@ -0,0 +1,10 @@
[project]
name = "ddns"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.10"
dependencies = [
    "boto3>=1.37.30",
    "pytest>=8.3.5",
]


@@ -0,0 +1,32 @@
import re
from update import get_ipv4, get_ipv6

regex_match_ipv4 = (
    r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$"
)
regex_match_ipv6 = (
    r"(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:)"
    r"{1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:)"
    r"{1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]"
    r"{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:"
    r"[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4})"
    r"{0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9])"
    r"{0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}"
    r"(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))"
)


def test_get_ipv4():
    ip = get_ipv4()
    assert re.match(
        regex_match_ipv4,
        ip
    )


def test_get_ipv6():
    ip = get_ipv6()
    assert re.match(
        regex_match_ipv6,
        ip
    )
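With pytest in the project dependencies and the VS Code settings above, the suite also runs from the CLI; a minimal sketch, assuming uv (note the tests hit ifconfig.me live):
```bash
uv run pytest
```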


@@ -0,0 +1,127 @@
"""
export HOSTED_ZONE_ID=<aws hosted zone ID>
export ROUTE53_RECORD=something.mydomain.com
"""
import boto3
import os
import logging
import subprocess
logging.basicConfig(
level=logging.INFO,
format='%(asctime)s.%(msecs)03d %(levelname)s %(module)s - %(funcName)s: %(message)s',
datefmt='%Y-%m-%d %H:%M:%S',
)
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
HOSTED_ZONE_ID = os.getenv("HOSTED_ZONE_ID")
ROUTE53_RECORD = os.getenv("ROUTE53_RECORD")
SKIP_IPV4 = os.getenv("SKIP_IPV4", "false").lower() == "true"
SKIP_IPV6 = os.getenv("SKIP_IPV6", "false").lower() == "true"
def get_ipv4() -> str:
result = subprocess.run(["curl", "-4", "ifconfig.me"], capture_output=True)
return result.stdout.decode()
def get_ipv6() -> str:
result = subprocess.run(["curl", "-6", "ifconfig.me"], capture_output=True)
return result.stdout.decode()
def update_ipv4(hosted_zone_id: str, record: str, public_ipv4: str):
client = boto3.client("route53")
try:
logger.info("Calling upsert for ipv4.")
client.change_resource_record_sets(
HostedZoneId=hosted_zone_id,
ChangeBatch={
"Comment": "Update Public Addresses",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": f"{record}",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": public_ipv4
}
]
}
}
]
}
)
logger.info(f"Successfully updated ipv4 for {record}")
except Exception as e:
logger.error(f"Error updating ipv4 for {record}.")
raise e
def update_ipv6(hosted_zone_id: str, record: str, public_ipv6: str):
client = boto3.client("route53")
try:
logger.info("Calling upsert for ipv6.")
client.change_resource_record_sets(
HostedZoneId=hosted_zone_id,
ChangeBatch={
"Comment": "Update Public Addresses",
"Changes": [
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": f"{record}",
"Type": "AAAA",
"TTL": 300,
"ResourceRecords": [
{
"Value": public_ipv6
}
]
}
}
]
}
)
logger.info(f"Successfully updated ipv6 for {record}")
except Exception as e:
logger.error(f"Error updating ipv6 for {record}.")
raise e
def main():
if not HOSTED_ZONE_ID:
logger.error("HOSTED_ZONE_ID env var not found!")
exit(1)
if not ROUTE53_RECORD:
logger.error("ROUTE53_RECORD env var not found!")
exit(1)
logger.info(f"Attempting to update {ROUTE53_RECORD} from {HOSTED_ZONE_ID}.")
if SKIP_IPV4:
logger.warning("Skipping IPv4.")
else:
logger.info("Getting IPv4 address from ifconfig.me")
public_ipv4 = get_ipv4()
if not public_ipv4:
logger.error("Public IPv4 not found.")
exit(1)
logger.info(f"Public IPv4 is {public_ipv4}")
update_ipv4(hosted_zone_id=HOSTED_ZONE_ID, record=ROUTE53_RECORD, public_ipv4=public_ipv4)
if SKIP_IPV6:
logger.warning("Skipping IPv6")
else:
logger.info("Getting IPv6 address from ifconfig.me")
public_ipv6 = get_ipv6()
if not public_ipv6:
logger.error("Public IPv6 not found.")
exit(1)
logger.info(f"Public IPv6 is {public_ipv6}")
update_ipv6(hosted_zone_id=HOSTED_ZONE_ID, record=ROUTE53_RECORD, public_ipv6=public_ipv6)
if __name__ == "__main__":
main()

podman/graduated/ddns/uv.lock (generated)

@@ -0,0 +1,197 @@
version = 1
requires-python = ">=3.10"
[[package]]
name = "boto3"
version = "1.37.30"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "botocore" },
{ name = "jmespath" },
{ name = "s3transfer" },
]
sdist = { url = "https://files.pythonhosted.org/packages/f0/4b/fc19e5b08f5871ebc92a5bb8919133245bf63ab4e985b91e00ff78cd0905/boto3-1.37.30.tar.gz", hash = "sha256:beea13db5a5f5eaacecfa905cd1e4e933c13802f776198264eef229d6dffcc42", size = 111380 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/28/3a/072a196ee038e859a6bdd52e374533c722e8b943c9a52c673dab70976d98/boto3-1.37.30-py3-none-any.whl", hash = "sha256:c75d78013eb43b354662cbd5f30bf537ab06641d3ed37aaad6fcf55a529d2991", size = 139560 },
]
[[package]]
name = "botocore"
version = "1.37.30"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "jmespath" },
{ name = "python-dateutil" },
{ name = "urllib3" },
]
sdist = { url = "https://files.pythonhosted.org/packages/49/e7/29af47eb173faaeef3daabcc3e94bd8b6c1d87e1ba8eef1c6a18827b9cee/botocore-1.37.30.tar.gz", hash = "sha256:2f43b61e0231abbb4fbe8917acb1af98cb83dbab8c264c0d1f5ca0f16fdbf219", size = 13810655 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d9/85/cef0fdbd17f09cddc97c6b3182b099e2583ca77caec76f8a09f76794266e/botocore-1.37.30-py3-none-any.whl", hash = "sha256:d8ca899962d2079acd52483581f607322513910337a69bdae697766404b85b7d", size = 13476760 },
]
[[package]]
name = "colorama"
version = "0.4.6"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335 },
]
[[package]]
name = "ddns"
version = "0.1.0"
source = { virtual = "." }
dependencies = [
{ name = "boto3" },
{ name = "pytest" },
]
[package.metadata]
requires-dist = [
{ name = "boto3", specifier = ">=1.37.30" },
{ name = "pytest", specifier = ">=8.3.5" },
]
[[package]]
name = "exceptiongroup"
version = "1.2.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/09/35/2495c4ac46b980e4ca1f6ad6db102322ef3ad2410b79fdde159a4b0f3b92/exceptiongroup-1.2.2.tar.gz", hash = "sha256:47c2edf7c6738fafb49fd34290706d1a1a2f4d1c6df275526b62cbb4aa5393cc", size = 28883 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/02/cc/b7e31358aac6ed1ef2bb790a9746ac2c69bcb3c8588b41616914eb106eaf/exceptiongroup-1.2.2-py3-none-any.whl", hash = "sha256:3111b9d131c238bec2f8f516e123e14ba243563fb135d3fe885990585aa7795b", size = 16453 },
]
[[package]]
name = "iniconfig"
version = "2.1.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f2/97/ebf4da567aa6827c909642694d71c9fcf53e5b504f2d96afea02718862f3/iniconfig-2.1.0.tar.gz", hash = "sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7", size = 4793 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/e1/e6716421ea10d38022b952c159d5161ca1193197fb744506875fbb87ea7b/iniconfig-2.1.0-py3-none-any.whl", hash = "sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760", size = 6050 },
]
[[package]]
name = "jmespath"
version = "1.0.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/00/2a/e867e8531cf3e36b41201936b7fa7ba7b5702dbef42922193f05c8976cd6/jmespath-1.0.1.tar.gz", hash = "sha256:90261b206d6defd58fdd5e85f478bf633a2901798906be2ad389150c5c60edbe", size = 25843 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/31/b4/b9b800c45527aadd64d5b442f9b932b00648617eb5d63d2c7a6587b7cafc/jmespath-1.0.1-py3-none-any.whl", hash = "sha256:02e2e4cc71b5bcab88332eebf907519190dd9e6e82107fa7f83b1003a6252980", size = 20256 },
]
[[package]]
name = "packaging"
version = "24.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d0/63/68dbb6eb2de9cb10ee4c9c14a0148804425e13c4fb20d61cce69f53106da/packaging-24.2.tar.gz", hash = "sha256:c228a6dc5e932d346bc5739379109d49e8853dd8223571c7c5b55260edc0b97f", size = 163950 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl", hash = "sha256:09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759", size = 65451 },
]
[[package]]
name = "pluggy"
version = "1.5.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/96/2d/02d4312c973c6050a18b314a5ad0b3210edb65a906f868e31c111dede4a6/pluggy-1.5.0.tar.gz", hash = "sha256:2cffa88e94fdc978c4c574f15f9e59b7f4201d439195c3715ca9e2486f1d0cf1", size = 67955 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/88/5f/e351af9a41f866ac3f1fac4ca0613908d9a41741cfcf2228f4ad853b697d/pluggy-1.5.0-py3-none-any.whl", hash = "sha256:44e1ad92c8ca002de6377e165f3e0f1be63266ab4d554740532335b9d75ea669", size = 20556 },
]
[[package]]
name = "pytest"
version = "8.3.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ae/3c/c9d525a414d506893f0cd8a8d0de7706446213181570cdbd766691164e40/pytest-8.3.5.tar.gz", hash = "sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845", size = 1450891 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/30/3d/64ad57c803f1fa1e963a7946b6e0fea4a70df53c1a7fed304586539c2bac/pytest-8.3.5-py3-none-any.whl", hash = "sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820", size = 343634 },
]
[[package]]
name = "python-dateutil"
version = "2.9.0.post0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/66/c0/0c8b6ad9f17a802ee498c46e004a0eb49bc148f2fd230864601a86dcf6db/python-dateutil-2.9.0.post0.tar.gz", hash = "sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3", size = 342432 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ec/57/56b9bcc3c9c6a792fcbaf139543cee77261f3651ca9da0c93f5c1221264b/python_dateutil-2.9.0.post0-py2.py3-none-any.whl", hash = "sha256:a8b2bc7bffae282281c8140a97d3aa9c14da0b136dfe83f850eea9a5f7470427", size = 229892 },
]
[[package]]
name = "s3transfer"
version = "0.11.4"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "botocore" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0f/ec/aa1a215e5c126fe5decbee2e107468f51d9ce190b9763cb649f76bb45938/s3transfer-0.11.4.tar.gz", hash = "sha256:559f161658e1cf0a911f45940552c696735f5c74e64362e515f333ebed87d679", size = 148419 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/86/62/8d3fc3ec6640161a5649b2cddbbf2b9fa39c92541225b33f117c37c5a2eb/s3transfer-0.11.4-py3-none-any.whl", hash = "sha256:ac265fa68318763a03bf2dc4f39d5cbd6a9e178d81cc9483ad27da33637e320d", size = 84412 },
]
[[package]]
name = "six"
version = "1.17.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/94/e7/b2c673351809dca68a0e064b6af791aa332cf192da575fd474ed7d6f16a2/six-1.17.0.tar.gz", hash = "sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81", size = 34031 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl", hash = "sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274", size = 11050 },
]
[[package]]
name = "tomli"
version = "2.2.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/18/87/302344fed471e44a87289cf4967697d07e532f2421fdaf868a303cbae4ff/tomli-2.2.1.tar.gz", hash = "sha256:cd45e1dc79c835ce60f7404ec8119f2eb06d38b1deba146f07ced3bbc44505ff", size = 17175 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/43/ca/75707e6efa2b37c77dadb324ae7d9571cb424e61ea73fad7c56c2d14527f/tomli-2.2.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:678e4fa69e4575eb77d103de3df8a895e1591b48e740211bd1067378c69e8249", size = 131077 },
{ url = "https://files.pythonhosted.org/packages/c7/16/51ae563a8615d472fdbffc43a3f3d46588c264ac4f024f63f01283becfbb/tomli-2.2.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:023aa114dd824ade0100497eb2318602af309e5a55595f76b626d6d9f3b7b0a6", size = 123429 },
{ url = "https://files.pythonhosted.org/packages/f1/dd/4f6cd1e7b160041db83c694abc78e100473c15d54620083dbd5aae7b990e/tomli-2.2.1-cp311-cp311-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:ece47d672db52ac607a3d9599a9d48dcb2f2f735c6c2d1f34130085bb12b112a", size = 226067 },
{ url = "https://files.pythonhosted.org/packages/a9/6b/c54ede5dc70d648cc6361eaf429304b02f2871a345bbdd51e993d6cdf550/tomli-2.2.1-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:6972ca9c9cc9f0acaa56a8ca1ff51e7af152a9f87fb64623e31d5c83700080ee", size = 236030 },
{ url = "https://files.pythonhosted.org/packages/1f/47/999514fa49cfaf7a92c805a86c3c43f4215621855d151b61c602abb38091/tomli-2.2.1-cp311-cp311-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:c954d2250168d28797dd4e3ac5cf812a406cd5a92674ee4c8f123c889786aa8e", size = 240898 },
{ url = "https://files.pythonhosted.org/packages/73/41/0a01279a7ae09ee1573b423318e7934674ce06eb33f50936655071d81a24/tomli-2.2.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:8dd28b3e155b80f4d54beb40a441d366adcfe740969820caf156c019fb5c7ec4", size = 229894 },
{ url = "https://files.pythonhosted.org/packages/55/18/5d8bc5b0a0362311ce4d18830a5d28943667599a60d20118074ea1b01bb7/tomli-2.2.1-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:e59e304978767a54663af13c07b3d1af22ddee3bb2fb0618ca1593e4f593a106", size = 245319 },
{ url = "https://files.pythonhosted.org/packages/92/a3/7ade0576d17f3cdf5ff44d61390d4b3febb8a9fc2b480c75c47ea048c646/tomli-2.2.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:33580bccab0338d00994d7f16f4c4ec25b776af3ffaac1ed74e0b3fc95e885a8", size = 238273 },
{ url = "https://files.pythonhosted.org/packages/72/6f/fa64ef058ac1446a1e51110c375339b3ec6be245af9d14c87c4a6412dd32/tomli-2.2.1-cp311-cp311-win32.whl", hash = "sha256:465af0e0875402f1d226519c9904f37254b3045fc5084697cefb9bdde1ff99ff", size = 98310 },
{ url = "https://files.pythonhosted.org/packages/6a/1c/4a2dcde4a51b81be3530565e92eda625d94dafb46dbeb15069df4caffc34/tomli-2.2.1-cp311-cp311-win_amd64.whl", hash = "sha256:2d0f2fdd22b02c6d81637a3c95f8cd77f995846af7414c5c4b8d0545afa1bc4b", size = 108309 },
{ url = "https://files.pythonhosted.org/packages/52/e1/f8af4c2fcde17500422858155aeb0d7e93477a0d59a98e56cbfe75070fd0/tomli-2.2.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:4a8f6e44de52d5e6c657c9fe83b562f5f4256d8ebbfe4ff922c495620a7f6cea", size = 132762 },
{ url = "https://files.pythonhosted.org/packages/03/b8/152c68bb84fc00396b83e7bbddd5ec0bd3dd409db4195e2a9b3e398ad2e3/tomli-2.2.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:8d57ca8095a641b8237d5b079147646153d22552f1c637fd3ba7f4b0b29167a8", size = 123453 },
{ url = "https://files.pythonhosted.org/packages/c8/d6/fc9267af9166f79ac528ff7e8c55c8181ded34eb4b0e93daa767b8841573/tomli-2.2.1-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:4e340144ad7ae1533cb897d406382b4b6fede8890a03738ff1683af800d54192", size = 233486 },
{ url = "https://files.pythonhosted.org/packages/5c/51/51c3f2884d7bab89af25f678447ea7d297b53b5a3b5730a7cb2ef6069f07/tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222", size = 242349 },
{ url = "https://files.pythonhosted.org/packages/ab/df/bfa89627d13a5cc22402e441e8a931ef2108403db390ff3345c05253935e/tomli-2.2.1-cp312-cp312-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:40741994320b232529c802f8bc86da4e1aa9f413db394617b9a256ae0f9a7f77", size = 252159 },
{ url = "https://files.pythonhosted.org/packages/9e/6e/fa2b916dced65763a5168c6ccb91066f7639bdc88b48adda990db10c8c0b/tomli-2.2.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:400e720fe168c0f8521520190686ef8ef033fb19fc493da09779e592861b78c6", size = 237243 },
{ url = "https://files.pythonhosted.org/packages/b4/04/885d3b1f650e1153cbb93a6a9782c58a972b94ea4483ae4ac5cedd5e4a09/tomli-2.2.1-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:02abe224de6ae62c19f090f68da4e27b10af2b93213d36cf44e6e1c5abd19fdd", size = 259645 },
{ url = "https://files.pythonhosted.org/packages/9c/de/6b432d66e986e501586da298e28ebeefd3edc2c780f3ad73d22566034239/tomli-2.2.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:b82ebccc8c8a36f2094e969560a1b836758481f3dc360ce9a3277c65f374285e", size = 244584 },
{ url = "https://files.pythonhosted.org/packages/1c/9a/47c0449b98e6e7d1be6cbac02f93dd79003234ddc4aaab6ba07a9a7482e2/tomli-2.2.1-cp312-cp312-win32.whl", hash = "sha256:889f80ef92701b9dbb224e49ec87c645ce5df3fa2cc548664eb8a25e03127a98", size = 98875 },
{ url = "https://files.pythonhosted.org/packages/ef/60/9b9638f081c6f1261e2688bd487625cd1e660d0a85bd469e91d8db969734/tomli-2.2.1-cp312-cp312-win_amd64.whl", hash = "sha256:7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4", size = 109418 },
{ url = "https://files.pythonhosted.org/packages/04/90/2ee5f2e0362cb8a0b6499dc44f4d7d48f8fff06d28ba46e6f1eaa61a1388/tomli-2.2.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:f4039b9cbc3048b2416cc57ab3bda989a6fcf9b36cf8937f01a6e731b64f80d7", size = 132708 },
{ url = "https://files.pythonhosted.org/packages/c0/ec/46b4108816de6b385141f082ba99e315501ccd0a2ea23db4a100dd3990ea/tomli-2.2.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:286f0ca2ffeeb5b9bd4fcc8d6c330534323ec51b2f52da063b11c502da16f30c", size = 123582 },
{ url = "https://files.pythonhosted.org/packages/a0/bd/b470466d0137b37b68d24556c38a0cc819e8febe392d5b199dcd7f578365/tomli-2.2.1-cp313-cp313-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:a92ef1a44547e894e2a17d24e7557a5e85a9e1d0048b0b5e7541f76c5032cb13", size = 232543 },
{ url = "https://files.pythonhosted.org/packages/d9/e5/82e80ff3b751373f7cead2815bcbe2d51c895b3c990686741a8e56ec42ab/tomli-2.2.1-cp313-cp313-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:9316dc65bed1684c9a98ee68759ceaed29d229e985297003e494aa825ebb0281", size = 241691 },
{ url = "https://files.pythonhosted.org/packages/05/7e/2a110bc2713557d6a1bfb06af23dd01e7dde52b6ee7dadc589868f9abfac/tomli-2.2.1-cp313-cp313-manylinux_2_5_i686.manylinux1_i686.manylinux_2_17_i686.manylinux2014_i686.whl", hash = "sha256:e85e99945e688e32d5a35c1ff38ed0b3f41f43fad8df0bdf79f72b2ba7bc5272", size = 251170 },
{ url = "https://files.pythonhosted.org/packages/64/7b/22d713946efe00e0adbcdfd6d1aa119ae03fd0b60ebed51ebb3fa9f5a2e5/tomli-2.2.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:ac065718db92ca818f8d6141b5f66369833d4a80a9d74435a268c52bdfa73140", size = 236530 },
{ url = "https://files.pythonhosted.org/packages/38/31/3a76f67da4b0cf37b742ca76beaf819dca0ebef26d78fc794a576e08accf/tomli-2.2.1-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:d920f33822747519673ee656a4b6ac33e382eca9d331c87770faa3eef562aeb2", size = 258666 },
{ url = "https://files.pythonhosted.org/packages/07/10/5af1293da642aded87e8a988753945d0cf7e00a9452d3911dd3bb354c9e2/tomli-2.2.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:a198f10c4d1b1375d7687bc25294306e551bf1abfa4eace6650070a5c1ae2744", size = 243954 },
{ url = "https://files.pythonhosted.org/packages/5b/b9/1ed31d167be802da0fc95020d04cd27b7d7065cc6fbefdd2f9186f60d7bd/tomli-2.2.1-cp313-cp313-win32.whl", hash = "sha256:d3f5614314d758649ab2ab3a62d4f2004c825922f9e370b29416484086b264ec", size = 98724 },
{ url = "https://files.pythonhosted.org/packages/c7/32/b0963458706accd9afcfeb867c0f9175a741bf7b19cd424230714d722198/tomli-2.2.1-cp313-cp313-win_amd64.whl", hash = "sha256:a38aa0308e754b0e3c67e344754dff64999ff9b513e691d0e786265c93583c69", size = 109383 },
{ url = "https://files.pythonhosted.org/packages/6e/c2/61d3e0f47e2b74ef40a68b9e6ad5984f6241a942f7cd3bbfbdbd03861ea9/tomli-2.2.1-py3-none-any.whl", hash = "sha256:cb55c73c5f4408779d0cf3eef9f762b9c9f147a77de7b258bef0a5628adc85cc", size = 14257 },
]
[[package]]
name = "urllib3"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/aa/63/e53da845320b757bf29ef6a9062f5c669fe997973f966045cb019c3f4b66/urllib3-2.3.0.tar.gz", hash = "sha256:f8c5449b3cf0861679ce7e0503c7b44b5ec981bec0d1d3795a07f1ba96f0204d", size = 307268 }
wheels = [
{ url = "https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl", hash = "sha256:1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df", size = 128369 },
]

View File

@@ -19,7 +19,7 @@ services:
    networks:
      - gitea
    volumes:
-     - /home/gitea/gitea:/data
+     - /home/gitea/gitea_data:/data
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "3000:3000"
@@ -40,6 +40,6 @@ services:
    networks:
      - gitea
    volumes:
-     - /home/gitea/postgres:/var/lib/postgresql/data
+     - /home/gitea/gitea_postgres:/var/lib/postgresql/data
    security_opt:
      - label=disable

View File

@@ -1,5 +1,16 @@
# Gitea
- [Gitea](#gitea)
- [Gitea on Rootless Podman](#gitea-on-rootless-podman)
- [Create the gitea user](#create-the-gitea-user)
- [Convert Compose to Quadlet](#convert-compose-to-quadlet)
- [Install Quadlets](#install-quadlets)
- [Gitea Runners](#gitea-runners)
- [Firewall Rules](#firewall-rules)
- [Install](#install)
- [Cache Cleanup](#cache-cleanup)
- [Email Notifications](#email-notifications)
## Gitea on Rootless Podman
### Create the gitea user
@@ -108,3 +119,18 @@ To run it every day at midnight: `crontab -e`
```bash
0 0 * * * yes | docker builder prune -a
```
## Email Notifications
In `/data/gitea/conf/app.ini` add the following (yes, the backticks around the password matter):
```conf
[mailer]
ENABLED = true
FROM = gitea@reeseapps.com
PROTOCOL = smtps
SMTP_ADDR = email-smtp.us-east-1.amazonaws.com
SMTP_PORT = 465
USER = ABC123
PASSWD = `ABC123...`
```
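After saving, restart Gitea so the mailer settings take effect. A sketch, assuming you SSH in as the gitea user and the quadlet generates a `gitea` unit:
```bash
ssh gitea systemctl --user restart gitea
```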

View File

@@ -0,0 +1,42 @@
# Iperf3 Server on Podman
- [Iperf3 Server on Podman](#iperf3-server-on-podman)
- [Install](#install)
- [Client](#client)
## Install
Create a new IPv6-enabled network:
```bash
# As root
podman network create iperf3 --ipv6
```
Create the iperf3 container
```bash
# As root
podman run \
--name iperf3 \
-d \
-p 5201:5201 \
--restart=always \
--network=iperf3 \
docker.io/networkstatic/iperf3:latest -s
```
Make sure the `podman-restart` service is enabled and running so the container comes back after a reboot:
```bash
systemctl enable --now podman-restart
```
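If you'd rather have systemd manage this, a quadlet can replace the `podman run` + `podman-restart` pair. A minimal sketch, assuming a root quadlet at `/etc/containers/systemd/iperf3.container` (path and unit name are assumptions):
```conf
[Container]
Image=docker.io/networkstatic/iperf3:latest
# arguments passed to the image entrypoint: run as a server
Exec=-s
PublishPort=5201:5201
# reuse the iperf3 network created above
Network=iperf3
AutoUpdate=registry

[Install]
# start on boot (system scope)
WantedBy=multi-user.target
```
After a `systemctl daemon-reload`, start it with `systemctl start iperf3`.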
## Client
```bash
podman run \
--rm \
--network=iperf3 \
docker.io/networkstatic/iperf3:latest -c 3dserver.reeselink.com -P 5
```

View File

@@ -0,0 +1,74 @@
# Jellyfin
They have podman rootless instructions!
<https://jellyfin.org/docs/general/installation/container/#managing-via-systemd>
## Install
### Create jellyfin btrfs volume
```bash
btrfs sub create /btrfs/jellyfin
```
Add /home/jellyfin mount to /etc/fstab
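A sketch of the entry, assuming the subvolume created above (the compression option is an assumption carried over from the media example below):
```conf
UUID=... /home/jellyfin btrfs subvol=jellyfin,compress=zstd:1 0 0
```
Then reload systemd and mount it: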
```bash
systemctl daemon-reload
mount -a --mkdir
```
### Create the jellyfin user
```bash
export JF_USER=jellyfin
useradd $JF_USER
su -l $JF_USER
ssh-keygen
exit
# reuse root's authorized_keys so you can SSH in as jellyfin
cp ~/.ssh/authorized_keys /home/$JF_USER/.ssh/authorized_keys
chown $JF_USER:$JF_USER /home/$JF_USER/.ssh/authorized_keys
# let jellyfin's user services run without an active login session
loginctl enable-linger $(id -u $JF_USER)
```
SSH into the server as jellyfin
```bash
systemctl --user enable podman-restart
systemctl --user enable --now podman.socket
mkdir -p ~/.config/containers/systemd
mkdir jellyfin-config jellyfin-cache jellyfin-media
```
### Install jellyfin
The quadlets below (`jellyfin.container` and `jellyfin.network`) go in `~/.config/containers/systemd/`:
```bash
# Copy the files to the server
scp -r podman/graduated/jellyfin/quadlets/. jellyfin:~/.config/containers/systemd/
ssh jellyfin systemctl --user daemon-reload
ssh jellyfin systemctl --user start jellyfin
ssh jellyfin journalctl --user -xeu jellyfin
ssh jellyfin systemctl --user enable --now podman-auto-update.timer
```
Update the jellyfin record in Caddy.
Open port 8096 in the firewall.
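A sketch for the firewall, assuming firewalld (Fedora's default):
```bash
firewall-cmd --permanent --add-port=8096/tcp
firewall-cmd --reload
```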
## Mounting Media Directory
Update /etc/fstab with the disk details; the example below shows the relevant systemd options on a btrfs entry, and an SMB sketch follows it.
Note:
- `x-systemd.automount` mounts the device only when it's first accessed.
- `x-systemd.mount-timeout=30` sets a 30-second mount timeout.
- `_netdev` ensures the device won't be mounted until after the network is available.
```conf
UUID=... /btrfs/some-name btrfs subvolid=5,compress=zstd:1,x-systemd.automount,x-systemd.mount-timeout=30,_netdev 0 0
```
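For the SMB share itself the entry looks similar; a sketch, assuming a cifs mount with a credentials file (host, share, and paths are placeholders):
```conf
//nas.example.com/media /home/jellyfin/jellyfin-media cifs credentials=/etc/cifs-creds,x-systemd.automount,x-systemd.mount-timeout=30,_netdev 0 0
```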

View File

@@ -0,0 +1,18 @@
[Container]
Image=docker.io/jellyfin/jellyfin:latest
AutoUpdate=registry
PublishPort=8096:8096/tcp
# keep the host user's UID inside the container so volume ownership matches
UserNS=keep-id
# disable SELinux label separation for this container
SecurityLabelDisable=true
Volume=/home/jellyfin/jellyfin-config:/config:Z
Volume=/home/jellyfin/jellyfin-cache:/cache:Z
Volume=/home/jellyfin/jellyfin-media:/media:Z
Network=jellyfin.network
[Service]
# Inform systemd of additional exit status
SuccessExitStatus=0 143
[Install]
# Start by default on boot
WantedBy=default.target

View File

@@ -0,0 +1,8 @@
[Unit]
Description=Jellyfin
[Network]
IPv6=true
[Install]
WantedBy=default.target

View File

@@ -183,6 +183,12 @@ Write down the passphrase. Store both at safe place(s).
docker exec nextcloud-aio-borgbackup borg key export /mnt/borgbackup/borg/
```
If you need to reset the borg backup repo:
```bash
docker exec nextcloud-aio-borgbackup rm /mnt/docker-aio-config/data/borg.config
```
## Maintenance Mode
```bash

5
podman/podman.md Normal file
View File

@@ -0,0 +1,5 @@
# Podman Server
## Notes
- Create a btrfs subvolume for each user (sketch below)
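A sketch of that per-user layout, following the jellyfin setup (`someuser` is a placeholder):
```bash
# one subvolume per service user, mounted as that user's home
btrfs sub create /btrfs/someuser
# add a matching /home/someuser entry to /etc/fstab, then:
systemctl daemon-reload
mount -a --mkdir
useradd someuser
loginctl enable-linger $(id -u someuser)
```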

View File

@@ -0,0 +1,72 @@
# Borg Backup
## Server Setup
<https://borgbackup.readthedocs.io/en/stable/deployment/central-backup-server.html#user-and-group>
User: backup
Group: backup
Shell: /bin/bash (or another shell capable of running the `borg serve` command)
Home: /home/backup
```bash
dnf install borgbackup
useradd backup
mkdir /home/backup/.ssh
touch /home/backup/.ssh/authorized_keys
chown -R backup:backup /home/backup/.ssh
```
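SSH is strict about permissions on these files:
```bash
chmod 700 /home/backup/.ssh
chmod 600 /home/backup/.ssh/authorized_keys
```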
### Adding a Client
Note: See [adding nextcloud](#adding-nextcloud) for nextcloud instructions here.
```bash
export BACKUP_HOST=""
ssh-keygen -C backup@${BACKUP_HOST} -f ~/.ssh/id_${BACKUP_HOST}
cat <<EOF >> ~/.ssh/config
Host ${BACKUP_HOST}
Hostname ${BACKUP_HOST}
IdentityFile ~/.ssh/id_${BACKUP_HOST}
User backup
Port 22
  TCPKeepAlive yes
EOF
```
Now on the server:
```bash
export CLIENT_FQDN=""
# Should look like ssh-rsa abcd1234 backup@fqdn.something.com
export SSH_PUBKEY=""
export AUTHKEY_ENTRY="command=\"cd /home/backup/repos/${CLIENT_FQDN}; borg serve --restrict-to-path /home/backup/repos/${CLIENT_FQDN}\",restrict ${SSH_PUBKEY}"
echo "$AUTHKEY_ENTRY" >> /home/backup/.ssh/authorized_keys
mkdir -p /home/backup/repos/${CLIENT_FQDN}
chown -R backup:backup /home/backup/repos
```
Then back on the client:
```bash
# connect once to accept the server's host key
ssh borg.reeselink.com
borg init --encryption none backup@${BACKUP_HOST}:root
```
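A quick smoke test of the new repo (archive name and path are arbitrary):
```bash
borg create --stats backup@${BACKUP_HOST}:root::test-{now} /etc/hostname
borg list backup@${BACKUP_HOST}:root
```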
#### Adding Nextcloud
Rather than creating a client, just set the borg backup location to:
```text
backup@borg.reeselink.com:nextcloud
```
Then run the backup. It will generate a public key; copy this into the server's `authorized_keys` file.
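To run client backups on a schedule, wrap the backup script in a systemd service and timer. A minimal sketch, assuming the script is installed at `/usr/local/scripts/borg-backup.sh` (hypothetical path and unit names):
```conf
# /etc/systemd/system/borg-backup.service
[Unit]
Description=Borg backup

[Service]
Type=oneshot
ExecStart=/usr/local/scripts/borg-backup.sh

# /etc/systemd/system/borg-backup.timer
[Unit]
Description=Run borg backup daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
```
Enable with `systemctl enable --now borg-backup.timer`.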

View File

@@ -0,0 +1,75 @@
#!/bin/sh
export BACKUP_HOST=driveripper.reeselink.com
sshfs ${BACKUP_HOST}:backup /backup
# Setting this, so the repo does not need to be given on the commandline:
export BORG_REPO='/backup'
# some helpers and error handling:
info() { printf "\n%s %s\n\n" "$( date )" "$*" >&2; }
trap 'echo $( date ) Backup interrupted >&2; exit 2' INT TERM
info "Starting backup"
# Backup the most important directories into an archive named after
# the machine this script is currently running on:
borg create \
    --verbose \
    --filter AME \
    --list \
    --stats \
    --show-rc \
    --compression none \
    --exclude-caches \
    --exclude 'home/*/.cache/*' \
    --exclude 'var/tmp/*' \
    ::'{hostname}-{now}' \
    /etc \
    /home \
    /root \
    /var
backup_exit=$?
info "Pruning repository"
# Use the `prune` subcommand to maintain 7 daily, 2 weekly and 1 monthly
# archives of THIS machine. The '{hostname}-*' matching is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
borg prune \
    --list \
    --glob-archives '{hostname}-*' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 2 \
    --keep-monthly 1
prune_exit=$?
# actually free repo disk space by compacting segments
info "Compacting repository"
borg compact
compact_exit=$?
# use highest exit code as global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))
global_exit=$(( compact_exit > global_exit ? compact_exit : global_exit ))
if [ ${global_exit} -eq 0 ]; then
info "Backup, Prune, and Compact finished successfully"
elif [ ${global_exit} -eq 1 ]; then
info "Backup, Prune, and/or Compact finished with warnings"
else
info "Backup, Prune, and/or Compact finished with errors"
fi
fusermount -u /backup
exit ${global_exit}

View File

@@ -1,15 +0,0 @@
# DDNS Service
Since we occasionally need an IPv4 address, we'll keep one published.
This creates and keeps updated IPv4 records for reeseapps.com and reeselink.com,
as specified in vars.yaml.
**NOTE**: This requires the aws cli to be installed on each node with
credentials that can modify records in route53. See
[aws_iam](/cloud/graduated/aws_iam/aws_iam.md) and
[aws_cli](/cloud/graduated/aws_cli/aws_cli.md)
```bash
ansible-playbook -i ansible/inventory.yaml systemd/graduated/ddns/install_ddns.yaml
```

View File

@@ -1,5 +0,0 @@
[Unit]
Description=Updates the IPv4 records with the current public IPV4 address
[Service]
ExecStart=/usr/local/scripts/ddns.sh

View File

@@ -1,29 +0,0 @@
#!/bin/bash
# Get public IP address (there are many ways to do it, I picked this way)
PUBLIC_IPV4=$(curl -4 ifconfig.me)
PUBLIC_IPV6=$(curl -6 ifconfig.me)
# Update reeselink records
cat /etc/ddns/ipv4_reeselink_record_template.json \
| jq '.Changes[0].ResourceRecordSet.ResourceRecords[0].Value = "'$PUBLIC_IPV4'"' \
> /etc/ddns/ipv4_reeselink_record.json
cat /etc/ddns/ipv6_reeselink_record_template.json \
| jq '.Changes[0].ResourceRecordSet.ResourceRecords[0].Value = "'$PUBLIC_IPV6'"' \
> /etc/ddns/ipv6_reeselink_record.json
# Update reeseapps records
cat /etc/ddns/ipv4_reeseapps_record_template.json \
| jq '.Changes[].ResourceRecordSet.ResourceRecords[0].Value = "'$PUBLIC_IPV4'"' \
> /etc/ddns/ipv4_reeseapps_record.json
cat /etc/ddns/ipv6_reeseapps_record_template.json \
| jq '.Changes[].ResourceRecordSet.ResourceRecords[0].Value = "'$PUBLIC_IPV6'"' \
> /etc/ddns/ipv6_reeseapps_record.json
# Update reeselink records
aws route53 change-resource-record-sets --hosted-zone-id {{ reeselink_zone_id }} --change-batch file:///etc/ddns/ipv4_reeselink_record.json
aws route53 change-resource-record-sets --hosted-zone-id {{ reeselink_zone_id }} --change-batch file:///etc/ddns/ipv6_reeselink_record.json
# Update reeseapps records
aws route53 change-resource-record-sets --hosted-zone-id {{ reeseapps_zone_id }} --change-batch file:///etc/ddns/ipv4_reeseapps_record.json
aws route53 change-resource-record-sets --hosted-zone-id {{ reeseapps_zone_id }} --change-batch file:///etc/ddns/ipv6_reeseapps_record.json

View File

@@ -1,78 +0,0 @@
- name: Create DDNS Service
hosts: 3dserver
vars_files:
- vars.yaml
- secrets/secret_vars.yaml
tasks:
- name: Ensure moreutils and jq are installed
ansible.builtin.dnf:
name:
- moreutils
- jq
state: present
- name: Create /usr/local/scripts dir
ansible.builtin.file:
path: /usr/local/scripts
state: directory
mode: '0755'
- name: Copy ddns.sh
template:
src: ddns.sh
dest: /usr/local/scripts/ddns.sh
owner: root
group: root
mode: '0755'
- name: Create /etc/ddns dir
ansible.builtin.file:
path: /etc/ddns
state: directory
mode: '0755'
- name: Copy IPv4 reeseapps_record_template.json
template:
src: ipv4_reeseapps_record_template.json.j2
dest: /etc/ddns/ipv4_reeseapps_record_template.json
owner: root
group: root
mode: '0644'
- name: Copy IPv4 reeselink_record_template.json
template:
src: ipv4_reeselink_record_template.json.j2
dest: /etc/ddns/ipv4_reeselink_record_template.json
owner: root
group: root
mode: '0644'
- name: Copy IPv6 reeseapps_record_template.json
template:
src: ipv6_reeseapps_record_template.json.j2
dest: /etc/ddns/ipv6_reeseapps_record_template.json
owner: root
group: root
mode: '0644'
- name: Copy IPv6 reeselink_record_template.json
template:
src: ipv6_reeselink_record_template.json.j2
dest: /etc/ddns/ipv6_reeselink_record_template.json
owner: root
group: root
mode: '0644'
- name: Copy ddns.service
template:
src: ddns.service
dest: /etc/systemd/system/ddns.service
owner: root
group: root
mode: '0644'
- name: Copy ddns.timer
template:
src: ddns.timer
dest: /etc/systemd/system/ddns.timer
owner: root
group: root
mode: '0644'
- name: Run ddns script
ansible.builtin.shell: /usr/local/scripts/ddns.sh
- name: Reload ddns timer
ansible.builtin.systemd_service:
state: restarted
name: ddns.timer
enabled: true

View File

@@ -1,20 +0,0 @@
{
"Comment": "Update Public IPV4 Address",
"Changes": [
{%- for item in records.reeseapps %}
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "{{ item }}.reeseapps.com",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": ""
}
]
}
}{{ ", " if not loop.last else "" }}
{%- endfor %}
]
}

View File

@@ -1,20 +0,0 @@
{
"Comment": "Update Public IPV4 Address",
"Changes": [
{%- for item in records.reeselink %}
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "{{ item }}.reeselink.com",
"Type": "A",
"TTL": 300,
"ResourceRecords": [
{
"Value": ""
}
]
}
}{{ ", " if not loop.last else "" }}
{%- endfor %}
]
}

View File

@@ -1,20 +0,0 @@
{
"Comment": "Update Public IPV4 Address",
"Changes": [
{%- for item in records.reeseapps %}
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "{{ item }}.reeseapps.com",
"Type": "AAAA",
"TTL": 300,
"ResourceRecords": [
{
"Value": ""
}
]
}
}{{ ", " if not loop.last else "" }}
{%- endfor %}
]
}

View File

@@ -1,20 +0,0 @@
{
"Comment": "Update Public IPV4 Address",
"Changes": [
{%- for item in records.reeselink %}
{
"Action": "UPSERT",
"ResourceRecordSet": {
"Name": "{{ item }}.reeselink.com",
"Type": "AAAA",
"TTL": 300,
"ResourceRecords": [
{
"Value": ""
}
]
}
}{{ ", " if not loop.last else "" }}
{%- endfor %}
]
}

View File

@@ -1,7 +0,0 @@
records:
reeseapps:
- nextcloud
- gitea
- git
reeselink:
- ipv4