Compare commits

4 commits · 10 changed files with 258 additions and 65 deletions. All checks
(Podman DDNS Image / build-and-push-ddns) were successful.

| SHA | Message | Date |
| --- | --- | --- |
| 380d8f8e48 | get cloud-init working consistently | 2025-11-26 13:54:01 -05:00 |
| 07a297f818 | Add better intro to README | 2025-11-21 13:43:23 -05:00 |
| d7224b038b | switch to custom caddy with route53 dns support | 2025-11-20 17:58:53 -05:00 |
| fc62219db7 | update jellyfin to 10.11.3 | 2025-11-20 07:24:26 -05:00 |

---

**File: `.gitea/workflows/caddy.yaml`** (new)

@@ -0,0 +1,30 @@

```yaml
name: Podman DDNS Image
run-name: Build and Push the Custom Caddy Image with Route53 DNS Certbot
on:
  push:
    paths:
      - active/podman_caddy/**
      - .gitea/workflows/caddy.yaml
  schedule:
    - cron: '@daily'
jobs:
  build-and-push-ddns:
    runs-on: ubuntu-latest
    if: gitea.ref == 'refs/heads/main'
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: Login to Gitea Registry
        uses: docker/login-action@v2
        with:
          registry: gitea.reeseapps.com
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push Docker image
        uses: https://github.com/docker/build-push-action@v5
        with:
          context: ${{ gitea.workspace }}/active/podman_caddy
          file: ${{ gitea.workspace }}/active/podman_caddy/Containerfile
          push: true
          tags: "gitea.reeseapps.com/services/caddy:latest,gitea.reeseapps.com/services/caddy:${{gitea.sha}}"
          no-cache: true
```
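The `tags:` field in the workflow is a single comma-separated string that produces two tags per build. A sketch of how it expands for one commit (the short sha is taken from this compare; CI substitutes the full `${{ gitea.sha }}`):

```shell
# Sketch: how the workflow's comma-separated tags field expands for one commit
REGISTRY_IMAGE=gitea.reeseapps.com/services/caddy
GITEA_SHA=380d8f8e48   # example short sha from this compare
TAGS="${REGISTRY_IMAGE}:latest,${REGISTRY_IMAGE}:${GITEA_SHA}"
echo "$TAGS"
# -> gitea.reeseapps.com/services/caddy:latest,gitea.reeseapps.com/services/caddy:380d8f8e48
```

Tagging both `:latest` and the commit sha keeps a moving tag for quadlets while leaving an immutable tag for rollbacks.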

---

**File: `README.md`**

```diff
@@ -1,13 +1,34 @@
 # Homelab
 
-A project to store homelab stuff.
-
-Just here for the Arch distoolbox?
-
-[Arch Distoolbox](active/software_distoolbox/distoolbox.md)
-
-![Arch Toolbox Status](https://gitea.reeseapps.com/services/homelab/actions/workflows/distoolbox.yaml/badge.svg?branch=main)
+Welcome to my homelab!
+
+This repo is an in-flux collection of my personal notes, docs, and tutorials of
+things I find interesting and self-host. Take a look around!
+
+"Active" projects (/active) are in use today and generally fall into these
+categories:
+
+- `aws_` is for aws notes
+- `device_` is for hardware
+- `kubernetes_` is for helm charts or other kubernetes hosted software
+- `os_` is for operating system setup guides and notes
+- `podman_` is for containerized projects
+- `software_` is for cli tools, projects without a specific way to host them,
+  or other misfits
+
+All active projects will have a markdown file named after the project. This is
+for quick access via shortcuts like `ctrl + p` in vscode. For example, I want
+to check my notes for `virsh` so I would type `ctrl + p` "virsh" to open
+"virsh.md".
+
+"Retired" projects (/retired) is a graveyard of things I didn't want to delete.
+
+"Template" projects (/templates) are quick templates for creating new active
+projects with sane defaults.
+
+I keep my GPG and SSH keys in `keys` if you want to add those to your keyring
+or give me access to your servers.
 
 ## Table of Contents
@@ -18,6 +39,7 @@ Status](https://gitea.reeseapps.com/actions/workflows/distoolbo
 - [inputrc](#inputrc)
 - ["find ." shortcuts](#find--shortcuts)
 - [tmux](#tmux)
+- [bash](#bash)
 - [SSH Setup](#ssh-setup)
 - [Git GPG Commit Signing](#git-gpg-commit-signing)
 - [Important Dates and Times](#important-dates-and-times)
```
````diff
@@ -62,6 +84,40 @@ find . -type d -exec chmod 755 {} \;
 - Even Vertical Distribution: ctrl + b + alt + 2
 - Swap pane order: ctrl + b + : -> swap-pane -t 0
 
+### bash
+
+<https://tecadmin.net/bash-special-variables/>
+
+Here are some handy references for default bash variables
+
+```text
+$0     The name of the script being executed.
+$1-$9  The first nine command-line arguments.
+$#     The number of command-line arguments.
+$*     All command-line arguments as a single string.
+$@     All command-line arguments as an array.
+$?     The exit status of the last executed command.
+$$     The process ID of the current shell.
+$!     The process ID of the last background command.
+$-     Shows the current shell options or flags.
+```
+
+And here are the meanings of the shell options
+
+```text
+h  Remember the location of commands as they are looked up
+i  Interactive shell
+m  Job control is enabled
+B  Brace expansion is enabled
+H  History substitution is enabled
+```
+
+So to check if you are in an interactive shell:
+
+```bash
+[[ $- == *i* ]] && some_command_here
+```
````
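The special variables above are easy to poke at in a function — a quick sketch (the `demo` function is illustrative, not from the repo; inside a function `$#`, `$1`, and `$*` refer to the function's own arguments):

```shell
# demo: print a few of bash's special variables for its own arguments
demo() {
  echo "count: $#"   # number of arguments passed to the function
  echo "first: $1"   # first positional argument
  echo "all: $*"     # all arguments joined into one string
}
demo alpha beta
# -> count: 2
#    first: alpha
#    all: alpha beta
```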
## SSH Setup

Generate a key (password protect it!)

```diff
@@ -176,12 +232,10 @@ All projects will be prefixed with one of the following categories:
 - `device_`
 - `os_`
-- `cloud_`
-- `systemd_`
+- `software_`
 - `podman_`
 - `docker_`
 - `kubernetes_`
-- `software_`
 
 Note, some projects will be named with just the prefix. These are projects for
 configuring the underlying technology. The `podman` project, for example, will
@@ -203,24 +257,15 @@ be prefixed with the cloud's name, not the word "cloud". So AWS services will
 be prefixed with `aws_` and azure would be `azure_`. This should make them more
 searchable.
 
-`systemd_` projects are designed to be installed with ansible and run via
-systemd on a linux VM or other linux hardware.
+`software_` projects record configuration for common software agnostic to
+operating system or linux flavor.
 
 `podman_` projects are either designed to be run as quadlets or as podman
 containers outright.
 
-`docker_` projects are either docker-compose or some form of docker run
-command.
-
 `kubernetes_` projects are helm, kustomize, kubectl, or some other kubernetes
 compliant deployment.
 
-`security_` projects record security best practices and might reference other
-projects, but are usually just best practices and how-tos.
-
 ## Active Project Requirements
 
 - [ ] Installation is documented
```

---

**File: `active/podman_caddy/Containerfile`** (new)

@@ -0,0 +1,8 @@

```dockerfile
FROM docker.io/caddy:2-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/route53@v1.6.0

FROM docker.io/caddy:2

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
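The CI workflow above normally builds this image, but it can also be built and pushed by hand — a sketch, assuming podman is installed, you run it from the repo root, and you are logged in to the registry:

```shell
# Build the custom Caddy image from the Containerfile
podman build -t gitea.reeseapps.com/services/caddy:latest \
    -f active/podman_caddy/Containerfile active/podman_caddy

# Push to the Gitea registry (requires `podman login gitea.reeseapps.com` first)
podman push gitea.reeseapps.com/services/caddy:latest
```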

---

**File: Caddy container quadlet**

```diff
@@ -4,7 +4,7 @@ Description=Caddy
 [Container]
 AddCapability=NET_ADMIN
 ContainerName=caddy
-Image=docker.io/caddy:2
+Image=gitea.reeseapps.com/services/caddy:latest
 Network=host
 SecurityLabelDisable=true
 Volume=/etc/caddy:/etc/caddy
```

---

**File: Caddy README**

````diff
@@ -1,11 +1,24 @@
 # Caddy Reverse Proxy
 
 - [Caddy Reverse Proxy](#caddy-reverse-proxy)
+  - [Custom Caddy Image](#custom-caddy-image)
   - [Install Caddy](#install-caddy)
     - [Ansible](#ansible)
     - [Manual](#manual)
   - [Adding a new Caddy Record](#adding-a-new-caddy-record)
 
+## Custom Caddy Image
+
+This repo builds a custom caddy image with route53 DNS certbot support.
+
+```bash
+podman image pull gitea.reeseapps.com/services/caddy:latest
+```
+
+To upgrade the image, check [the caddy-dns route53
+project](https://github.com/caddy-dns/route53/tags) releases and update the
+`Containerfile` with the new version.
+
 ## Install Caddy
 
 ### Ansible
````
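Upgrading the plugin is a one-line edit to the `Containerfile` — a sketch (the `v1.7.0` tag is hypothetical; substitute the actual latest release from the tags page):

```dockerfile
FROM docker.io/caddy:2-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/route53@v1.7.0  # hypothetical newer tag

FROM docker.io/caddy:2

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```

Pushing the change re-triggers the build workflow, since the Containerfile path is in its `paths` filter.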

---

**File: Jellyfin README**

````diff
@@ -1,51 +1,59 @@
 # Jellyfin
 
+- [Jellyfin](#jellyfin)
+  - [Install](#install)
+  - [Upgrade](#upgrade)
+  - [Mounting Media Directory](#mounting-media-directory)
+
 They have podman rootless instructions!
 <https://jellyfin.org/docs/general/installation/container/#managing-via-systemd>
 
 ## Install
 
-### Create jellyfin btrfs volume
-
-```bash
-btrfs sub create /btrfs/jellyfin
-```
-
-Add /home/jellyfin mount to /etc/fstab
-
-```bash
-systemctl daemon-reload
-mount -a --mkdir
-```
-
-### Create the jellyfin user
-
-```bash
-useradd jellyfin
-loginctl enable-linger $(id -u jellyfin)
-systemctl --user --machine=jellyfin@.host enable podman-restart
-systemctl --user --machine=jellyfin@.host enable --now podman.socket
-mkdir -p /home/jellyfin/.config/containers/systemd
-```
-
-### Install jellyfin
-
-~/.config/containers/systemd/jellyfin.container
-
-```bash
-# Copy the files to the server
-scp -r active/podman_jellyfin/quadlets/. jellyfin:~/.config/containers/systemd/
-ssh jellyfin systemctl --user daemon-reload
-ssh jellyfin systemctl --user start jellyfin
-ssh jellyfin journalctl --user -xeu jellyfin
-ssh jellyfin systemctl --user enable --now podman-auto-update.timer
-```
-
-Update the jellyfin record in Caddy.
-
-Open port 8096 in the firewall.
+1. Create the jellyfin user
+
+```bash
+useradd jellyfin
+loginctl enable-linger $(id -u jellyfin)
+systemctl --user --machine=jellyfin@.host enable podman-restart
+systemctl --user --machine=jellyfin@.host enable --now podman.socket
+mkdir -p /home/jellyfin/.config/containers/systemd
+```
+
+2. Update the jellyfin record in Caddy.
+3. Open port 8096 in the firewall.
+4. Copy the files to the server and start the service
+
+```bash
+export PODMAN_SERVER=3dserver
+scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
+ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/
+ssh $PODMAN_SERVER
+machinectl shell jellyfin@
+systemctl --user daemon-reload
+systemctl --user restart jellyfin
+```
+
+## Upgrade
+
+1. Check [the blog](https://jellyfin.org/posts) for breaking changes
+2. Update the `jellyfin.container` with the new image version
+3. Update quadlets and restart the service
+
+```bash
+# Upload quadlets and restart
+export PODMAN_SERVER=3dserver
+scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
+ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/
+ssh $PODMAN_SERVER
+machinectl shell jellyfin@
+systemctl --user daemon-reload
+systemctl --user restart jellyfin
+```
 
 ## Mounting Media Directory
 
 Update /etc/fstab with the smb disk details.
````

---

**File: `jellyfin.container` quadlet**

```diff
@@ -1,12 +1,12 @@
 [Container]
-Image=docker.io/jellyfin/jellyfin:latest
+Image=docker.io/jellyfin/jellyfin:10.11.3
 AutoUpdate=registry
 PublishPort=8096:8096/tcp
 UserNS=keep-id
 SecurityLabelDisable=true
 Volume=/home/jellyfin/jellyfin-config:/config:Z
 Volume=/home/jellyfin/jellyfin-cache:/cache:Z
-Volume=/home/jellyfin/jellyfin-media:/media:Z
+Volume=/var/media:/media:Z
 Network=jellyfin.network
 
 [Service]
```

---

**File: `active/software_virsh/cloud-init/meta-data`**

```diff
@@ -1 +1 @@
-local-hostname: al-2023
+local-hostname: cloud-init-vm
```

---

**File: `active/software_virsh/cloud-init/user-data`**

```diff
@@ -3,7 +3,7 @@
 users:
   - default
-  - name: ec2-user
+  - name: ducoterra
     lock_passwd: false
     sudo: ALL=(ALL) NOPASSWD:ALL
     ssh_authorized_keys:
```

---

**File: `virsh.md`**

````diff
@@ -4,6 +4,9 @@ Virtual Machine Management
 - [Virsh](#virsh)
   - [Before you Begin](#before-you-begin)
+  - [Connecting to External Servers via SSH](#connecting-to-external-servers-via-ssh)
+    - [Configuring Aliases](#configuring-aliases)
+    - [One-off Connections](#one-off-connections)
   - [Useful Virsh Commands](#useful-virsh-commands)
   - [Virsh Networking](#virsh-networking)
     - [Create a Virtual Network](#create-a-virtual-network)
@@ -13,10 +16,11 @@ Virtual Machine Management
     - [Set a Static IP](#set-a-static-ip)
   - [Creating VMs](#creating-vms)
     - [Create VM with No Graphics and use an Existing QCOW2 Disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk)
-    - [Cloud Init Compatible VMs](#cloud-init-compatible-vms)
+    - [Create a Cloud Init Compatible VM](#create-a-cloud-init-compatible-vm)
     - [Create VM with Graphics using an ISO Installation Disk](#create-vm-with-graphics-using-an-iso-installation-disk)
     - [Create VM using Host Device as Disk](#create-vm-using-host-device-as-disk)
   - [Snapshots](#snapshots)
+  - [Virt Builder](#virt-builder)
 
 ## Before you Begin
@@ -28,6 +32,29 @@ Virtual Machine Management
 6. Tell virsh to connect to your root system rather than your user: `export LIBVIRT_DEFAULT_URI='qemu:///system'`
 7. Export your editor so virsh knows what to use: `export EDITOR=vim`
 
+## Connecting to External Servers via SSH
+
+<https://libvirt.org/uri.html#ssh-transport>
+
+### Configuring Aliases
+
+1. Edit `~/.config/libvirt/libvirt.conf`
+2. Add your aliases
+
+```bash
+uri_aliases = [
+  "3dserver=qemu+ssh://3dserver/system",
+]
+```
+
+3. Export the alias: `export LIBVIRT_DEFAULT_URI=3dserver`
+
+### One-off Connections
+
+```bash
+export LIBVIRT_DEFAULT_URI='qemu+ssh://user@server/system'
+```
+
 ## Useful Virsh Commands
 
 ```bash
````
````diff
@@ -207,10 +234,42 @@ virt-install \
   --import --disk "path=${VM_DISK_PATH},bus=virtio"
 ```
 
-#### Cloud Init Compatible VMs
-
-```bash
---cloud-init user-data="active/software_virsh/cloud-init/user-data,meta-data=active/software_virsh/cloud-init/meta-data"
-```
+#### Create a Cloud Init Compatible VM
+
+<https://cloudinit.readthedocs.io/en/latest/reference/examples.html>
+
+```bash
+# Fedora
+# https://fedoraproject.org/cloud/download
+export VM_NAME="cloud-init-test-fedora"
+export VM_DISK_PATH=/var/lib/libvirt/images/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2
+
+# Rocky
+# https://rockylinux.org/download
+export VM_NAME="cloud-init-test-rocky"
+export VM_DISK_PATH=/var/lib/libvirt/images/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2
+
+# Ubuntu
+# https://cloud-images.ubuntu.com/noble/current/
+export VM_NAME="cloud-init-test-ubuntu"
+export VM_DISK_PATH=/var/lib/libvirt/images/noble-server-cloudimg-amd64.img
+
+# Debian
+# https://cloud.debian.org/images/cloud/trixie/20251117-2299/
+export VM_NAME="cloud-init-test-debian"
+export VM_DISK_PATH=/var/lib/libvirt/images/debian-13-generic-amd64-20251117-2299.qcow2
+
+# Set --cloud-init disable=no to allow cloud-init to run again after first boot
+virt-install \
+  --name "${VM_NAME}" \
+  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
+  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
+  --ram=8192 \
+  --os-variant=fedora41 \
+  --network bridge:virbr0 \
+  --graphics none \
+  --import --disk "path=${VM_DISK_PATH},bus=virtio" \
+  --cloud-init disable=yes,user-data="active/software_virsh/cloud-init/user-data,meta-data=active/software_virsh/cloud-init/meta-data"
+```
 
 ### Create VM with Graphics using an ISO Installation Disk
````
```diff
@@ -222,8 +281,8 @@ virt-install \
 # `--console pty,target.type=virtio` adds a console connection
 # For any command, use `virt-install --arg=?` to see all available options
 export VM_NAME="fedora43-kinoite-test"
-export VM_ISO_PATH=/var/lib/libvirt/iso/
-export VM_DISK_PATH=/var/lib/libvirt/images/
+export VM_ISO_PATH=/var/lib/libvirt/iso/fedora43.iso
+export VM_DISK_PATH=/var/lib/libvirt/images/fedora43.qcow2
 virt-install \
   --name "${VM_NAME}" \
   --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
```
````diff
@@ -262,3 +321,33 @@ virt-install \
 ## Snapshots
 
 See [qemu qcow2 snapshots](/active/software_qemu/qemu.md#qcow2-snapshots)
+
+## Virt Builder
+
+<https://docs.fedoraproject.org/en-US/fedora-server/virtualization/vm-install-diskimg-virtbuilder/#_minimal_effort_customization>
+
+You can use virt-builder to build vm images
+
+```bash
+export VM_NAME=fedora42-vb
+export VM_DISK_PATH=/var/lib/libvirt/images/fedora42-vb.qcow2
+
+# Build the image
+virt-builder fedora-42 \
+  --format qcow2 --output ${VM_DISK_PATH} \
+  --root-password locked:disabled \
+  --hostname ${VM_NAME} \
+  --selinux-relabel \
+  --firstboot-command 'useradd -m -G wheel -p "" ducoterra ; chage -d 0 ducoterra'
+
+# Run the built image
+virt-install \
+  --name "${VM_NAME}" \
+  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
+  --ram=8192 \
+  --os-variant=fedora41 \
+  --network bridge:virbr0 \
+  --graphics none \
+  --console pty,target.type=virtio \
+  --import --disk "path=${VM_DISK_PATH},bus=virtio"
+```
````