Compare commits: 3ff805fa39...main (9 commits)

- 380d8f8e48
- 07a297f818
- d7224b038b
- fc62219db7
- 2ebd97c345
- 270e86bfd0
- 7305e3a35b
- aabbd8286f
- 37f7d442a1
.gitea/workflows/caddy.yaml (new file, 30 lines)

@@ -0,0 +1,30 @@

```yaml
name: Podman DDNS Image
run-name: Build and Push the Custom Caddy Image with Route53 DNS Certbot
on:
  push:
    paths:
      - active/podman_caddy/**
      - .gitea/workflows/caddy.yaml
  schedule:
    - cron: '@daily'
jobs:
  build-and-push-ddns:
    runs-on: ubuntu-latest
    if: gitea.ref == 'refs/heads/main'
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: Login to Gitea Registry
        uses: docker/login-action@v2
        with:
          registry: gitea.reeseapps.com
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push Docker image
        uses: https://github.com/docker/build-push-action@v5
        with:
          context: ${{ gitea.workspace }}/active/podman_caddy
          file: ${{ gitea.workspace }}/active/podman_caddy/Containerfile
          push: true
          tags: "gitea.reeseapps.com/services/caddy:latest,gitea.reeseapps.com/services/caddy:${{gitea.sha}}"
          no-cache: true
```
README.md (83 changed lines)

@@ -1,13 +1,34 @@

# Homelab

A project to store homelab stuff.
Welcome to my homelab!

Just here for the Arch distoolbox?
This repo is an in-flux collection of my personal notes, docs, and tutorials of
things I find interesting and self-host.

[Arch Distoolbox](active/software_distoolbox/distoolbox.md)
Take a look around!


- "Active" projects (/active) are in use today and generally fall into these
  categories:
  - `aws_` is for aws notes
  - `device_` is for hardware
  - `kubernetes_` is for helm charts or other kubernetes hosted software
  - `os_` is for operating system setup guides and notes
  - `podman_` is for containerized projects
  - `software_` is for cli tools, projects without a specific way to host them,
    or other misfits

All active projects will have a markdown file named after the project. This is
for quick access via shortcuts like `ctrl + p` in vscode. For example, I want
to check my notes for `virsh` so I would type `ctrl + p` "virsh" to open
"virsh.md".

"Retired" projects (/retired) is a graveyard of things I didn't want to delete.

"Template" projects (/templates) are quick templates for creating new active
projects with sane defaults.

I keep my GPG and SSH keys in `keys` if you want to add those to your keyring
or give me access to your servers.

## Table of Contents

@@ -18,6 +39,7 @@ Status](https://gitea.reeseapps.com/services/homelab/actions/workflows/distoolbo

- [inputrc](#inputrc)
- ["find ." shortcuts](#find--shortcuts)
- [tmux](#tmux)
- [bash](#bash)
- [SSH Setup](#ssh-setup)
- [Git GPG Commit Signing](#git-gpg-commit-signing)
- [Important Dates and Times](#important-dates-and-times)
@@ -62,6 +84,40 @@ find . -type d -exec chmod 755 {} \;

- Even Vertical Distribution: ctrl + b + alt + 2
- Swap pane order: ctrl + b + : -> swap-pane -t 0

### bash

<https://tecadmin.net/bash-special-variables/>

Here are some handy references for default bash variables

```text
$0 – The name of the script being executed.
$1-$9 – The first nine command-line arguments.
$# – The number of command-line arguments.
$* – All command-line arguments as a single string.
$@ – All command-line arguments as an array.
$? – The exit status of the last executed command.
$$ – The process ID of the current shell.
$! – The process ID of the last background command.
$- – Shows the current shell options or flags.
```
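
For a quick demonstration, a tiny script (hypothetical, not part of this repo) that prints a few of these:

```bash
#!/usr/bin/env bash
# demo.sh - print a handful of the special variables described above
echo "script name: $0"
echo "first arg:   $1"
echo "arg count:   $#"
echo "all args:    $*"
false
echo "last status: $?"   # 1, the exit status of `false`
echo "shell PID:   $$"
```

Running `bash demo.sh foo bar` prints the script name, `foo`, `2`, `foo bar`, `1`, and the script's PID.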

And here are the meanings of the shell options

```text
h – Remember the location of commands as they are looked up
i – Interactive shell
m – Job control is enabled
B – Brace expansion is enabled
H – History substitution is enabled
```

So to check if you are in an interactive shell:

```bash
[[ $- == *i* ]] && some_command_here
```

## SSH Setup

Generate a key (password protect it!)
@@ -176,12 +232,10 @@ All projects will be prefixed with one of the following categories:

- `device_`
- `os_`
- `cloud_`
- `systemd_`
- `software_`
- `podman_`
- `docker_`
- `kubernetes_`
- `software_`

Note, some projects will be named with just the prefix. These are projects for
configuring the underlying technology. The `podman` project, for example, will
@@ -203,24 +257,15 @@ be prefixed with the cloud's name, not the word "cloud". So AWS services will

be prefixed with `aws_` and azure would be `azure_`. This should make them more
searchable.

`systemd_` projects are designed to be installed with ansible and run via
systemd on a linux VM or other linux hardware.
`software_` projects record configuration for common software agnostic to
operating system or linux flavor.

`podman_` projects are either designed to be run as quadlets or as podman
containers outright.

`docker_` projects are either docker-compose or some form of docker run
command.

`kubernetes_` projects are helm, kustomize, kubectl, or some other kubernetes
compliant deployment.

`software_` projects record configuration for common software agnostic to
operating system or linux flavor.

`security_` projects record security best practices and might reference other
projects, but are usually just best practices and how-tos.

## Active Project Requirements

- [ ] Installation is documented

active/podman_caddy/Containerfile (new file, 8 lines)

@@ -0,0 +1,8 @@

```dockerfile
FROM docker.io/caddy:2-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/route53@v1.6.0

FROM docker.io/caddy:2

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
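
To build and smoke-test this image locally instead of waiting for the workflow above (a sketch; the tag mirrors the one CI pushes):

```bash
# Build the custom Caddy image from the repo root
podman build \
  -t gitea.reeseapps.com/services/caddy:latest \
  -f active/podman_caddy/Containerfile \
  active/podman_caddy

# Confirm the route53 DNS module was compiled in
podman run --rm gitea.reeseapps.com/services/caddy:latest caddy list-modules | grep route53
```
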
@@ -4,7 +4,7 @@ Description=Caddy

```diff
 [Container]
 AddCapability=NET_ADMIN
 ContainerName=caddy
-Image=docker.io/caddy:2
+Image=gitea.reeseapps.com/services/caddy:latest
 Network=host
 SecurityLabelDisable=true
 Volume=/etc/caddy:/etc/caddy
```
@@ -1,11 +1,24 @@

# Caddy Reverse Proxy

- [Caddy Reverse Proxy](#caddy-reverse-proxy)
- [Custom Caddy Image](#custom-caddy-image)
- [Install Caddy](#install-caddy)
- [Ansible](#ansible)
- [Manual](#manual)
- [Adding a new Caddy Record](#adding-a-new-caddy-record)

## Custom Caddy Image

This repo builds a custom caddy image with route53 DNS certbot support.

```bash
podman image pull gitea.reeseapps.com/services/caddy:latest
```

To upgrade the image, check [the caddy-dns route53
project](https://github.com/caddy-dns/route53/tags) releases and update the
`Containerfile` with the new version.
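
For example (a sketch; `v1.7.0` is a hypothetical newer release), bumping the version and pushing is enough to retrigger the `caddy.yaml` workflow, since its paths filter watches `active/podman_caddy/**`:

```bash
# Hypothetical bump of the caddy-dns/route53 plugin version
sed -i 's|route53@v1.6.0|route53@v1.7.0|' active/podman_caddy/Containerfile
git add active/podman_caddy/Containerfile
git commit -m "Bump caddy-dns/route53 to v1.7.0"
git push
```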

## Install Caddy

### Ansible

@@ -1,6 +1,6 @@

```diff
 services:
   gitea:
-    image: docker.gitea.com/gitea:1.24-rootless
+    image: docker.gitea.com/gitea:1.25-rootless
     container_name: gitea
     environment:
       - GITEA__database__DB_TYPE=postgres
```
@@ -6,7 +6,7 @@

- [Create the gitea user](#create-the-gitea-user)
- [Convert Compose to Quadlet](#convert-compose-to-quadlet)
- [Install Quadlets](#install-quadlets)
- [Upgrade Quadlets](#upgrade-quadlets)
- [Upgrade](#upgrade)
- [Editing Gitea Config](#editing-gitea-config)
- [Gitea Runners](#gitea-runners)
- [Firewall Rules](#firewall-rules)

@@ -102,12 +102,23 @@ systemctl --user restart gitea postgres

systemctl --user enable --now podman-auto-update.timer
```

### Upgrade Quadlets
### Upgrade

1. Check [the blog](https://blog.gitea.com/) for any breaking changes.
2. Update the `compose.yaml` with any needed changes
3. [Regenerate the quadlets](#convert-compose-to-quadlet)
4. Upload the new quadlets and restart the service

```bash
scp -r quadlets/. gitea:~/.config/containers/systemd/
ssh gitea systemctl --user daemon-reload
ssh gitea systemctl --user restart gitea postgres
# Upload quadlets and restart
export PODMAN_SERVER=3dserver
scp -r active/podman_gitea/quadlets/. $PODMAN_SERVER:/home/gitea/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R gitea:gitea /home/gitea/.config/containers/systemd/

ssh $PODMAN_SERVER
machinectl shell gitea@
systemctl --user daemon-reload
systemctl --user restart gitea postgres
```

### Editing Gitea Config

@@ -5,7 +5,7 @@ Requires=postgres.service

```diff
 AutoUpdate=registry
 ContainerName=gitea
 Environment=GITEA__database__DB_TYPE=postgres GITEA__database__HOST=postgres:5432 GITEA__database__NAME=gitea GITEA__database__USER=gitea GITEA__database__PASSWD=gitea
-Image=docker.gitea.com/gitea:1.24-rootless
+Image=docker.gitea.com/gitea:1.25-rootless
 Network=gitea.network
 PublishPort=3000:3000
 PublishPort=2222:2222
```
@@ -10,7 +10,7 @@

```diff
 services:
   immich-server:
     container_name: immich_server
-    image: ghcr.io/immich-app/immich-server:release
+    image: ghcr.io/immich-app/immich-server:v2.3.1
     # extends:
     #   file: hwaccel.transcoding.yml
     #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
```
@@ -4,7 +4,7 @@ Requires=redis.service database.service

```diff
 [Container]
 ContainerName=immich_server
 EnvironmentFile=.env
-Image=ghcr.io/immich-app/immich-server:release
+Image=ghcr.io/immich-app/immich-server:v2.3.1
 Network=immich.network
 PublishPort=2283:2283
 Volume=/home/immich/library:/data:Z
```
@@ -1,25 +1,17 @@

# Jellyfin

- [Jellyfin](#jellyfin)
- [Install](#install)
- [Upgrade](#upgrade)
- [Mounting Media Directory](#mounting-media-directory)

They have podman rootless instructions!

<https://jellyfin.org/docs/general/installation/container/#managing-via-systemd>

## Install

### Create jellyfin btrfs volume

```bash
btrfs sub create /btrfs/jellyfin
```

Add /home/jellyfin mount to /etc/fstab
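
A sketch of that fstab entry, following the pattern used for the other btrfs subvolumes in this repo (the UUID is left as a placeholder for the parent filesystem's UUID):

```bash
# Append a subvolume mount for the jellyfin home directory (placeholder UUID)
echo 'UUID=... /home/jellyfin btrfs subvol=jellyfin,compress=zstd:1,x-systemd.device-timeout=0 0 0' >> /etc/fstab
```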

```bash
systemctl daemon-reload
mount -a --mkdir
```

### Create the jellyfin user
1. Create the jellyfin user

```bash
useradd jellyfin

@@ -29,22 +21,38 @@ systemctl --user --machine=jellyfin@.host enable --now podman.socket

mkdir -p /home/jellyfin/.config/containers/systemd
```

### Install jellyfin

~/.config/containers/systemd/jellyfin.container
2. Update the jellyfin record in Caddy.
3. Open port 8096 in the firewall.
4. Copy the files to the server and start the service

```bash
# Copy the files to the server
scp -r active/podman_jellyfin/quadlets/. jellyfin:~/.config/containers/systemd/
ssh jellyfin systemctl --user daemon-reload
ssh jellyfin systemctl --user start jellyfin
ssh jellyfin journalctl --user -xeu jellyfin
ssh jellyfin systemctl --user enable --now podman-auto-update.timer
export PODMAN_SERVER=3dserver
scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/

ssh $PODMAN_SERVER
machinectl shell jellyfin@
systemctl --user daemon-reload
systemctl --user restart jellyfin
```

Update the jellyfin record in Caddy.
## Upgrade

Open port 8096 in the firewall.
1. Check [the blog](https://jellyfin.org/posts) for breaking changes
2. Update the `jellyfin.container` with the new image version
3. Update quadlets and restart the service

```bash
# Upload quadlets and restart
export PODMAN_SERVER=3dserver
scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/

ssh $PODMAN_SERVER
machinectl shell jellyfin@
systemctl --user daemon-reload
systemctl --user restart jellyfin
```

## Mounting Media Directory

@@ -1,12 +1,12 @@

```diff
 [Container]
-Image=docker.io/jellyfin/jellyfin:latest
+Image=docker.io/jellyfin/jellyfin:10.11.3
 AutoUpdate=registry
 PublishPort=8096:8096/tcp
 UserNS=keep-id
 SecurityLabelDisable=true
 Volume=/home/jellyfin/jellyfin-config:/config:Z
 Volume=/home/jellyfin/jellyfin-cache:/cache:Z
-Volume=/home/jellyfin/jellyfin-media:/media:Z
+Volume=/var/media:/media:Z
 Network=jellyfin.network

 [Service]
```
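
With `AutoUpdate=registry` and a pinned tag like this, `podman auto-update` only pulls when that exact tag is re-published; a quick way to preview what it would do (a sketch, run as the jellyfin user on the podman host):

```bash
# Enter the jellyfin user's session and preview pending auto-updates
machinectl shell jellyfin@
podman auto-update --dry-run
```
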
active/software_virsh/cloud-init/meta-data (new file, 1 line)

@@ -0,0 +1 @@

```yaml
local-hostname: cloud-init-vm
```
active/software_virsh/cloud-init/user-data (new file, 10 lines)

@@ -0,0 +1,10 @@

```yaml
#cloud-config
#vim:syntax=yaml

users:
  - default
  - name: ducoterra
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net
```
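
One way to sanity-check the cloud-config before booting a VM with it (a sketch; assumes a reasonably recent cloud-init is installed on the workstation):

```bash
# Validate the user-data against cloud-init's schema
cloud-init schema --config-file active/software_virsh/cloud-init/user-data
```
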
@@ -4,6 +4,9 @@ Virtual Machine Management

- [Virsh](#virsh)
- [Before you Begin](#before-you-begin)
- [Connecting to External Servers via SSH](#connecting-to-external-servers-via-ssh)
- [Configuring Aliases](#configuring-aliases)
- [One-off Connections](#one-off-connections)
- [Useful Virsh Commands](#useful-virsh-commands)
- [Virsh Networking](#virsh-networking)
- [Create a Virtual Network](#create-a-virtual-network)
@@ -13,9 +16,11 @@ Virtual Machine Management

- [Set a Static IP](#set-a-static-ip)
- [Creating VMs](#creating-vms)
- [Create VM with No Graphics and use an Existing QCOW2 Disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk)
- [Create a Cloud Init Compatible VM](#create-a-cloud-init-compatible-vm)
- [Create VM with Graphics using an ISO Installation Disk](#create-vm-with-graphics-using-an-iso-installation-disk)
- [Create VM using Host Device as Disk](#create-vm-using-host-device-as-disk)
- [Snapshots](#snapshots)
- [Virt Builder](#virt-builder)

## Before you Begin

@@ -27,6 +32,29 @@ Virtual Machine Management

6. Tell virsh to connect to your root system rather than your user: `export LIBVIRT_DEFAULT_URI='qemu:///system'`
7. Export your editor so virsh knows what to use: `export EDITOR=vim`

## Connecting to External Servers via SSH

<https://libvirt.org/uri.html#ssh-transport>

### Configuring Aliases

1. Edit `~/.config/libvirt/libvirt.conf`
2. Add your aliases

```bash
uri_aliases = [
  "3dserver=qemu+ssh://3dserver/system",
]
```

3. Export the alias: `export LIBVIRT_DEFAULT_URI=3dserver`

### One-off Connections

```bash
export LIBVIRT_DEFAULT_URI='qemu+ssh://user@server/system'
```

## Useful Virsh Commands

```bash
@@ -206,6 +234,44 @@ virt-install \

  --import --disk "path=${VM_DISK_PATH},bus=virtio"
```

#### Create a Cloud Init Compatible VM

<https://cloudinit.readthedocs.io/en/latest/reference/examples.html>

```bash
# Fedora
# https://fedoraproject.org/cloud/download
export VM_NAME="cloud-init-test-fedora"
export VM_DISK_PATH=/var/lib/libvirt/images/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2

# Rocky
# https://rockylinux.org/download
export VM_NAME="cloud-init-test-rocky"
export VM_DISK_PATH=/var/lib/libvirt/images/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2

# Ubuntu
# https://cloud-images.ubuntu.com/noble/current/
export VM_NAME="cloud-init-test-ubuntu"
export VM_DISK_PATH=/var/lib/libvirt/images/noble-server-cloudimg-amd64.img

# Debian
# https://cloud.debian.org/images/cloud/trixie/20251117-2299/
export VM_NAME="cloud-init-test-debian"
export VM_DISK_PATH=/var/lib/libvirt/images/debian-13-generic-amd64-20251117-2299.qcow2

# Set --cloud-init disable=no to allow cloud-init to run again after first boot
virt-install \
  --name "${VM_NAME}" \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics none \
  --import --disk "path=${VM_DISK_PATH},bus=virtio" \
  --cloud-init disable=yes,user-data=active/software_virsh/cloud-init/user-data,meta-data=active/software_virsh/cloud-init/meta-data
```

### Create VM with Graphics using an ISO Installation Disk

```bash
@@ -215,8 +281,8 @@ virt-install \

# `--console pty,target.type=virtio` adds a console connection
# For any command, use `virt-install --arg=?` to see all available options
export VM_NAME="fedora43-kinoite-test"
export VM_ISO_PATH=/var/lib/libvirt/iso/
export VM_DISK_PATH=/var/lib/libvirt/images/
export VM_ISO_PATH=/var/lib/libvirt/iso/fedora43.iso
export VM_DISK_PATH=/var/lib/libvirt/images/fedora43.qcow2
virt-install \
  --name "${VM_NAME}" \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
@@ -255,3 +321,33 @@ virt-install \

## Snapshots

See [qemu qcow2 snapshots](/active/software_qemu/qemu.md#qcow2-snapshots)

## Virt Builder

<https://docs.fedoraproject.org/en-US/fedora-server/virtualization/vm-install-diskimg-virtbuilder/#_minimal_effort_customization>

You can use virt-builder to build vm images

```bash
export VM_NAME=fedora42-vb
export VM_DISK_PATH=/var/lib/libvirt/images/fedora42-vb.qcow2

# Build the image
virt-builder fedora-42 \
  --format qcow2 --output ${VM_DISK_PATH} \
  --root-password locked:disabled \
  --hostname ${VM_NAME} \
  --selinux-relabel \
  --firstboot-command 'useradd -m -G wheel -p "" ducoterra ; chage -d 0 ducoterra'

# Run the built image
virt-install \
  --name "${VM_NAME}" \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics none \
  --console pty,target.type=virtio \
  --import --disk "path=${VM_DISK_PATH},bus=virtio"
```
@@ -8,15 +8,15 @@ installing the OS onto.

- [Foobar](#foobar)
- [Installation](#installation)
- [Network](#network)
- [Firewalld Configuration](#firewalld-configuration)
- [Firewall](#firewall)
- [Setup SSH](#setup-ssh)
- [Fail2Ban](#fail2ban)
- [Set eui64 on network interface](#set-eui64-on-network-interface)
- [Enable IPv6](#enable-ipv6)
- [Set up Network Bridge](#set-up-network-bridge)
- [Storage](#storage)
- [BTRFS Parent Volumes](#btrfs-parent-volumes)
- [BTRFS Snapshots](#btrfs-snapshots)
- [BTRFS Maintenance](#btrfs-maintenance)
- [Volumes](#volumes)
- [Snapshots](#snapshots)
- [Maintenance](#maintenance)
- [Backups](#backups)
- [TPM2 Luks Decryption](#tpm2-luks-decryption)
- [Users](#users)
- [Change your password](#change-your-password)
@@ -25,16 +25,8 @@ installing the OS onto.

- [Monitoring](#monitoring)
- [Disk Usage](#disk-usage)
- [Disk Wear](#disk-wear)
- [Backups](#backups)
- [Downgrading Kernel](#downgrading-kernel)
- [Apps](#apps)
- [Package Manager](#package-manager)
- [Install and Enable Cockpit](#install-and-enable-cockpit)
- [Install and Enable Virtualization](#install-and-enable-virtualization)
- [Install and Enable Containers](#install-and-enable-containers)
- [Troubleshooting](#troubleshooting)
- [Disable Swap](#disable-swap)
- [Disable Selinux](#disable-selinux)

## Installation

@@ -57,355 +49,85 @@ installing the OS onto.

## Network

### Firewalld Configuration
### Firewall

Set the default firewalld zone to `public`
Allow SSH

```bash
# Note, you probably don't have to do this. Check Cockpit Network -> Firewall
# firewall-cmd --set-default-zone=public
```

Firewalld will be on and blocking by default. You can check the zone and allowed ports with:

```bash
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-services
```

Allow Cockpit with

```bash
firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload
# Your firewall command here
```

### Setup SSH

See [README](/README.md#ssh-key-generation)

### Fail2Ban

On the server:
Install SSH

```bash
# Run tmux session
tmux

dnf install -y fail2ban

# Setup initial rules
cat <<EOF > /etc/fail2ban/jail.local
# Jail configuration additions for local installation

# Adjust the default configuration's default values
[DEFAULT]
# Optional: enter a trusted IP never to ban
# ignoreip = 2600:1700:1e6c:a81f::0/64
bantime = 6600
backend = auto

# The main configuration file defines all services but
# deactivates them by default. We have to activate those needed
[sshd]
enabled = true
EOF

systemctl enable fail2ban --now

# OPTIONAL: follow logs
tail -f /var/log/fail2ban.log
# Steps to install SSH server
```

Checking, banning, unbanning
### Enable IPv6

```bash
# See banned clients
fail2ban-client banned
# See jails (sshd should be one of them)
fail2ban-client status
# Unban a client from the sshd jail
fail2ban-client set sshd unbanip <IP address>
```

### Set eui64 on network interface

Ensures consistent mac-based IPv6 address.

```bash
nmcli connection modify Wired\ connection\ 1 ipv6.addr-gen-mode eui64
```
1. Disable privacy
2. Enable eui64 addressing

### Set up Network Bridge

Networking -> Add bridge -> add network interface and save

```bash
nmcli connection modify bridge0 ipv6.addr-gen-mode eui64
```
Create a network bridge for VMs

## Storage

### BTRFS Parent Volumes
### Volumes

In `/etc/fstab`, add the parent volumes for your disks mounted with subvolid=5 at `/btrfs` so you can see
all subvolumes.
Create volumes for `/home`, `/var`, `/var/log`, `/tmp`, etc.

```conf
UUID=64beedac-c0c9-48bf-a3ae-7707df6ebc97 /btrfs/3dserver-root btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=3c76b83f-7547-4c18-b08f-9e7902022b8d /btrfs/3dserver-data btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
```

### Snapshots

```bash
systemctl daemon-reload
mount -a --mkdir
```
If supported, set up automated snapshots to keep a history of your data.

### BTRFS Snapshots
### Maintenance

<https://en.opensuse.org/openSUSE:Snapper_Tutorial>
If supported, set up scrub and check jobs to ensure data integrity.

<http://snapper.io/manpages/snapper-configs.html>
### Backups

We'll be using snapper, a tool for automating and controlling snapshot behavior.

```bash
dnf install snapper dnf-plugin-snapper

# Allow selinux management
semanage permissive -a snapperd_t

# Note, if you mess something up you can run snapper -c root delete-config to delete
# System configs are stored in /etc/sysconfig/snapper as well as /etc/snapper
snapper -c root create-config /
snapper -c data create-config /path/to/other/data

# Enable automatic snapshots
systemctl enable --now snapper-timeline.timer
# Enable automatic cleanup
systemctl enable --now snapper-cleanup.timer
# Enable snapshots on boot
systemctl enable --now snapper-boot.timer

# List snapshots
snapper -c root list
# Create snapshot manually
snapper -c root create --description "test snapshot"
# Delete first snapshot
snapper -c root delete 1
```

Note - you probably don't want to keep yearly snapshots.
Edit `/etc/snapper/configs/root` and change `TIMELINE_LIMIT_YEARLY=` to `0`.

### BTRFS Maintenance

```bash
# Start a scrub in the foreground (-B) at /
btrfs scrub start -B /
```
Set up regular backups via a supported tool like Borg.

### TPM2 Luks Decryption

Mostly taken from here:
<https://gist.github.com/jdoss/777e8b52c8d88eb87467935769c98a95>

PCR reference for `--tpm2-pcrs` args

```text
0: System firmware executable
2: Kernel
4: Bootloader
7: Secure boot state
8: Cmdline
9: Initrd
```

Note, if your threat vector is people trying to get data off your old disks after throwing them
away, you can set `--tpm2-pcrs=""`. Someone could gain access to your encrypted partition if they
can access your machine physically by manipulating the boot parameters but you're guaranteed to
unlock despite updates and upgrades.

Basic commands:

```bash
# Run tmux session
tmux

# Show tpm2 devices
systemd-cryptenroll --tpm2-device=list
# Show crypto luks block devices
blkid -t TYPE=crypto_LUKS

# Enroll the tpm2 device with systemd-cryptenroll
systemd-cryptenroll /dev/nvme0n1p3 --tpm2-device=auto --tpm2-pcrs=""

####################
##### OPTIONAL #####
####################
# If you have lots of devices to decrypt (like a btrfs raid array), use these commands.
# Get all crypto luks partitions
blkid | grep crypto_LUKS
# List them all space-separated and drop the '/dev'
LUKS_DEVS="nvme0n1p4 nvme1n1p1 nvme2n1p1 nvme3n1p1 nvme5n1p1 nvme4n1p1 nvme6n1p1"

# Check that your list is good
for dev in $LUKS_DEVS; do echo will enroll /dev/$dev; done

# Enroll
for dev in $LUKS_DEVS; do \
  echo "Enrolling /dev/$dev"; \
  systemd-cryptenroll /dev/$dev --tpm2-device=auto --tpm2-pcrs=""; \
done
########################
##### END OPTIONAL #####
########################

# Append to command line args
echo "add_dracutmodules+=\" tpm2-tss \"" | tee /etc/dracut.conf.d/tpm2.conf
dracut -f
```

Finally, `vim /etc/default/grub` and add `rd.luks.options=tpm2-device=auto` to GRUB_CMDLINE_LINUX

```bash
# Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# Cross your fingers that you don't have to go type in the password manually.
# Yes, 60 full seconds is too long. Go type your password in.
```

If you need to reenroll for some reason:

```bash
# Reenroll
systemd-cryptenroll /dev/nvme0n1p3 --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=""
```
If you want automatic decryption via TPM2, set it up here.

## Users

### Change your password

In Cockpit navigate to Accounts -> user -> Set password
Change the default user's password.

## Updates

Describe what needs to be updated regularly and how to update it.

### Automatic Updates

In Cockpit navigate to software updates -> automatic updates -> install -> security updates only
Set up automatic updates or security patches.

## Monitoring

In Cockpit: Overview -> View metrics and history -> Install PCP Support -> Metrics settings -> Turn on Collect Metrics
Set up a monitoring service for resources or uptime.

### Disk Usage

TODO
Set up disk usage monitoring.

### Disk Wear

TODO

## Backups

See [borg.md](/active/systemd_borg/borg.md)

### Downgrading Kernel

```bash
dnf install koji

# Note: format is kernel-version.fedora-version
cd $(mktemp -d) && koji download-build --arch=x86_64 --arch=noarch kernel-6.11.3-300.fc41 && dnf install ./*

reboot
```
Set up physical disk wear monitoring for SSDs if applicable.
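
A few commands that could back the disk usage and wear checks above (a sketch; `smartctl` comes from smartmontools, which this repo does not otherwise reference, and the device name is a placeholder):

```bash
# Disk usage at a glance
df -h
btrfs filesystem usage /

# SSD/NVMe wear and SMART health (requires smartmontools)
smartctl -a /dev/nvme0n1
```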

## Apps

### Package Manager
Document any recommended apps here.

Configure dnf to use the fastest mirror:

```bash
echo 'fastestmirror=1' >> /etc/dnf/dnf.conf
dnf clean all
dnf update --refresh -y
# libdnf5 is required for ansible to work
dnf install -y glances tmux vim python3-libdnf5
```

### Install and Enable Cockpit

<https://cockpit-project.org/running>

```bash
dnf install cockpit
systemctl enable --now cockpit.socket
firewall-cmd --add-service=cockpit
firewall-cmd --add-service=cockpit --permanent
```

## Install and Enable Virtualization

Don't forget to add a btrfs subvolume for `/var/lib/libvirt`

```bash
# Since we already created our /btrfs mountpoint, this volume will show up automatically
# at /btrfs/libvirt
btrfs sub create /btrfs/libvirt
```

Now create an fstab entry that mounts the volume at /var/lib/libvirt

```bash
UUID=... /var/lib/libvirt btrfs subvol=libvirt,compress=zstd:1,x-systemd.device-timeout=0 0 0
```

Mount the libvirt volume:

```bash
systemctl daemon-reload
mount -a --mkdir
# Check that the mount was successful. This will print something if our mount worked.
mount | grep -i /var/lib/libvirt
```

Create a snapshot schedule for libvirt.

```bash
snapper -c libvirt create-config /var/lib/libvirt
# Don't forget to edit "YEARLY" at /etc/snapper/configs/libvirt
```

Install and enable the virtualization service.

```bash
dnf group install --with-optional virtualization
systemctl enable --now libvirtd
```

Install the cockpit machines application.

## Install and Enable Containers
If your package manager requires specific configuration, put it here.

## Troubleshooting

### Disable Swap

```bash
swapoff -a
zramctl --reset /dev/zram0
dnf -y remove zram-generator-defaults
```

### Disable Selinux

By default selinux will be enforcing. You can set it to permissive with

```bash
setenforce 0
```

And then make it permanent by editing `/etc/selinux/config` and inserting `SELINUX=permissive`.
Any troubleshooting recommendations for common issues.
@@ -11,7 +11,7 @@

## Setup foobar Project

1. Copy and rename this folder to active/systemd_foobar
1. Copy and rename this folder to active/software_foobar
2. Find and replace foobar with the name of the service
3. Write the foobar.service spec
4. (OPTIONAL) Write the foobar.timer spec

@@ -33,16 +33,16 @@

ansible-playbook \
  -i ansible/inventory.yaml \
  -l podman \
  active/systemd_foobar/install_foobar.yaml \
  -e "@active/systemd_foobar/vars.yaml" \
  -e "@active/systemd_foobar/secrets/vars.yaml"
  active/software_foobar/install_foobar.yaml \
  -e "@active/software_foobar/vars.yaml" \
  -e "@active/software_foobar/secrets/vars.yaml"
```

## Upgrade foobar

## Backup foobar

Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
Follow the [Borg Backup instructions](/active/software_borg/borg.md#set-up-a-client-for-backup)

## Restore foobar