Compare commits

43 Commits between `3fed164193` and `main`:

```text
380d8f8e48  07a297f818  d7224b038b  fc62219db7  2ebd97c345  270e86bfd0
7305e3a35b  aabbd8286f  37f7d442a1  3ff805fa39  1ae62e70ed  91f4687c07
dc2df62d04  b75aac76c2  5161dced6e  d9ed144578  5516f9530b  621be95870
b526901546  b328081b59  113b859927  57ff005186  7ccedb9768  ef527abef4
75f4aaebf1  1396e09227  cbe8c4a369  2f88c75655  0f4b73720c  b97f41eb70
6df02e8dff  57ae6b7e72  e3ba1759c4  af70d1d396  5b474c7190  d94cd01008
afb27c512c  a500c8a572  c5748d81da  b38390029f  b116ea73ec  920aeef7f3
9038962f29
```
`.gitea/workflows/caddy.yaml` (new file, +30)

```yaml
name: Podman DDNS Image
run-name: Build and Push the Custom Caddy Image with Route53 DNS Certbot
on:
  push:
    paths:
      - active/podman_caddy/**
      - .gitea/workflows/caddy.yaml
  schedule:
    - cron: '@daily'
jobs:
  build-and-push-ddns:
    runs-on: ubuntu-latest
    if: gitea.ref == 'refs/heads/main'
    steps:
      - name: Check out repository code
        uses: actions/checkout@v4
      - name: Login to Gitea Registry
        uses: docker/login-action@v2
        with:
          registry: gitea.reeseapps.com
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}
      - name: Build and push Docker image
        uses: https://github.com/docker/build-push-action@v5
        with:
          context: ${{ gitea.workspace }}/active/podman_caddy
          file: ${{ gitea.workspace }}/active/podman_caddy/Containerfile
          push: true
          tags: "gitea.reeseapps.com/services/caddy:latest,gitea.reeseapps.com/services/caddy:${{gitea.sha}}"
          no-cache: true
```
`.gitignore` (vendored, +3)

````diff
@@ -7,4 +7,7 @@ __pycache__/
 .pytest_cache/
 .venv/
 .mypy_cache
 TODO.md
+eicar.com
+*.pp
+*.mod
````
`README.md`

````diff
@@ -1,13 +1,34 @@
 # Homelab
 
-A project to store homelab stuff.
+Welcome to my homelab!
 
-Just here for the Arch distoolbox?
-
-[Arch Distoolbox](active/software_distoolbox/distoolbox.md)
+This repo is an in-flux collection of my personal notes, docs, and tutorials of
+things I find interesting and self-host.
+
+Take a look around!
+
+
+
+- "Active" projects (/active) are in use today and generally fall into these
+  categories:
+  - `aws_` is for aws notes
+  - `device_` is for hardware
+  - `kubernetes_` is for helm charts or other kubernetes hosted software
+  - `os_` is for operating system setup guides and notes
+  - `podman_` is for containerized projects
+  - `software_` is for cli tools, projects without a specific way to host them,
+    or other misfits
+
+All active projects will have a markdown file named after the project. This is
+for quick access via shortcuts like `ctrl + p` in vscode. For example, I want
+to check my notes for `virsh` so I would type `ctrl + p` "virsh" to open
+"virsh.md".
+
+"Retired" projects (/retired) is a graveyard of things I didn't want to delete.
+
+"Template" projects (/templates) are quick templates for creating new active
+projects with sane defaults.
+
+I keep my GPG and SSH keys in `keys` if you want to add those to your keyring
+or give me access to your servers.
 
 ## Table of Contents
 
@@ -18,6 +39,7 @@ Status](https://gitea.reeseapps.com/services/homelab/actions/workflows/distoolbo
 - [inputrc](#inputrc)
 - ["find ." shortcuts](#find--shortcuts)
 - [tmux](#tmux)
+- [bash](#bash)
 - [SSH Setup](#ssh-setup)
 - [Git GPG Commit Signing](#git-gpg-commit-signing)
 - [Important Dates and Times](#important-dates-and-times)
@@ -62,34 +84,78 @@ find . -type d -exec chmod 755 {} \;
 - Even Vertical Distribution: ctrl + b + alt + 2
 - Swap pane order: ctrl + b + : -> swap-pane -t 0
 
-## SSH Setup
+### bash
 
+<https://tecadmin.net/bash-special-variables/>
+
+Here are some handy references for default bash variables
+
+```text
+$0 – The name of the script being executed.
+$1-$9 – The first nine command-line arguments.
+$# – The number of command-line arguments.
+$* – All command-line arguments as a single string.
+$@ – All command-line arguments as an array.
+$? – The exit status of the last executed command.
+$$ – The process ID of the current shell.
+$! – The process ID of the last background command.
+$- – Shows the current shell options or flags.
+```
````
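The special variables listed above can be exercised with a short, hypothetical function (all names here are illustrative, not from the repo):

```shell
# Demonstrate a few of the special bash variables described above.
demo() {
  echo "argc=$#"   # number of arguments
  echo "first=$1"  # first argument
  echo "all=$*"    # all arguments joined into one string
}
demo alpha beta
echo "status=$?"   # exit status of the last command
```

Running it prints `argc=2`, `first=alpha`, `all=alpha beta`, then `status=0`.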
````diff
 And here are the meanings of the shell options
 
 ```text
 h – Remember the location of commands as they are looked up
 i – Interactive shell
 m – Job control is enabled
 B – Brace expansion is enabled
 H – History substitution is enabled
 ```
 
 So to check if you are in an interactive shell:
 
 ```bash
 [[ $- == *i* ]] && some command here
 ```
 
 ## SSH Setup
 
 Generate a key (password protect it!)
 
 ```bash
 export REMOTE_USER=${USER}
 export REMOTE_HOST=something.com
 export REMOTE_PORT=22
 
 # The following is generated by the above variables. No tweaks necessary.
 export KEY_NAME=~/.ssh/id_${REMOTE_USER}_${REMOTE_HOST}
 export KEY_COMMENT="${USER}@${HOSTNAME}:${REMOTE_USER}@${REMOTE_HOST}"
 
 # Pick one of the below key types
 # ed25519
 ssh-keygen -C ${KEY_COMMENT} -f ${KEY_NAME} -t ed25519
 ssh-keygen -C ssh@ducoterra.net -t ed25519
 # rsa 4096
 ssh-keygen -C ${KEY_COMMENT} -f ${KEY_NAME} -t rsa -b 4096
 ssh-keygen -C ssh@ducoterra.net -t rsa -b 4096
 
 # Inspect a key
 ssh-keygen -l -f ~/.ssh/id_rsa
 
 cat <<EOF >> ~/.ssh/config
 Host ${REMOTE_HOST}
 	Hostname ${REMOTE_HOST}
 	IdentityFile ${KEY_NAME}
 	User ${REMOTE_USER}
 	Port ${REMOTE_PORT}
 EOF
 
 # Change the password
 ssh-keygen -p -f ~/.ssh/id_rsa
 ```
````
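The `KEY_NAME`/`KEY_COMMENT` pattern above can be traced with throwaway values (`alice`, `laptop`, `bob`, `example.com` are placeholders, not values from this repo):

```shell
# Trace what the exports above produce, using made-up values.
USER=alice HOSTNAME=laptop
REMOTE_USER=bob REMOTE_HOST=example.com
KEY_NAME=~/.ssh/id_${REMOTE_USER}_${REMOTE_HOST}
KEY_COMMENT="${USER}@${HOSTNAME}:${REMOTE_USER}@${REMOTE_HOST}"
echo "$KEY_NAME"     # $HOME/.ssh/id_bob_example.com
echo "$KEY_COMMENT"  # alice@laptop:bob@example.com
```

One key file and one config comment per user/host pair, so several remotes can coexist in `~/.ssh/` without clobbering each other.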
````diff
 In your ~/.ssh/config, add the following line to set the default key
 
 ```conf
 IdentityFile ~/.foo/identity
 ```
 
 Then add a host to your local computer
 
 ```bash
 Host <hostname>
 	Hostname <host.something.com or IP address>
 	User <remote user>
 	Port <remote port>
 ```
 
 And copy the key to a remote computer
 
 ```bash
 # Copy the generated key to the server using password auth. Assumes password auth enabled.
 ssh-copy-id -o PubkeyAuthentication=no -i ${KEY_NAME} ${REMOTE_USER}@${REMOTE_HOST}
 ssh-copy-id -f -i ~/.ssh/id_ed25519 ${REMOTE_USER}@${REMOTE_HOST}
 
 # Log into the server with your key
 ssh -i ${KEY_NAME} ${REMOTE_HOST}
 ```
@@ -166,12 +232,10 @@ All projects will be prefixed with one of the following categories:
 
 - `device_`
 - `os_`
-- `cloud_`
-- `systemd_`
 - `software_`
 - `podman_`
 - `docker_`
 - `kubernetes_`
 
 Note, some projects will be named with just the prefix. These are projects for
 configuring the underlying technology. The `podman` project, for example, will
@@ -193,24 +257,15 @@ be prefixed with the cloud's name, not the word "cloud". So AWS services will
 be prefixed with `aws_` and azure would be `azure_`. This should make them more
 searchable.
 
-`systemd_` projects are designed to be installed with ansible and run via
-systemd on a linux VM or other linux hardware.
 
 `software_` projects record configuration for common software agnostic to
 operating system or linux flavor.
 
 `podman_` projects are either designed to be run as quadlets or as podman
 containers outright.
 
 `docker_` projects are either docker-compose or some form of docker run
 command.
 
 `kubernetes_` projects are helm, kustomize, kubectl, or some other kubernetes
 compliant deployment.
 
 `security_` projects record security best practices and might reference other
 projects, but are usually just best practices and how-tos.
 
 ## Active Project Requirements
 
 - [ ] Installation is documented
````
````diff
@@ -25,6 +25,7 @@
 - [light](#light)
 - [binary sensor](#binary-sensor)
 - [lambda](#lambda)
+- [Display](#display)
 
 ## Install
 
@@ -177,3 +178,72 @@ assist_pipeline:
 > you call any of ESPHome’s many APIs directly. For example, here we’re
 > retrieving the current state of the end stop using .state and using it to
 > construct our cover state.
+
+### Display
+
+Display pages
+
+```yaml
+display:
+  - platform: st7735
+    spi_id: spi_lcd
+    model: "INITR_MINI160X80"
+    reset_pin: GPIO1
+    cs_pin: GPIO4
+    dc_pin: GPIO2
+    rotation: 270
+    device_width: 82
+    device_height: 161
+    col_start: 0
+    row_start: 0
+    eight_bit_color: true
+    invert_colors: true
+    use_bgr: true
+    auto_clear_enabled: true
+    id: my_display
+    pages:
+      - id: page1
+        lambda: |-
+          it.print(0, 10, id(font_roboto), "Connecting to");
+          it.print(0, 30, id(font_roboto), "Home Assistant...");
+      - id: page2
+        lambda: |-
+          it.print(0, 10, id(font_roboto), "Configuring");
+          it.print(0, 30, id(font_roboto), "sensors...");
+      - id: page3
+        lambda: |-
+          it.print(0, 10, id(font_roboto), "Loading");
+          it.print(0, 30, id(font_roboto), "important");
+          it.print(0, 50, id(font_roboto), "update...");
+      - id: page4
+        lambda: |-
+          it.image(0, 0, id(my_image), COLOR_OFF, COLOR_ON);
+```
+
+Switch pages
+
+```yaml
+interval:
+  - interval: 5s
+    then:
+      - display.page.show_next: my_display
+      - component.update: my_display
+```
+
+Show an image
+
+```yaml
+image:
+  - file: "test_tdongle_image.png"
+    type: RGB
+    id: my_image
+```
+
+Specify a font
+
+```yaml
+font:
+  - file: "gfonts://Roboto"
+    id: font_roboto
+    size: 20
+```
````
````diff
@@ -11,3 +11,13 @@ sudo curl -o /etc/udev/rules.d/50-qmk.rules https://raw.githubusercontent.com/qm
 sudo udevadm control --reload-rules
 sudo udevadm trigger
 ```
+
+## Beta Bios Updates
+
+```bash
+# With charger attached
+sudo fwupdmgr enable-remote lvfs-testing
+sudo fwupdmgr refresh --force
+sudo fwupdmgr get-updates
+sudo fwupdmgr update
+```
````
````diff
@@ -1,7 +1,99 @@
 # Yubikey
 
 - [Yubikey](#yubikey)
+  - [Configuration](#configuration)
+  - [Software](#software)
+  - [GPG](#gpg)
+    - [Saving GPG key to card](#saving-gpg-key-to-card)
+    - [Using the GPG key on a Yubikey](#using-the-gpg-key-on-a-yubikey)
+    - [Factory Reset](#factory-reset)
 
 ## Configuration
 
 1. You will likely need the [udev
    rules](https://support.yubico.com/hc/en-us/articles/360013708900-Using-Your-YubiKey-with-Linux)
    to use the AppImage configuration tool on linux even if your udev version is above 244.
 
+## Software
+
+The [Yubikey Manager](https://www.yubico.com/support/download/yubikey-manager/) is deprecated.
+
+Use the [Yubikey Authenticator](https://www.yubico.com/products/yubico-authenticator/) for the GUI.
+
+## GPG
+
+### Saving GPG key to card
+
+<https://support.yubico.com/hc/en-us/articles/360013790259-Using-Your-YubiKey-with-OpenPGP>
+
+On Fedora you'll need to add the following polkit rules to access your smart card.
+
+```bash
+export MY_USER=ducoterra
+cat <<EOF > /etc/polkit-1/rules.d/10-pcsc-custom.rules
+polkit.addRule(function(action, subject) {
+    if (action.id == "org.debian.pcsc-lite.access_pcsc" &&
+        subject.user == "${MY_USER}") {
+        return polkit.Result.YES;
+    }
+});
+
+polkit.addRule(function(action, subject) {
+    if (action.id == "org.debian.pcsc-lite.access_card" &&
+        action.lookup("reader") == 'Yubico YubiKey OTP+FIDO+CCID 00 00' &&
+        subject.user == "${MY_USER}") {
+        return polkit.Result.YES;
+    }
+});
+EOF
+```
+
+Now you can add your key to your card.
+
+```bash
+gpg --edit-key 1234ABC
+
+# Save both the signature and authentication keys
+> keytocard
+
+# Do not save or your key will be deleted locally
+> quit
+```
+
+Check the keys on the yubikey with
+
+```bash
+gpg --card-status
+```
+
+Once your keys have been loaded, change the pin.
+
+```bash
+gpg --change-pin
+```
+
+### Using the GPG key on a Yubikey
+
+<https://github.com/drduh/YubiKey-Guide?tab=readme-ov-file#notes>
+
+```bash
+export GPG_EMAIL='myemail@example.com'
+
+# Import the public key. Without this the key won't show up.
+gpg --auto-key-locate hkps://keys.openpgp.org --locate-keys ${GPG_EMAIL}
+
+# Trust the key
+gpg --quick-set-ownertrust ${GPG_EMAIL} full
+
+# Yubikey should now show up
+gpg --list-secret-keys
+```
+
+### Factory Reset
+
+```bash
+gpg --edit-card
+
+> admin
+> factory-reset
+```
````
````diff
@@ -1,3 +1,13 @@
 # Kubernetes
 
-See [k3s](/active/systemd_k3s/k3s.md)
+## CLI Tools
+
+kubectl: <https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>
+
+helm: <https://helm.sh/docs/intro/install/>
+
+## Install a Kubernetes Server
+
+For k3s, see [k3s](/active/systemd_k3s/k3s.md)
+
+For k0s, see [k0s](/active/systemd_k0s/k0s.md)
````
`active/kubernetes_gitea/gitea-demo-values.yaml` (new file, +11)

```yaml
service:
  http:
    type: LoadBalancer
    externalTrafficPolicy: Cluster
    annotations:
      metallb.io/allow-shared-ip: gitea
  ssh:
    type: LoadBalancer
    externalTrafficPolicy: Cluster
    annotations:
      metallb.io/allow-shared-ip: gitea
```
````diff
@@ -21,14 +21,10 @@ ingress:
 persistence:
   enabled: true
   create: true
-  storageClass: zfs-iscsi-enc0
   claimName: data-gitea-staging-0
   annotations:
     "helm.sh/resource-policy": keep
 
-global:
-  storageClass: zfs-iscsi-enc1
-
 postgresql:
   enabled: true
   image:
@@ -36,7 +32,6 @@ postgresql:
   primary:
     persistence:
       enabled: true
-      storageClass: zfs-iscsi-enc1
       annotations:
         "helm.sh/resource-policy": keep
````
````diff
@@ -1,6 +1,7 @@
 # Gitea
 
 - [Gitea](#gitea)
+  - [Demo](#demo)
   - [Staging](#staging)
   - [Install](#install)
   - [Backup and Restore](#backup-and-restore)
@@ -14,6 +15,17 @@ they decide to change things. This is the first chart (besides ingress-nginx) wh
 we need to pay attention to the MetalLB annotation. This has been set in the values.yaml
 file.
 
+## Demo
+
+```bash
+helm upgrade --install \
+  gitea \
+  gitea-charts/gitea \
+  --values active/kubernetes_gitea/gitea-demo-values.yaml \
+  --namespace gitea \
+  --create-namespace
+```
+
 ## Staging
 
 There is a `gitea-staging.yaml` file with staging values. This should be installed in
````
````diff
@@ -10,8 +10,14 @@
 
 ```bash
 # Download the updated template from github
-kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.31" > active/kubernetes_local-path-provisioner/local-path-storage.yaml
+kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.32" > active/kubernetes_local-path-provisioner/local-path-storage.yaml
 
 # Apply customizations (ssd/hdd storage, read write many support)
 kubectl kustomize active/kubernetes_local-path-provisioner | kubectl apply -f -
 ```
+
+Mark the class as default
+
+```bash
+kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+```
````
````diff
@@ -176,7 +176,7 @@ spec:
               fieldPath: metadata.namespace
         - name: CONFIG_MOUNT_PATH
           value: /etc/config/
-        image: rancher/local-path-provisioner:v0.0.31
+        image: rancher/local-path-provisioner:v0.0.32
         imagePullPolicy: IfNotPresent
         name: local-path-provisioner
         volumeMounts:
````
````diff
@@ -5,6 +5,15 @@ below installs nimcraft. For each installation you'll want to create your own va
 with a new port. The server-downloader is called "minecraft_get_server" and is available on
 [Github](https://github.com/ducoterra/minecraft_get_server).
 
+After installing, you can run admin commands (like whitelisting players) by
+attaching to the container:
+
+```bash
+kubectl attach -it <pod>
+
+> /whitelist add ducoterra
+```
+
 ## Testing
 
 ```bash
````
````diff
@@ -56,10 +56,10 @@ spec:
             value: "1"
         resources:
           requests:
-            memory: {{ div .Values.max_ram 2 }}Gi
+            memory: "{{ div .Values.max_ram 2 }}Gi"
             cpu: 1m
           limits:
-            memory: {{ add 1 .Values.max_ram }}Gi
+            memory: "{{ add 1 .Values.max_ram }}Gi"
             cpu: {{ .Values.max_cpu | quote }}
       volumes:
         - name: data
````
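The quoting change above doesn't alter the arithmetic, it only forces the rendered value to be a YAML string. What `div` and `add` compute can be sketched in shell (`max_ram=8` is an illustrative value, not taken from the chart's values file):

```shell
# Reproduce the Helm template arithmetic: `div` is integer division,
# `add` is integer addition; both results get a "Gi" suffix here.
max_ram=8
request="$(( max_ram / 2 ))Gi"  # {{ div .Values.max_ram 2 }}Gi
limit="$(( 1 + max_ram ))Gi"    # {{ add 1 .Values.max_ram }}Gi
echo "request=$request limit=$limit"
```

So an 8 Gi server requests 4Gi and is capped at 9Gi, leaving 1 Gi of headroom over the JVM's maximum.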
````diff
@@ -5,7 +5,6 @@ metadata:
   annotations:
     "helm.sh/resource-policy": keep
 spec:
-  storageClassName: ssd
   accessModes:
     - ReadWriteOnce
   resources:
````
````diff
@@ -2,11 +2,7 @@ apiVersion: v1
 kind: Service
 metadata:
   name: {{ .Release.Name }}
-  annotations:
-    metallb.universe.tf/address-pool: "external"
-    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
 spec:
-  ipFamilies: ["IPv6"]
   externalTrafficPolicy: Cluster
   selector:
     app: {{ .Release.Name }}
````
````diff
@@ -70,10 +70,15 @@ Follow most of the instructions here: <https://wiki.archlinux.org/title/Installa
 
 ```bash
 gpg --auto-key-locate clear,wkd -v --locate-external-key pierre@archlinux.org
-gpg --keyserver-options auto-key-retrieve --verify archlinux-...
+gpg --verify signature_file.sig archlinux.iso
 ```
 
 3. Create a bootable ISO <https://wiki.archlinux.org/title/USB_flash_installation_medium>
 
 ```bash
 dd bs=4M if=path/to/archlinux-version-x86_64.iso of=/dev/disk/by-id/usb-My_flash_drive conv=fsync oflag=direct status=progress
 ```
 
 1. If you are booting into a VM, create an ISO with installation files so you don't have to
    copy-paste:
````
````diff
@@ -3,7 +3,7 @@
 - [Fedora Kinoite](#fedora-kinoite)
   - [TPM2 Luks Decryption](#tpm2-luks-decryption)
   - [Podman](#podman)
-  - [Autostarting services with quadlets](#autostarting-services-with-quadlets)
+  - [Docker Compose and Docker Buildkit with Rootless Podman](#docker-compose-and-docker-buildkit-with-rootless-podman)
   - [rpm-ostree](#rpm-ostree)
   - [Git, Vim, etc](#git-vim-etc)
   - [Libvirt, Qemu, KVM](#libvirt-qemu-kvm)
@@ -71,6 +71,19 @@ export REGISTRY_AUTH_FILE=$HOME/.podman-auth.json
 
 Source that and then run `podman login` to create the file.
 
+### Docker Compose and Docker Buildkit with Rootless Podman
+
+Allows you to use podman with full docker-compose compatibility.
+
+<https://emersion.fr/blog/2025/using-podman-compose-and-buildkit/>
+
+```bash
+rpm-ostree install docker-compose docker-buildx
+reboot
+systemctl --user enable --now podman.socket
+docker context create podman --docker host=unix://$XDG_RUNTIME_DIR/podman/podman.sock
+docker context use podman
+```
+
 ### Autostarting services with quadlets
 
 If you want to run something as your user at boot (like a systemd process, think ollama) you can
````
````diff
@@ -561,7 +561,7 @@ dnf upgrade --refresh
 reboot
 
 # Set the releasever to the version you want to upgrade to
-dnf system-upgrade download --releasever=42
+dnf system-upgrade download --releasever=43
 dnf system-upgrade reboot
 ```
@@ -666,6 +666,8 @@ mkfs.xfs /dev/mapper/vg0-docker--data
 
 ```bash
 nmcli connection modify Wired\ connection\ 1 ipv6.addr-gen-mode eui64
+nmcli connection modify Wired\ connection\ 1 ipv6.ip6-privacy disabled
 systemctl restart NetworkManager
 ```
 
 ### Install and Enable Cockpit
 
@@ -603,7 +603,7 @@ flatpak install org.gnome.Evolution
 # Virtualization
 sudo dnf group install --with-optional virtualization
 
-sudo systemctl start libvirtd
+sudo systemctl enable --now libvirtd virtnetworkd.service
 ```
 
 ## NVM
@@ -642,6 +642,17 @@ dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-co
 systemctl enable --now docker
 ```
 
+Or use docker with podman with full docker-compose compatibility.
+
+<https://emersion.fr/blog/2025/using-podman-compose-and-buildkit/>
+
+```bash
+dnf install -y docker-compose docker-buildx
+systemctl --user enable --now podman.socket
+docker context create podman --docker host=unix://$XDG_RUNTIME_DIR/podman/podman.sock
+docker context use podman
+```
+
 ## Boxes
 
 Virtualization at its boxiest.
````
````diff
@@ -1,9 +0,0 @@
-# Selinux Findings
-
-## Cloning a Virtual Machine
-
-```bash
-cd active/os_fedora/selinux_policies
-sudo ausearch -c 'rpc-virtstorage' --raw | audit2allow -M my-rpcvirtstorage
-sudo semodule -X 300 -i my-rpcvirtstorage.pp
-```
````
`active/os_fedora/selinux_policies/clamav-notifysend.te` (new file, +14)

```text
module clamav-notifysend 1.0;

require {
	type session_dbusd_tmp_t;
	type antivirus_t;
	type unconfined_dbusd_t;
	class sock_file write;
	class unix_stream_socket connectto;
}

#============= antivirus_t ==============
allow antivirus_t session_dbusd_tmp_t:sock_file write;
allow antivirus_t unconfined_dbusd_t:unix_stream_socket connectto;
```
`active/os_fedora/selinux_policies/clamav-sudo.te` (new file, +29)

```text
module clamav-sudo 1.0;

require {
	type antivirus_t;
	type sudo_exec_t;
	type systemd_logind_var_run_t;
	type pidfs_t;
	type chkpwd_exec_t;
	type systemd_logind_t;
	class file { execute execute_no_trans map };
	class netlink_audit_socket { create nlmsg_relay read write };
	class capability { audit_write sys_resource };
	class process { setrlimit setsched };
	class sock_file write;
	class unix_stream_socket connectto;
	class filesystem getattr;
}

#============= antivirus_t ==============
allow antivirus_t chkpwd_exec_t:file { execute execute_no_trans };
allow antivirus_t pidfs_t:filesystem getattr;
allow antivirus_t self:capability { audit_write sys_resource };
allow antivirus_t self:netlink_audit_socket { create nlmsg_relay write };
allow antivirus_t self:netlink_audit_socket read;
allow antivirus_t self:process { setrlimit setsched };
allow antivirus_t sudo_exec_t:file map;
allow antivirus_t systemd_logind_t:unix_stream_socket connectto;
allow antivirus_t systemd_logind_var_run_t:sock_file write;
```
`active/os_fedora/selinux_policies/clamav-unixchkpwd.te` (new file, +23)

```text
module clamav-unixchkpwd 1.0;

require {
	type chkpwd_t;
	type user_devpts_t;
	type antivirus_t;
	type shadow_t;
	type init_t;
	class chr_file { read write };
	class file { getattr open read };
	class process siginh;
}

#============= antivirus_t ==============
allow antivirus_t shadow_t:file { open read };
allow antivirus_t shadow_t:file getattr;

#============= chkpwd_t ==============
allow chkpwd_t user_devpts_t:chr_file { read write };

#============= init_t ==============
allow init_t chkpwd_t:process siginh;
```

Binary file not shown.
`active/podman_caddy/Containerfile` (new file, +8)

```dockerfile
FROM docker.io/caddy:2-builder AS builder

RUN xcaddy build \
    --with github.com/caddy-dns/route53@v1.6.0

FROM docker.io/caddy:2

COPY --from=builder /usr/bin/caddy /usr/bin/caddy
```
````diff
@@ -4,7 +4,7 @@ Description=Caddy
 [Container]
 AddCapability=NET_ADMIN
 ContainerName=caddy
-Image=docker.io/caddy:2
+Image=gitea.reeseapps.com/services/caddy:latest
 Network=host
 SecurityLabelDisable=true
 Volume=/etc/caddy:/etc/caddy
````
````diff
@@ -1,11 +1,24 @@
 # Caddy Reverse Proxy
 
 - [Caddy Reverse Proxy](#caddy-reverse-proxy)
+  - [Custom Caddy Image](#custom-caddy-image)
   - [Install Caddy](#install-caddy)
     - [Ansible](#ansible)
     - [Manual](#manual)
   - [Adding a new Caddy Record](#adding-a-new-caddy-record)
 
+## Custom Caddy Image
+
+This repo builds a custom caddy image with route53 DNS certbot support.
+
+```bash
+podman image pull gitea.reeseapps.com/services/caddy:latest
+```
+
+To upgrade the image, check [the caddy-dns route53
+project](https://github.com/caddy-dns/route53/tags) releases and update the
+`Containerfile` with the new version.
+
 ## Install Caddy
 
 ### Ansible
````
````diff
@@ -1,6 +1,6 @@
 services:
   gitea:
-    image: docker.gitea.com/gitea:1.24-rootless
+    image: docker.gitea.com/gitea:1.25-rootless
     container_name: gitea
     environment:
       - GITEA__database__DB_TYPE=postgres
````
````diff
@@ -6,7 +6,7 @@
 - [Create the gitea user](#create-the-gitea-user)
 - [Convert Compose to Quadlet](#convert-compose-to-quadlet)
 - [Install Quadlets](#install-quadlets)
-- [Upgrade Quadlets](#upgrade-quadlets)
+- [Upgrade](#upgrade)
 - [Editing Gitea Config](#editing-gitea-config)
 - [Gitea Runners](#gitea-runners)
 - [Firewall Rules](#firewall-rules)
@@ -102,12 +102,23 @@ systemctl --user restart gitea postgres
 systemctl --user enable --now podman-auto-update.timer
 ```
 
-### Upgrade Quadlets
+### Upgrade
 
 1. Check [the blog](https://blog.gitea.com/) for any breaking changes.
 2. Update the `compose.yaml` with any needed changes
 3. [Regenerate the quadlets](#convert-compose-to-quadlet)
 4. Upload the new quadlets and restart the service
 
 ```bash
-scp -r quadlets/. gitea:~/.config/containers/systemd/
-ssh gitea systemctl --user daemon-reload
-ssh gitea systemctl --user restart gitea postgres
+# Upload quadlets and restart
+export PODMAN_SERVER=3dserver
+scp -r active/podman_gitea/quadlets/. $PODMAN_SERVER:/home/gitea/.config/containers/systemd/
+ssh $PODMAN_SERVER chown -R gitea:gitea /home/gitea/.config/containers/systemd/
+
+ssh $PODMAN_SERVER
+machinectl shell gitea@
+systemctl --user daemon-reload
+systemctl --user restart gitea postgres
 ```
 
 ### Editing Gitea Config
````
````diff
@@ -5,7 +5,7 @@ Requires=postgres.service
 AutoUpdate=registry
 ContainerName=gitea
 Environment=GITEA__database__DB_TYPE=postgres GITEA__database__HOST=postgres:5432 GITEA__database__NAME=gitea GITEA__database__USER=gitea GITEA__database__PASSWD=gitea
-Image=docker.gitea.com/gitea:1.24-rootless
+Image=docker.gitea.com/gitea:1.25-rootless
 Network=gitea.network
 PublishPort=3000:3000
 PublishPort=2222:2222
````
````diff
@@ -1,5 +1,5 @@
 #
-# WARNING: To install Immich, follow our guide: https://immich.app/docs/install/docker-compose
+# WARNING: To install Immich, follow our guide: https://docs.immich.app/install/docker-compose
 #
 # Make sure to use the docker-compose.yml of the current release:
 #
@@ -10,7 +10,7 @@
 services:
   immich-server:
     container_name: immich_server
-    image: ghcr.io/immich-app/immich-server:release
+    image: ghcr.io/immich-app/immich-server:v2.3.1
     # extends:
     #   file: hwaccel.transcoding.yml
     #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
@@ -35,7 +35,7 @@ services:
     # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
     # Example tag: release-cuda
     image: ghcr.io/immich-app/immich-machine-learning:release
-    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
+    # extends: # uncomment this section for hardware acceleration - see https://docs.immich.app/features/ml-hardware-acceleration
     #   file: hwaccel.ml.yml
     #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
     volumes:
@@ -59,7 +59,7 @@ services:
 
   database:
     container_name: immich_postgres
-    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:41eacbe83eca995561fe43814fd4891e16e39632806253848efaf04d3c8a8b84
+    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
     environment:
       POSTGRES_PASSWORD: postgres
       POSTGRES_USER: postgres
````
````diff
@@ -14,6 +14,7 @@
 - [Backup immich](#backup-immich)
 - [Upgrade immich](#upgrade-immich)
   - [Upgrade Quadlets](#upgrade-quadlets)
+- [Upload Images in Bulk](#upload-images-in-bulk)
 - [Uninstall](#uninstall)
 - [Notes](#notes)
   - [SELinux](#selinux)
@@ -48,10 +49,10 @@ mkdir -p /home/immich/.config/containers/systemd
 
 ```bash
 # Pull the compose file
-wget -O active/podman_immich/compose/compose.yml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
+wget -O active/podman_immich/release-compose.yaml https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
 
 # Pull the .env file
-wget -O active/podman_immich/quadlets/.env https://github.com/immich-app/immich/releases/latest/download/example.env
+wget -O active/podman_immich/release-env https://github.com/immich-app/immich/releases/latest/download/example.env
 ```
 
 2. Edit the compose.yaml. Replace all environment variables with their correct values.
@@ -79,7 +80,7 @@ quay.io/k9withabone/podlet \
 compose /compose/compose.yaml
 
 # Copy the files to the server
-export PODMAN_SERVER=
+export PODMAN_SERVER=3dserver
 scp -r active/podman_immich/quadlets/. $PODMAN_SERVER:/home/immich/.config/containers/systemd/
 ssh $PODMAN_SERVER chown -R immich:immich /home/immich/.config/containers/systemd/
 ```
@@ -147,6 +148,24 @@ ssh immich systemctl --user daemon-reload
 ssh immich systemctl --user restart immich
 ```
 
+## Upload Images in Bulk
+
+<https://docs.immich.app/features/command-line-interface/>
+
+```bash
+# Install the CLI
+npm i -g @immich/cli
+
+# immich login [url] [key]
+immich login http://192.168.1.216:2283/api HFEJ38DNSDUEG
+
+# Check the upload
+immich upload --dry-run --recursive directory/
+
+# Upload
+immich upload --recursive directory/
+```
+
 ## Uninstall
 
 ```bash
````
@@ -1,7 +1,7 @@
[Container]
ContainerName=immich_postgres
Environment=POSTGRES_PASSWORD=postgres POSTGRES_USER=postgres POSTGRES_DB=immich POSTGRES_INITDB_ARGS=--data-checksums
-Image=ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:41eacbe83eca995561fe43814fd4891e16e39632806253848efaf04d3c8a8b84
+Image=ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
Network=immich.network
ShmSize=128mb
Volume=/home/immich/postgres:/var/lib/postgresql/data:Z
@@ -4,7 +4,7 @@ Requires=redis.service database.service
[Container]
ContainerName=immich_server
EnvironmentFile=.env
-Image=ghcr.io/immich-app/immich-server:release
+Image=ghcr.io/immich-app/immich-server:v2.3.1
Network=immich.network
PublishPort=2283:2283
Volume=/home/immich/library:/data:Z
74  active/podman_immich/release-compose.yaml  (new file)

@@ -0,0 +1,74 @@
#
# WARNING: To install Immich, follow our guide: https://docs.immich.app/install/docker-compose
#
# Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/data
      - /etc/localtime:/etc/localtime:ro
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, rocm, openvino, rknn] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://docs.immich.app/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, rocm, openvino, openvino-wsl, rknn] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - model-cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false

  redis:
    container_name: immich_redis
    image: docker.io/valkey/valkey:8@sha256:81db6d39e1bba3b3ff32bd3a1b19a6d69690f94a3954ec131277b9a26b95b3aa
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  database:
    container_name: immich_postgres
    image: ghcr.io/immich-app/postgres:14-vectorchord0.4.3-pgvectors0.2.0@sha256:bcf63357191b76a916ae5eb93464d65c07511da41e3bf7a8416db519b40b1c23
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
      # Uncomment the DB_STORAGE_TYPE: 'HDD' var if your database isn't stored on SSDs
      # DB_STORAGE_TYPE: 'HDD'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    shm_size: 128mb
    restart: always

volumes:
  model-cache:
22  active/podman_immich/release-env  (new file)

@@ -0,0 +1,22 @@
# You can find documentation for all the supported env variables at https://docs.immich.app/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=./library

# The location where your database files are stored. Network shares are not supported for the database
DB_DATA_LOCATION=./postgres

# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
# TZ=Etc/UTC

# The Immich version to use. You can pin this to a specific version like "v2.1.0"
IMMICH_VERSION=v2

# Connection secret for postgres. You should change it to a random password
# Please use only the characters `A-Za-z0-9`, without special characters or spaces
DB_PASSWORD=postgres

# The values below this line do not need to be changed
###################################################################################
DB_USERNAME=postgres
DB_DATABASE_NAME=immich
@@ -1,51 +1,59 @@
# Jellyfin

- [Jellyfin](#jellyfin)
- [Install](#install)
- [Upgrade](#upgrade)
- [Mounting Media Directory](#mounting-media-directory)

They have podman rootless instructions!

<https://jellyfin.org/docs/general/installation/container/#managing-via-systemd>

## Install

-### Create jellyfin btrfs volume
+1. Create the jellyfin user

```bash
useradd jellyfin
loginctl enable-linger $(id -u jellyfin)
systemctl --user --machine=jellyfin@.host enable podman-restart
systemctl --user --machine=jellyfin@.host enable --now podman.socket
mkdir -p /home/jellyfin/.config/containers/systemd
```

2. Update the jellyfin record in Caddy.
3. Open port 8096 in the firewall.
4. Copy the files to the server and start the service

```bash
export PODMAN_SERVER=3dserver
scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/

ssh $PODMAN_SERVER
machinectl shell jellyfin@
systemctl --user daemon-reload
systemctl --user restart jellyfin
```

## Upgrade

1. Check [the blog](https://jellyfin.org/posts) for breaking changes
2. Update the `jellyfin.container` with the new image version
3. Update quadlets and restart the service

```bash
-btrfs sub create /btrfs/jellyfin
# Upload quadlets and restart
export PODMAN_SERVER=3dserver
scp -r active/podman_jellyfin/quadlets/. $PODMAN_SERVER:/home/jellyfin/.config/containers/systemd/
ssh $PODMAN_SERVER chown -R jellyfin:jellyfin /home/jellyfin/.config/containers/systemd/

ssh $PODMAN_SERVER
machinectl shell jellyfin@
systemctl --user daemon-reload
systemctl --user restart jellyfin
```

Add /home/jellyfin mount to /etc/fstab

```bash
systemctl daemon-reload
mount -a --mkdir
```

### Create the jellyfin user

```bash
useradd jellyfin
loginctl enable-linger $(id -u jellyfin)
systemctl --user --machine=jellyfin@.host enable podman-restart
systemctl --user --machine=jellyfin@.host enable --now podman.socket
mkdir -p /home/jellyfin/.config/containers/systemd
```

### Install jellyfin

~/.config/containers/systemd/jellyfin.container

```bash
# Copy the files to the server
scp -r active/podman_jellyfin/quadlets/. jellyfin:~/.config/containers/systemd/
ssh jellyfin systemctl --user daemon-reload
ssh jellyfin systemctl --user start jellyfin
ssh jellyfin journalctl --user -xeu jellyfin
ssh jellyfin systemctl --user enable --now podman-auto-update.timer
```

Update the jellyfin record in Caddy.

Open port 8096 in the firewall.

## Mounting Media Directory

Update /etc/fstab with the smb disk details.
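A sketch of such an fstab entry; the share path, credentials file, mount point, and uid/gid here are placeholder assumptions, and the cifs options should be checked against mount.cifs(8) for your setup:

```fstab
# Hypothetical SMB share mounted where the jellyfin quadlet expects media
//nas.example.com/media  /var/media  cifs  credentials=/etc/samba/media-creds,uid=jellyfin,gid=jellyfin,ro,_netdev  0  0
```

After editing fstab, `mount -a --mkdir` (as shown earlier in this document) applies the entry.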
@@ -1,12 +1,12 @@
[Container]
-Image=docker.io/jellyfin/jellyfin:latest
+Image=docker.io/jellyfin/jellyfin:10.11.3
AutoUpdate=registry
PublishPort=8096:8096/tcp
UserNS=keep-id
SecurityLabelDisable=true
Volume=/home/jellyfin/jellyfin-config:/config:Z
Volume=/home/jellyfin/jellyfin-cache:/cache:Z
-Volume=/home/jellyfin/jellyfin-media:/media:Z
+Volume=/var/media:/media:Z
Network=jellyfin.network

[Service]
@@ -8,6 +8,7 @@
- [Check backup service logs](#check-backup-service-logs)
- [Run a Manual Backup](#run-a-manual-backup)
- [Back up and Entire System](#back-up-and-entire-system)
+- [Upgrade a Borg Repo](#upgrade-a-borg-repo)

## Install Borg

@@ -248,3 +249,7 @@ borg create \
${BORG_REPO}::$(date +"%F-%H-%M-%S") \
/
```

## Upgrade a Borg Repo

<https://borgbackup.readthedocs.io/en/stable/usage/upgrade.html>
181  active/software_clamav/clamav.md  (new file)

@@ -0,0 +1,181 @@
# Clamav

- [Clamav](#clamav)
- [Installation](#installation)
- [Notifications](#notifications)
- [Selinux](#selinux)
- [On Access Scanning](#on-access-scanning)
- [Testing](#testing)
- [Ignore Signatures](#ignore-signatures)

<https://wiki.archlinux.org/title/ClamAV>

## Installation

<https://docs.clamav.net/manual/Usage/Configuration.html#first-time-set-up>

```bash
# Install
sudo dnf install clamav clamav-freshclam clamd

##### Set up Freshclam #####

# Create freshclam's log file
sudo touch /var/log/freshclam.log
sudo chmod 600 /var/log/freshclam.log
sudo chown clamscan /var/log/freshclam.log

# Copy configuration files
sudo cp active/software_clamav/freshclam.conf /etc/freshclam.conf
sudo chown root:root /etc/freshclam.conf
sudo chmod u=rw,go=r /etc/freshclam.conf

# Update the freshclam DB
sudo freshclam
sudo systemctl enable clamav-freshclam --now

##### Set up Clamd #####

# Create clamd's log file
sudo touch /var/log/clamd.scan
sudo chmod 600 /var/log/clamd.scan
sudo chown clamscan /var/log/clamd.scan

# Copy configuration files
# NOTE: Edit scan.conf OnAccessIncludePath to point to your home dir
vim active/software_clamav/scan.conf

sudo cp active/software_clamav/scan.conf /etc/clamd.d/scan.conf
sudo chown root:root /etc/clamd.d/scan.conf
sudo chmod u=rw,go=r /etc/clamd.d/scan.conf

# Allow clamav with selinux
sudo setsebool -P antivirus_can_scan_system 1
```

Edit the `clamd@` service to limit system resources.

```bash
sudo -E systemctl edit clamd@

[Service]
Nice=18
IOSchedulingClass=idle
CPUSchedulingPolicy=idle
```

Then start the clamd service

```bash
sudo systemctl daemon-reload
sudo systemctl enable --now clamd@scan
sudo systemctl status clamd@scan
```

Scan something

```bash
sudo clamdscan -c /etc/clamd.d/scan.conf --multiscan --fdpass ~/Downloads
```

Allow your user to run scans

```bash
sudo -E usermod -aG virusgroup $USER
```

## Notifications

Create a new file called `/etc/clamav/virus-event.bash` and add the following

```bash
#!/bin/bash

PATH=/usr/bin
ALERT="Signature detected by clamav: $CLAM_VIRUSEVENT_VIRUSNAME in $CLAM_VIRUSEVENT_FILENAME"

# Send an alert to all graphical users.
for ADDRESS in /run/user/*; do
    # Skip root, they likely won't have a desktop session anyway
    if [ ${ADDRESS} != "/run/user/0" ]; then
        USERID=${ADDRESS#/run/user/}
        /usr/bin/sudo -u "#$USERID" DBUS_SESSION_BUS_ADDRESS="unix:path=$ADDRESS/bus" PATH=${PATH} \
            /usr/bin/notify-send -u critical -i dialog-warning "ClamAV Alert!" "$ALERT"
    fi
done
```

Then ensure you have `VirusEvent /etc/clamav/virus-event.bash` in your `scan.conf`.

Allow clamav to run notify-send in `/etc/sudoers.d/clamav` by adding `clamav ALL = (ALL) NOPASSWD: SETENV: /usr/bin/notify-send`.

### Selinux

Troubleshooting notification permission denied errors is tricky, but it basically involves:

1. Disable selinux hidden denies: `sudo semodule -DB`
2. Clear the selinux audit logs: `sudo rm /var/log/audit/audit.log*`
3. Set enforce to permissive: `sudo setenforce 0`
4. Try to access eicar.com with clamonacc enabled
5. Capture the audit logs in a `sudo ausearch --raw | audit2allow -m clamav-rules`
6. Set enforce to enforcing: `sudo setenforce 1`
7. Re-enable selinux hidden denies (if you want): `sudo semodule -B`

8. `sudo setsebool daemons_enable_cluster_mode on`
9. `sudo semodule -X 300 -i active/os_fedora/selinux_policies/clamav-notifysend.pp`
10. `sudo semodule -X 300 -i active/os_fedora/selinux_policies/clamav-sudo.pp`
11. `sudo semodule -X 300 -i active/os_fedora/selinux_policies/clamav-unixchkpwd.pp`

## On Access Scanning

If you want to destroy your computer you can enable on-access scanning.

My recommendation is to only enable on-access scanning for critical ingress paths, like `~/Downloads` or `~/tmp`. This will help keep system resources free while also scanning critical points on your system.

```bash
sudo -E systemctl edit clamav-clamonacc.service

[Service]
ExecStart=
ExecStart=/usr/sbin/clamonacc -F --fdpass --config-file=/etc/clamd.d/scan.conf

sudo systemctl daemon-reload
sudo systemctl enable --now clamav-clamonacc.service
```

## Testing

The `eicar` test malware allows you to test any malware scanner, as every scanner should have its signature included in its database.

1. Create a new file called `eicar.com`
2. Add the contents: `X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*`
3. Save and scan: `clamdscan --fdpass --multiscan eicar.com`
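The steps above can be scripted. The one pitfall is shell quoting: the test string contains `$`, `!`, and `\`, so it must be written inside single quotes (a sketch; the scan line assumes a running clamd and is left commented):

```shell
# Write the 68-byte EICAR test string (harmless by design) to eicar.com.
# Single quotes keep the $, !, and \ characters literal; printf '%s' adds no newline.
printf '%s' 'X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*' > eicar.com

# Then scan it (requires clamd to be running):
# clamdscan --fdpass --multiscan eicar.com
```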
If you have on access scanning enabled you can try the following

```bash
cd ~/Downloads/
wget https://secure.eicar.org/eicar.com.txt
# This should not work
cat eicar.com.txt
```

## Ignore Signatures

<https://docs.clamav.net/faq/faq-ignore.html>

```bash
# Create the ignore list
cd /var/lib/clamav
touch ignore_list.ign2
```

Then add an ignore, like `PUA.Win.Trojan.Xored-1` which is a [known false positive](https://github.com/jensyt/imurmurhash-js/issues/1).
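Adding the ignore is just appending the signature name, one per line (sketched here against a local file; on a real system the list lives at `/var/lib/clamav/ignore_list.ign2` created above):

```shell
# Append the signature name to the ignore list (one exact name per line)
echo 'PUA.Win.Trojan.Xored-1' >> ignore_list.ign2
```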
Then `systemctl restart clamd@scan`.
8  active/software_clamav/freshclam.conf  (new file)

@@ -0,0 +1,8 @@
LogFileMaxSize 100M
LogTime yes

DatabaseDirectory /var/lib/clamav
DatabaseOwner clamupdate
DatabaseMirror database.clamav.net

Bytecode yes
62  active/software_clamav/scan.conf  (new file)

@@ -0,0 +1,62 @@
# ClamAV will refuse to scan files above 2G regardless of what this is set to
MaxFileSize 2G
# MaxScanSize controls how much of an archive is unpacked
MaxScanSize 64G

LogFileMaxSize 50M
LogTime yes
LogSyslog yes

ExtendedDetectionInfo yes

LocalSocket /var/run/clamd.scan/clamd.socket
LocalSocketGroup virusgroup
LocalSocketMode 660
FixStaleSocket yes

MaxThreads 8
MaxDirectoryRecursion 20

User clamscan

Bytecode yes
HeuristicAlerts yes
DetectPUA yes
ScanPE yes
ScanELF yes
ScanMail yes
ScanHTML yes
ScanOLE2 yes

AlertBrokenExecutables no
AlertBrokenMedia no
AlertEncrypted no
AlertEncryptedArchive no
AlertEncryptedDoc no
AlertOLE2Macros no
AlertPartitionIntersection no
AlertExceedsMax no

ScanPDF yes
ScanSWF yes
ScanXMLDOCS yes
ScanHWP3 yes
ScanArchive yes

# These are just examples, add what you think should be protected.
OnAccessIncludePath /home/ducoterra/Downloads
OnAccessIncludePath /home/ducoterra/Projects
OnAccessIncludePath /home/ducoterra/Applications
OnAccessIncludePath /home/ducoterra/AUR

# Prevention doesn't work with OnAccessMountPath.
# It works with OnAccessIncludePath, as long as /usr and /etc are not included.
# Including /var while activating prevention is also not recommended, because
# this would slow down package installation by a factor of 1000.
OnAccessPrevention yes

OnAccessExcludeUname clamupdate
OnAccessExcludeUname clamscan
OnAccessExtraScanning yes

VirusEvent /etc/clamav/virus-event.bash
33  active/software_firewalld/firewalld.md  (new file)

@@ -0,0 +1,33 @@
# Firewalld

## Notes

```bash
# Add a port
firewall-cmd --permanent --add-port=22/tcp

# List active zones
firewall-cmd --get-active-zones

# Set default zone
firewall-cmd --set-default-zone=drop

# Set zone for a specific subnet
firewall-cmd --permanent --zone=drop --add-source=10.244.0.0/16

# Get info about service
firewall-cmd --info-service=samba

# Get zone information
firewall-cmd --info-zone=drop
```

## Inspecting Zones

```bash
# List all active rules
firewall-cmd --list-all

# Log all denies
firewall-cmd --set-log-denied=all
```
@@ -4,6 +4,7 @@
- [Notes](#notes)
- [Quickstart Debugging Setup](#quickstart-debugging-setup)
- [Quickstart Production Setup](#quickstart-production-setup)
+- [NFS](#nfs)
- [Tips](#tips)
- [Adding a user](#adding-a-user)
- [Adding a Smart Card Certificate](#adding-a-smart-card-certificate)

@@ -100,41 +101,99 @@ Now skip to [Get PIV Working](#piv)

<https://www.freeipa.org/page/Quick_Start_Guide>

-- Set your hostname to your server's fqdn with `hostnamectl hostname freeipa.reeselink.com`
-- Ensure you have a DNS entry pointing to your host
-- Open ports:
+1. Set your hostname to your server's fqdn with `hostnamectl hostname freeipa.reeselink.com`
+2. If you want freeipa to manage your DNS, make sure you don't have a DNS address pointing to your domain
+3. Open freeipa ports

-```bash
-firewall-cmd --add-service=freeipa-ldap --add-service=freeipa-ldaps --permanent
-firewall-cmd --reload
-```
+```bash
+firewall-cmd --add-service=freeipa-4
+firewall-cmd --add-service=freeipa-4 --permanent
+firewall-cmd --reload
+```

-- Set a permanent DNS resolver: `sudo echo "nameserver 1.1.1.1" > /etc/resolv.conf`
-- Disable NetworkManager DNS management
+4. Set a permanent DNS resolver

-```bash
-vim /etc/NetworkManager/NetworkManager.conf
+```bash
+rm /etc/resolv.conf
+echo "nameserver 1.1.1.1" > /etc/resolv.conf
+```

-[main]
-dns=none
-```
+5. Disable NetworkManager DNS management

-Note, if installing for local use only, set /etc/hosts and reply "yes" to configure dns.
+```bash
+vim /etc/NetworkManager/NetworkManager.conf

-```bash
-vim /etc/hosts
+[main]
+dns=none
+```

-192.168.122.100 freeipa.reeselink.com
-```

-- Restart NetworkManager: `systemctl restart NetworkManager`
-- Ensure resolv.conf hasn't been repopulated: `cat /etc/resolv.conf`
-- Install freeipa: `dnf install -y freeipa-server freeipa-server-dns`
-- Install the server (mostly choose defaults and sane options): `ipa-server-install`
-- Authenticate as admin: `kinit admin`
+6. Reboot
+7. Ensure resolv.conf hasn't been repopulated: `cat /etc/resolv.conf`
+8. Install freeipa: `dnf install -y freeipa-server freeipa-server-dns`
+9. Install the server (mostly choose defaults and sane options): `ipa-server-install`
+10. Authenticate as admin: `kinit admin`

Now skip to [Get PIV Working](#piv)

## NFS

<https://www.techrepublic.com/article/kerberos-authentication-with-nfsv4/>

```bash
mkdir /exports
chmod 1777 /exports
mount --bind /srv /exports/srv
```

What this does is remount /srv to /exports/srv. Effectively, this means that /srv can be accessed directly, or via /exports/srv, and changes in one location reflect in the other.

To make this persistent, add the mount command above to /etc/rc.d/rc.local or some similar executed-at-boot script.
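An alternative to the rc.local approach is a bind entry in `/etc/fstab`; this is a sketch, so verify the option syntax against your distro's fstab(5):

```fstab
# Bind-mount /srv under the NFS export root at boot
/srv    /exports/srv    none    bind    0 0
```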
Next, edit `/etc/sysconfig/nfs` and enable the SECURE_NFS option:

```conf
SECURE_NFS="yes"
```

Then edit `/etc/exports`. The “/exports” entry has the “fsid=0” option, which tells NFS that this is the “root” export.

```fstab
/exports  gss/krb5(rw,sync,fsid=0,insecure,no_subtree_check)

/exports/srv  gss/krb5(rw,sync,nohide,insecure,no_subtree_check)
```

Create the server principal for the NFS server and add it to the keytab file on the server using kadmin (usually /etc/krb5.keytab):

```bash
kadmin.local

kadmin.local: addprinc -randkey nfs/nfsserver.domain.com
kadmin.local: ktadd -e des-cbc-crc:normal -k /etc/krb5.keytab nfs/nfsserver.domain.com
kadmin.local: quit
```

Edit `/etc/idmapd.conf` and make sure the Nobody-User and Nobody-Group options are correct (i.e. on Red Hat Enterprise Linux and Fedora, use the “nfsnobody” user and group, other distributions may just use “nobody”)

```bash
systemctl restart nfs rpcidmapd
```

Open TCP port 2049 for use with NFSv4

```bash
firewall-cmd --add-service=nfs
firewall-cmd --add-service=nfs --permanent
firewall-cmd --reload
```

## Tips

```bash
@@ -4,6 +4,7 @@
- [Searching for GPG Keys](#searching-for-gpg-keys)
- [Importing GPG Keys](#importing-gpg-keys)
- [Generate GPG Keys](#generate-gpg-keys)
+- [Change Key Password](#change-key-password)
- [Renewing GPG Keys](#renewing-gpg-keys)
- [Export GPG Keys](#export-gpg-keys)
- [GPG Key Servers](#gpg-key-servers)

@@ -11,6 +12,7 @@
- [Using GPG keys](#using-gpg-keys)
- [Signing Files](#signing-files)
- [Encrypting Files](#encrypting-files)
+- [Yubikey](#yubikey)
- [Linux Apps](#linux-apps)
- [Evolution Email](#evolution-email)
- [Android Apps](#android-apps)

@@ -62,6 +64,18 @@ gpg --list-secret-keys
gpg --edit-key <id>
```

## Change Key Password

```bash
# You can see all the --edit-key options with `man gpg` and search for '--edit-key'
# You can also type "?" to see help
gpg --edit-key 7FC1B29700114F4FC589E7065FDDCFA544D77B8C

> passwd

> quit
```

## Renewing GPG Keys

You should set an expiration for your keys. You can extend that expiration (or

@@ -174,6 +188,10 @@ gpg -o README.md.gpg --symmetric README.md
gpg --decrypt README.md.gpg
```

## Yubikey

See [Yubikey Notes](/active/device_yubikey/yubikey.md#gpg)

## Linux Apps

### Evolution Email
193
active/software_k0s/k0s.md
Normal file
193
active/software_k0s/k0s.md
Normal file
@@ -0,0 +1,193 @@
|
||||
# K0s
|
||||
|
||||
- [K0s](#k0s)
|
||||
- [Install Single Node Cluster](#install-single-node-cluster)
|
||||
- [Install Multi Node Cluster](#install-multi-node-cluster)
|
||||
- [Uninstall](#uninstall)
|
||||
- [Install Metallb](#install-metallb)
|
||||
- [Uninstall Metallb](#uninstall-metallb)
|
||||
- [Install OpenEBS](#install-openebs)
|
||||
|
||||
## Install Single Node Cluster
|
||||
|
||||
<https://docs.k0sproject.io/v0.11.0/k0s-single-node/>
|
||||
|
||||
```bash
|
||||
# Trust traffic on podCIDR and serviceCIDR subnets
|
||||
firewall-cmd --permanent --zone=trusted \
|
||||
--add-source=10.244.0.0/16 \
|
||||
--add-source=10.96.0.0/12
|
||||
|
||||
# Set default zone to drop packets
|
||||
firewall-cmd --set-default-zone=drop
|
||||
|
||||
# Allow k0s ports
|
||||
firewall-cmd --permanent --zone=drop \
|
||||
--add-port=22/tcp \
|
||||
--add-port=6443/tcp \
|
||||
--add-port=179/tcp \
|
||||
--add-port=4789/udp \
|
||||
--add-port=10250/tcp \
|
||||
--add-port=9443/tcp \
|
||||
--add-port=8132/tcp \
|
||||
--add-port=112/tcp
|
||||
|
||||
# Apply firewall
|
||||
firewall-cmd --reload
|
||||
|
||||
# Install k0s cli
|
||||
curl -sSLf https://get.k0s.sh | sudo sh
|
||||
|
||||
# Setup the config
|
||||
k0s config create > k0s.yaml
|
||||
|
||||
# Install single node cluster controller/node
|
||||
k0s install controller -c k0s.yaml --enable-worker --no-taints
|
||||
|
||||
# Start and enable the service
|
||||
systemctl enable --now k0scontroller
|
||||
|
||||
# Enable bash completion
|
||||
echo 'source <(k0s completion bash)' >>~/.bashrc
|
||||
source ~/.bashrc
|
||||
|
||||
# Make an admin user
|
||||
mkdir ~/.kube
|
||||
k0s kubeconfig create --groups "system:m asters" admin > ~/.kube/config
|
||||
|
||||
# Remove the taint that prevents scheduling on the controller
|
||||
kubectl edit node
|
||||
```
|
||||
|
||||
## Install Multi Node Cluster
|
||||
|
||||
<https://docs.k0sproject.io/v0.11.0/k0s-multi-node/>
|
||||
|
||||
Install the controller on the controller machine
|
||||
|
||||
```bash
|
||||
# Set default zone to drop packets
|
||||
firewall-cmd --set-default-zone=drop
|
||||
|
||||
# Allow k0s ports
|
||||
firewall-cmd --permanent --zone=drop \
|
||||
--add-port=22/tcp \
|
||||
--add-port=6443/tcp \
|
||||
--add-port=179/tcp \
|
||||
--add-port=4789/udp \
|
||||
--add-port=10250/tcp \
|
||||
--add-port=9443/tcp \
|
||||
--add-port=8132/tcp \
|
||||
--add-port=112/tcp
|
||||
|
||||
# Apply firewall
|
||||
firewall-cmd --reload
|
||||
|
||||
# Install k0s cli
|
||||
curl -sSLf https://get.k0s.sh | sudo sh
|
||||
|
||||
# Save default config
|
||||
k0s config create > k0s.yaml
|
||||
|
||||
# Install the controller
|
||||
k0s install controller
|
||||
|
||||
# Enable the controller
|
||||
systemctl enable --now k0scontroller
|
||||
|
||||
# Enable bash completion
|
||||
echo 'source <(k0s completion bash)' >>~/.bashrc
|
||||
source ~/.bashrc
|
||||
|
||||
# Make an admin user (scp ~/.kube/config to your operator machine)
|
||||
# kubectl config set-context --current --namespace kube-system
|
||||
mkdir ~/.kube
|
||||
k0s kubeconfig create --groups "system:masters" admin > ~/.kube/config
|
||||
|
||||
# Generate a worker join token
|
||||
k0s token create --role=worker > worker0-token
|
||||
```
|
||||
|
||||
Now on the worker machine, install the worker
|
||||
|
||||
```bash
|
||||
# Trust traffic on podCIDR and serviceCIDR subnets
|
||||
firewall-cmd --permanent --zone=trusted \
|
||||
--add-source=10.244.0.0/16 \
|
||||
--add-source=10.96.0.0/12
|
||||
|
||||
# Apply firewall
|
||||
firewall-cmd --reload
|
||||
|
||||
# On the operator, copy the token file from the controller to the worker
|
||||
scp vm-k0s-controller:worker0-token vm-k0s-worker:token-file
|
||||
|
||||
# Install k0s cli
|
||||
curl -sSLf https://get.k0s.sh | sudo sh
|
||||
|
||||
# Join the worker
|
||||
k0s install worker --token-file token-file
|
||||
|
||||
# Start the service
|
||||
systemctl enable --now k0sworker
|
||||
|
||||
# Enable bash completion
|
||||
echo 'source <(k0s completion bash)' >>~/.bashrc
|
||||
source ~/.bashrc
|
||||
```
|
||||
|
||||
## Uninstall
|
||||
|
||||
```bash
|
||||
systemctl stop k0scontroller
|
||||
k0s reset
|
||||
reboot
|
||||
```
|
||||
|
||||
## Install Metallb

<https://docs.k0sproject.io/v1.26.0+k0s.0/examples/metallb-loadbalancer/>

1. Create a VLAN with a dedicated subnet for Metallb. Disable DHCP.
2. Attach the new VLAN to your worker nodes.
3. Assign each worker node an address within the new subnet.
4. Install Metallb. Check `active/software_k0s/metallb-address-pool.yaml` before proceeding.

```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml

kubectl apply -f active/software_k0s/metallb-address-pool.yaml
```

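To check that the pool works, create a `LoadBalancer` Service and watch it pick up an address. A minimal sketch; the Service name and selector here are hypothetical, not from this repo:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-test            # hypothetical name
spec:
  type: LoadBalancer
  selector:
    app: lb-test           # assumes a matching deployment exists
  ports:
    - port: 80
```

After applying it, `kubectl get svc lb-test` should show an `EXTERNAL-IP` from the range defined in `metallb-address-pool.yaml`.
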
### Uninstall Metallb

```bash
kubectl delete -f active/software_k0s/metallb-address-pool.yaml

kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.15.2/config/manifests/metallb-native.yaml
```

## Install OpenEBS

<https://docs.k0sproject.io/stable/examples/openebs/>

Add the openebs extension to the k0s configuration:

```yaml
extensions:
  helm:
    repositories:
      - name: openebs-internal
        url: https://openebs.github.io/charts
    charts:
      - name: openebs
        chartname: openebs-internal/openebs
        version: "3.9.0"
        namespace: openebs
        order: 1
        values: |
          localprovisioner:
            hostpathClass:
              enabled: true
              isDefaultClass: false
```

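Once the chart is up, the local provisioner exposes a hostpath StorageClass. A minimal PVC sketch; the PVC name is made up, and `openebs-hostpath` is the assumed default class name for this chart, so verify it with `kubectl get sc` first:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-hostpath-pvc             # hypothetical name
spec:
  storageClassName: openebs-hostpath   # assumed default class name; check `kubectl get sc`
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
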
16 active/software_k0s/metallb-address-pool.yaml Normal file
@@ -0,0 +1,16 @@

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.123.100-192.168.123.254

---

apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system

54 active/software_osbuild/fedora-42-base.toml Normal file
@@ -0,0 +1,54 @@

name = "fedora-42-base"
description = "Fedora 42 Base Installation"
version = "0.0.1"
distro = "fedora-42"
modules = []
groups = []

[customizations]
hostname = "f42-base"

[[customizations.disk.partitions]]
type = "btrfs"
minsize = "32 GiB"

[[customizations.disk.partitions.subvolumes]]
name = "root"
mountpoint = "/"

[[customizations.disk.partitions.subvolumes]]
name = "home"
mountpoint = "/home"

[customizations.timezone]
timezone = "America/New_York"

[[customizations.sshkey]]
user = "root"
key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net"

[customizations.firewall.services]
enabled = ["ssh"]

[customizations.services]
enabled = ["sshd"]

[[packages]]
name = "bash-completion"
version = "*"

[[packages]]
name = "tmux"
version = "*"

[[packages]]
name = "openssh-server"
version = "*"

[[packages]]
name = "vim"
version = "*"

[[packages]]
name = "git"
version = "*"

77 active/software_osbuild/fedora-43-base.toml Normal file
@@ -0,0 +1,77 @@

name = "fedora-43-base"
description = "Fedora 43 Base Installation"
version = "0.0.1"
distro = "fedora-43"
modules = []
groups = []

[customizations]
hostname = "f43-base"

[[customizations.disk.partitions]]
type = "btrfs"
minsize = "32 GiB"

[[customizations.disk.partitions.subvolumes]]
name = "root"
mountpoint = "/"

[[customizations.disk.partitions.subvolumes]]
name = "home"
mountpoint = "/home"

[customizations.timezone]
timezone = "America/New_York"

[[customizations.user]]
name = "ducoterra"
password = "$6$QqOw6ktp6aiPy5kX$cpN.oar4CiofH0PpxyveJgkjsRFGnZ5ykOX/50DcJyU3hZFxc5R3SASemNW6m3jceLGgZrQHyALQl8SgtcNO90"
key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net"
home = "/home/ducoterra/"
shell = "/usr/bin/bash"
groups = ["wheel"]
uid = 1000

[customizations.firewall.services]
enabled = ["ssh"]

[customizations.services]
enabled = ["sshd"]

[[packages]]
name = "bash-completion"
version = "*"

[[packages]]
name = "tmux"
version = "*"

[[packages]]
name = "openssh-server"
version = "*"

[[packages]]
name = "vim"
version = "*"

[[packages]]
name = "git"
version = "*"

[[customizations.files]]
path = "/root/.inputrc"
mode = "0644"
user = "root"
group = "root"
data = """
"\\C-h": backward-kill-word
"""

[[customizations.files]]
path = "/home/ducoterra/.inputrc"
mode = "0644"
user = "root"
group = "root"
data = """
"\\C-h": backward-kill-word
"""

94 active/software_osbuild/image_builder.md Normal file
@@ -0,0 +1,94 @@

# Image Builder

Builds Fedora/RHEL/CentOS images, like Packer but for the Red Hat ecosystem.

<https://osbuild.org/docs/user-guide/blueprint-reference/>

Default credentials for the included images are:

username: `ducoterra`

password: `osbuild`

## Warning

From the [Red Hat
Documentation](https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/composing_installing_and_managing_rhel_for_edge_images/index):

> You cannot build an operating system image that differs from the RHEL image
> builder host. For example, you cannot use a RHEL system to build Fedora or
> CentOS images.

## Installing

```bash
# Install the packages
dnf install -y osbuild-composer composer-cli

# Start/enable the service
systemctl enable --now osbuild-composer.socket

# Optional: add your user to the weldr group
sudo usermod -aG weldr $USER

# Optional: cockpit dependency
dnf install -y cockpit-composer
```

## Building Images

1. Create a toml file describing your image

See `fedora-42-base.toml` for an example.

2. Push the toml to composer

```bash
composer-cli blueprints push active/software_osbuild/fedora-42-base.toml

# List blueprints
composer-cli blueprints list
```

3. Generate the image

```bash
# List image types
composer-cli compose types

# Build the image
composer-cli compose start fedora-42-base qcow2

# Check status
watch composer-cli compose status

# Download logs on error
cd /tmp && composer-cli compose logs f91a12b6-01fd-4f94-91cc-9d5fb68b8129

# Delete failed images
composer-cli compose list failed -j | jq '.[].body.failed.[]?.id' | xargs -I '%' composer-cli compose delete '%'

# Delete successful images
composer-cli compose list finished -j | jq '.[].body.finished.[]?.id' | xargs -I '%' composer-cli compose delete '%'
```

4. Run the image

```bash
# List your images
composer-cli compose list finished

# Download the image
composer-cli compose image --filename /var/lib/libvirt/images/fedora-42-base.qcow2 image-uuid

# Test with qemu
qemu-kvm --name test-fedora-42-base -m 4096 -hda ~/Downloads/fedora-42-base.qcow2
```

### Image Build and Watch One Liner

```bash
composer-cli blueprints push active/software_osbuild/fedora-43-base.toml && \
composer-cli compose start fedora-43-base qcow2 && \
watch composer-cli compose status
```

5 active/software_packer/packer.md Normal file
@@ -0,0 +1,5 @@

# Packer

## Qemu Builds

<https://github.com/goffinet/packer-kvm?tab=readme-ov-file>

@@ -2,10 +2,11 @@

- [QEMU](#qemu)
- [QCOW2](#qcow2)
- [Restore qcow snapshots](#restore-qcow-snapshots)
- [QCOW2 Snapshots](#qcow2-snapshots)
- [Convert qcow to bootable drive](#convert-qcow-to-bootable-drive)
- [Convert bootable drive to qcow](#convert-bootable-drive-to-qcow)
- [ISO](#iso)
- [ISO Access Denied on Boot](#iso-access-denied-on-boot)

## QCOW2

@@ -16,7 +17,7 @@ VDI (VirtualBox) vdi

VHD (Hyper-V) vpc
VMDK (VMware) vmdk

### Restore qcow snapshots
### QCOW2 Snapshots

```bash
# Create a snapshot

@@ -51,4 +52,8 @@ with gparted.

```bash
pacman -S cdrtools
mkisofs -r -iso-level 4 -l -o /tmp/arch-files.iso ./arch
```

## ISO Access Denied on Boot

Turn off Secure Boot in the VM. Bios -> Device Manager -> Secure Boot Configuration -> Uncheck Attempt Secure Boot

162 active/software_selinux/selinux.md Normal file
@@ -0,0 +1,162 @@

# Selinux

<https://thecybersecguru.com/tutorials/selinux-ultimate-guide/>

## Terminology

Subjects vs Objects:

- A Subject is a process (e.g., the Apache httpd process).
- An Object is the thing a subject wants to access (e.g., a file, a directory,
  a network port, a system socket).

Labels (Contexts):

- Every single Subject and every single Object on an SELinux system has a
  label. This label is called an SELinux Context.
- A process's label is called a domain.
- A file's label is called a type.

The Policy:

- The SELinux policy is just a massive database of rules that says which
  domains can access which types.
- A rule might say: "Allow the domain `httpd_t` (the Apache process) to read
  files with the type `httpd_sys_content_t` (web content)."

Type Enforcement (TE):

- This is the name of the engine that enforces these rules. When a Subject
  (process) tries to access an Object (file), the Linux kernel's hook for
  SELinux checks their labels.
- It looks up the rule in the policy.
- If a rule `allow httpd_t httpd_sys_content_t:file { read };` exists, the
  access is granted.
- If no "allow" rule exists, the access is implicitly denied, and an event is
  logged to /var/log/audit/audit.log.

An SELinux context is a string with four parts, separated by colons: `user:role:type:level`

- User: An SELinux user (e.g., system_u, unconfined_u). This is not the same as
  your Linux user (root, john). It's an identity within the policy.
- Role: Used in Role-Based Access Control (RBAC) (e.g., system_r, object_r).
- Level: Used in Multi-Level Security (MLS) and Multi-Category Security (MCS)
  (e.g., s0, or s0:c0,c1). This is what keeps containers (Docker, Podman)
  separate from each other.
- For 99% of all system administration, you can COMPLETELY IGNORE the user, role, and level.

The only part that matters for day-to-day troubleshooting is the type. This is
the real label.

- For a process (Subject), the type is its domain.
- For a file (Object), the type is its type.

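The type field can be pulled out of any context string with standard shell tools; a quick sketch using a made-up context string:

```shell
# Hypothetical context string (any `ls -Z` output has this shape)
ctx="system_u:object_r:httpd_sys_content_t:s0"

# The type is the third colon-separated field
echo "$ctx" | cut -d: -f3
# → httpd_sys_content_t
```

This prints the only field you normally need to compare against a denial.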
## Using Selinux

```bash
# List labels
# The type is the third item in the four colon-separated values
ls -lZ /var
ls -lZ /var/log
ls -lZ .

# List processes
ps -eZ | grep -i auditd
ps -eZ | grep -i clam
ps -eZ | grep -i grep
```

### Using Selinux Booleans

When you see an "SELinux denied" error you have two choices:

1. (Wrong) Write a custom policy module to allow it.
2. (Right) Check if there's an "off" switch for this rule.

This "off" switch is called an SELinux Boolean.

Booleans are on/off toggles for common policy rules. They are the first thing
you should check when you have a denial.

Your workflow should look like:

1. Get a denial.
2. Run `getsebool -a | grep service_name`.
3. Read the list of booleans and find one that sounds like the action being denied.
4. Temporarily flip it with `setsebool`.
5. Test. If it works, make it permanent with `setsebool -P`.

```bash
# Get all booleans
getsebool -a

# Find one that's relevant
getsebool -a | grep -i clam

# Flip a boolean on temporarily
setsebool httpd_can_network_connect_db on

# Flip a boolean on permanently
setsebool -P httpd_can_network_connect_db on
```

### Using Selinux with Audit Log

If the boolean approach doesn't work, mine the audit log instead.

```bash
# Make sure you have the troubleshooting package installed
sudo dnf install -y setroubleshoot-server

# List all alerts
sudo sealert -a /var/log/audit/audit.log

# If sealert doesn't have a recommendation, like restorecon, then use audit2why
# This will tell you what you should do to fix the issue
sudo grep "denied" /var/log/audit/audit.log | tail -n 1 | audit2why

# Check what rule would be created
sudo grep "antivirus_t" /var/log/audit/audit.log | audit2allow -m clamav-notifysend

# Create the .te (human readable) policy and compile it into a .pp package
sudo grep "antivirus_t" /var/log/audit/audit.log | audit2allow -M clamav-notifysend

# Apply (load) the policy
sudo semodule -X 300 -i clamav-notifysend.pp

# List active policies
sudo semodule -l

# Remove an active policy
sudo semodule -r clamav-notifysend
```

### Showing Dontaudit Rules

SELinux hides denials that are suppressed by `dontaudit` rules. To see them,
disable those rules temporarily.

```bash
# Disable dontaudit rules (show the suppressed denials)
semodule -DB

# Rebuild with dontaudit rules (hide them again)
semodule -B
```

## Compiling Modules

If you have a `te` file you can compile it to a `pp` file with the following:

```bash
# Compile to module source, then to policy package
checkmodule -M -m -o sample.mod sample.te
semodule_package -o sample.pp -m sample.mod
```

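For reference, the `.te` file that `audit2allow -M` generates (and that `checkmodule` consumes) is plain text along these lines. This is a hand-written sketch with made-up types, not real audit output:

```
module clamav-notifysend 1.0;

require {
    type antivirus_t;
    type user_home_t;
    class file { read open };
}

#============= antivirus_t ==============
allow antivirus_t user_home_t:file { read open };
```
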
## Cloning a Virtual Machine

Cloning a VM can trip SELinux denials from `rpc-virtstorage`; generate and load
a local policy module from the audit log:

```bash
cd active/os_fedora/selinux_policies
sudo ausearch -c 'rpc-virtstorage' --raw | audit2allow -M my-rpcvirtstorage
sudo semodule -X 300 -i my-rpcvirtstorage.pp
```

10 active/software_steam/steam.md Normal file
@@ -0,0 +1,10 @@

# Steam

## Save Game Data

Steam flatpak games are saved at
`~/.var/app/com.valvesoftware.Steam/.local/share/Steam/steamapps/compatdata`

Search for any game here to get save location details.

<https://www.pcgamingwiki.com/wiki/Home>

1 active/software_virsh/cloud-init/meta-data Normal file
@@ -0,0 +1 @@

local-hostname: cloud-init-vm

10 active/software_virsh/cloud-init/user-data Normal file
@@ -0,0 +1,10 @@

#cloud-config
#vim:syntax=yaml

users:
  - default
  - name: ducoterra
    lock_passwd: false
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net

11 active/software_virsh/dual-stack-dhcp.xml Normal file
@@ -0,0 +1,11 @@

<network>
  <name>dual-stack</name>
  <forward mode="nat"/>
  <domain name="dual-stack"/>
  <ip address="192.168.123.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.123.2" end="192.168.123.99"/>
    </dhcp>
  </ip>
  <ip family="ipv6" address="fd4d:58e7:17f6:1::1" prefix="64"/>
</network>

7 active/software_virsh/dual-stack-no-dhcp.xml Normal file
@@ -0,0 +1,7 @@

<network>
  <name>dual-stack-no-dhcp</name>
  <forward mode="nat"/>
  <domain name="dual-stack-no-dhcp"/>
  <ip address="192.168.123.1" netmask="255.255.255.0"/>
  <ip family="ipv6" address="fd4d:58e7:17f6:1::1" prefix="64"/>
</network>

353 active/software_virsh/virsh.md Normal file
@@ -0,0 +1,353 @@

# Virsh

Virtual Machine Management

- [Virsh](#virsh)
  - [Before you Begin](#before-you-begin)
  - [Connecting to External Servers via SSH](#connecting-to-external-servers-via-ssh)
    - [Configuring Aliases](#configuring-aliases)
    - [One-off Connections](#one-off-connections)
  - [Useful Virsh Commands](#useful-virsh-commands)
  - [Virsh Networking](#virsh-networking)
    - [Create a Virtual Network](#create-a-virtual-network)
    - [Attach a New Virtual Network](#attach-a-new-virtual-network)
    - [Detach a Virtual Network](#detach-a-virtual-network)
    - [Destroy a Virtual Network](#destroy-a-virtual-network)
    - [Set a Static IP](#set-a-static-ip)
  - [Creating VMs](#creating-vms)
    - [Create VM with No Graphics and use an Existing QCOW2 Disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk)
      - [Create a Cloud Init Compatible VM](#create-a-cloud-init-compatible-vm)
    - [Create VM with Graphics using an ISO Installation Disk](#create-vm-with-graphics-using-an-iso-installation-disk)
    - [Create VM using Host Device as Disk](#create-vm-using-host-device-as-disk)
  - [Snapshots](#snapshots)
  - [Virt Builder](#virt-builder)

## Before you Begin

1. Add yourself to the `qemu` and `libvirt` groups: `usermod -aG libvirt,qemu ducoterra`
2. Change the images ownership to qemu: `chown -R qemu:qemu /var/lib/libvirt/images`
3. Change the iso ownership to qemu: `chown -R qemu:qemu /var/lib/libvirt/iso`
4. Allow group write access to images: `chmod 770 /var/lib/libvirt/images`
5. Allow group write access to iso: `chmod 770 /var/lib/libvirt/iso`
6. Tell virsh to connect to your root system rather than your user: `export LIBVIRT_DEFAULT_URI='qemu:///system'`
7. Export your editor so virsh knows what to use: `export EDITOR=vim`

## Connecting to External Servers via SSH

<https://libvirt.org/uri.html#ssh-transport>

### Configuring Aliases

1. Edit `~/.config/libvirt/libvirt.conf`
2. Add your aliases

```bash
uri_aliases = [
  "3dserver=qemu+ssh://3dserver/system",
]
```

3. Export the alias: `export LIBVIRT_DEFAULT_URI=3dserver`

### One-off Connections

```bash
export LIBVIRT_DEFAULT_URI='qemu+ssh://user@server/system'
```

## Useful Virsh Commands

```bash
# Show node info
virsh nodeinfo

# List OS variants
osinfo-query os

# List all current machines
virsh list --all

# Connect to a VM console
virsh console fedora42-test

# Connect to a graphical VM
virt-viewer --wait fedora42-test

# Get leased IP addresses for the default network
virsh net-dhcp-leases default

# Reboot a VM
virsh reboot <domain>

# Shutdown a VM
virsh shutdown <domain>

# Force shutdown a VM
virsh destroy <domain>

# Remove a VM
virsh undefine --nvram <domain>

# Remove a VM including storage
virsh undefine <domain> --nvram --remove-all-storage
```

## Virsh Networking

### Create a Virtual Network

Creating a new network requires an XML configuration file. To see the
default network's configuration, use

```bash
virsh net-dumpxml default > virbr0.xml
```

To create a dual-stack network, use the following. (Note, I generated a unique
local ipv6 address [here](https://www.unique-local-ipv6.com/)).

```xml
<network>
  <name>dual-stack</name>
  <forward mode="nat"/>
  <domain name="dual-stack"/>
  <ip address="192.168.100.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.100.2" end="192.168.100.254"/>
    </dhcp>
  </ip>
  <ip family="ipv6" address="fd4d:58e7:17f6:1::1" prefix="64"/>
</network>
```

I've already defined this network in `active/software_virsh/dual-stack-dhcp.xml`. Install it with

```bash
# Define and autostart the network
# Note: the network name comes from <name> in the XML (dual-stack), not the filename
virsh net-define active/software_virsh/dual-stack-dhcp.xml
virsh net-start dual-stack
virsh net-autostart dual-stack

# List networks to ensure it was created
virsh net-list --all

# Get the UUID of the created network
virsh net-uuid dual-stack
```

### Attach a New Virtual Network

```bash
export VM_NAME=my_vm
virsh attach-interface \
  --type bridge \
  --source virbr1 \
  --model virtio \
  --config \
  --live \
  --domain ${VM_NAME}
```

### Detach a Virtual Network

```bash
# List the MAC addresses of connected interfaces
export VM_NAME=my_vm
virsh domiflist --domain $VM_NAME
virsh detach-interface --domain $VM_NAME --type bridge --mac "52:54:00:f6:b9:83" --live
```

### Destroy a Virtual Network

```bash
export NETWORK_NAME=mynetwork
virsh net-destroy --network $NETWORK_NAME
virsh net-undefine --network $NETWORK_NAME
```

### Set a Static IP

To set a static IP, either run `virsh net-edit default` and add a `<host>`
entry between `<dhcp>` and `</dhcp>`, or use `virsh net-update`:

```bash
# Add a host
virsh net-update default add-last ip-dhcp-host \
  '<host mac="52:54:00:6f:78:f3" ip="192.168.122.222"/>' \
  --live --config --parent-index 0

# Modify a host
virsh net-update default modify ip-dhcp-host \
  '<host mac="52:54:00:6f:78:f3" ip="192.168.122.222"/>' \
  --live --config --parent-index 0

# Delete a host
virsh net-update default delete ip-dhcp-host \
  '<host mac="52:54:00:6f:78:f3" ip="192.168.122.222"/>' \
  --live --config --parent-index 0
```

## Creating VMs

If you have [an osbuild image](/active/software_osbuild/image_builder.md#installing)
you can run the following to generate a qcow2 disk image. Then you can
[create a VM with an existing qcow2 disk](#create-vm-with-no-graphics-and-use-an-existing-qcow2-disk)
and skip the installation process altogether.

```bash
sudo systemctl start osbuild-composer.socket
composer-cli compose list
export IMAGE_UUID=
export VM_DISK_PATH=/var/lib/libvirt/images/fedora43-test.qcow2
composer-cli compose image --filename ${VM_DISK_PATH} ${IMAGE_UUID}
```

### Create VM with No Graphics and use an Existing QCOW2 Disk

```bash
# Start the default network if it isn't already
virsh net-start --network default

export VM_NAME="fedora43-test"
export VM_DISK_PATH=/var/lib/libvirt/images/fedora43-test.qcow2

# OPTIONAL: export your qcow2 disk now if using osbuild
export IMAGE_UUID=
composer-cli compose image --filename ${VM_DISK_PATH} ${IMAGE_UUID}

# Install
# `--location /path/to/image.iso` supplies a disk installer. (Remove `--import`)
# `--import` skips the installation process.
# `--graphics spice --video qxl,model.ram=131072,model.vram=131072,model.vgamem=131072 --channel spicevmc` installs graphics
# `--console pty,target.type=virtio` adds a console connection
# For any option, use `virt-install --arg=?` to see all available sub-options
virt-install \
  --name "${VM_NAME}" \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics none \
  --console pty,target.type=virtio \
  --import --disk "path=${VM_DISK_PATH},bus=virtio"
```

#### Create a Cloud Init Compatible VM

<https://cloudinit.readthedocs.io/en/latest/reference/examples.html>

```bash
# Fedora
# https://fedoraproject.org/cloud/download
export VM_NAME="cloud-init-test-fedora"
export VM_DISK_PATH=/var/lib/libvirt/images/Fedora-Cloud-Base-Generic-43-1.6.x86_64.qcow2

# Rocky
# https://rockylinux.org/download
export VM_NAME="cloud-init-test-rocky"
export VM_DISK_PATH=/var/lib/libvirt/images/Rocky-10-GenericCloud-Base.latest.x86_64.qcow2

# Ubuntu
# https://cloud-images.ubuntu.com/noble/current/
export VM_NAME="cloud-init-test-ubuntu"
export VM_DISK_PATH=/var/lib/libvirt/images/noble-server-cloudimg-amd64.img

# Debian
# https://cloud.debian.org/images/cloud/trixie/20251117-2299/
export VM_NAME="cloud-init-test-debian"
export VM_DISK_PATH=/var/lib/libvirt/images/debian-13-generic-amd64-20251117-2299.qcow2

# Set --cloud-init disable=no to allow cloud-init to run again after first boot
virt-install \
  --name "${VM_NAME}" \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics none \
  --import --disk "path=${VM_DISK_PATH},bus=virtio" \
  --cloud-init disable=yes,user-data=active/software_virsh/cloud-init/user-data,meta-data=active/software_virsh/cloud-init/meta-data
```

### Create VM with Graphics using an ISO Installation Disk

```bash
# `--cdrom /path/to/image.iso` supplies a disk installer. (Remove `--import`)
# `--import` skips the installation process.
# `--graphics spice --video virtio --channel spicevmc` installs graphics
# `--console pty,target.type=virtio` adds a console connection
# For any option, use `virt-install --arg=?` to see all available sub-options
export VM_NAME="fedora43-kinoite-test"
export VM_ISO_PATH=/var/lib/libvirt/iso/fedora43.iso
export VM_DISK_PATH=/var/lib/libvirt/images/fedora43.qcow2
virt-install \
  --name "${VM_NAME}" \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics spice --video virtio --channel spicevmc \
  --cdrom ${VM_ISO_PATH} \
  --disk "path=${VM_DISK_PATH},size=64,bus=virtio,format=qcow2"
```

### Create VM using Host Device as Disk

```bash
# `--cdrom /path/to/image.iso` supplies a disk installer. (Remove `--import`)
# `--import` skips the installation process.
# `--graphics spice --video qxl --channel spicevmc` installs graphics
# `--console pty,target.type=virtio` adds a console connection
# `--hostdev 0x1234:0x5678` passes through a USB host device by vendor:product ID
# For any option, use `virt-install --arg=?` to see all available sub-options
export VM_NAME="usb-linux"
virt-install \
  --name "${VM_NAME}" \
  --boot uefi,firmware.feature0.name=secure-boot,firmware.feature0.enabled=no \
  --import \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics spice --video qxl --channel spicevmc \
  --hostdev 0x13fe:0x6500,boot.order=1 \
  --disk none
```

## Snapshots

See [qemu qcow2 snapshots](/active/software_qemu/qemu.md#qcow2-snapshots)

## Virt Builder

<https://docs.fedoraproject.org/en-US/fedora-server/virtualization/vm-install-diskimg-virtbuilder/#_minimal_effort_customization>

You can use virt-builder to build VM images:

```bash
export VM_NAME=fedora42-vb
export VM_DISK_PATH=/var/lib/libvirt/images/fedora42-vb.qcow2

# Build the image
virt-builder fedora-42 \
  --format qcow2 --output ${VM_DISK_PATH} \
  --root-password locked:disabled \
  --hostname ${VM_NAME} \
  --selinux-relabel \
  --firstboot-command 'useradd -m -G wheel -p "" ducoterra ; chage -d 0 ducoterra'

# Run the built image
virt-install \
  --name "${VM_NAME}" \
  --cpu host-passthrough --vcpus sockets=1,cores=8,threads=2 \
  --ram=8192 \
  --os-variant=fedora41 \
  --network bridge:virbr0 \
  --graphics none \
  --console pty,target.type=virtio \
  --import --disk "path=${VM_DISK_PATH},bus=virtio"
```

1 keys/ssh_id_ed25519.pub Normal file
@@ -0,0 +1 @@

ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGQa781Qj8mNlUdRquFFqg0O2ornG9SBHe705y4+1vPI ssh@ducoterra.net

1 keys/ssh_id_rsa.pub Normal file
@@ -0,0 +1 @@

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC3X3g/2eCucB0mamJzoePvsRz7Lp0DFD6gXzsPO1H2JamidYr14xGTihOsD0vk1TgK/KfQ1UzREsVm6/UIXjF7ZHfCAX5ObiF5Y2mbb139hqddQJ9Z4XLqIdYm/K7sW1Ba0FGgxeuC7POzIT4r3giMysxcwHg8fNQUS629bpR65eiQoum5KfNWXO7IhJ1zzGn2vVn7wIUxP6c75iP7sXY8ADjK5x08+kGldkNyJSTq6y+3I2WweaNg7gEz0frZvNgTXoOKcKOR4u61DIAPjkKmB41vVc8uRbhKYfUS6gW278ab/UwO3M9o/fL7xOOhB2lzlucg4gP+AeSFNNe2Q3tXLEcSUKDq+gxo27loWyZpSZxce4AcXiK+85iAFX99cl/mYoT0aaZTfBkTrwl8qWP/QjYcBBKail5WX2ccdOxeSlaEw9fbO3PtXVkp4XbkobQSQ/8/XHNGWIVGGKfiqFQGA0MpYNka3NKsaJffOhLT1h8lofAck+t1Zoh+WIZEgR08oFbRxe9x7C39l59LKNoqGjfgv/SOTYQP+B4edzfgl0V1ExoiSObH+5AniWz/XFC32pwyX2EpZ0pTVwQeiX/21AmPydlHd7hkcF0jcMU0nyJZjSC57YC2nyXX89VYhrKIbULgWYDLGFHf4qBhR4eV+4g4NldT7FL+mSk9YrZypw== ssh@ducoterra.net

@@ -8,15 +8,15 @@ installing the OS onto.
- [Foobar](#foobar)
  - [Installation](#installation)
  - [Network](#network)
    - [Firewalld Configuration](#firewalld-configuration)
    - [Firewall](#firewall)
    - [Setup SSH](#setup-ssh)
    - [Fail2Ban](#fail2ban)
    - [Set eui64 on network interface](#set-eui64-on-network-interface)
    - [Enable IPv6](#enable-ipv6)
    - [Set up Network Bridge](#set-up-network-bridge)
  - [Storage](#storage)
    - [BTRFS Parent Volumes](#btrfs-parent-volumes)
    - [BTRFS Snapshots](#btrfs-snapshots)
    - [BTRFS Maintenance](#btrfs-maintenance)
    - [Volumes](#volumes)
    - [Snapshots](#snapshots)
    - [Maintenance](#maintenance)
    - [Backups](#backups)
    - [TPM2 Luks Decryption](#tpm2-luks-decryption)
  - [Users](#users)
    - [Change your password](#change-your-password)

@@ -25,16 +25,8 @@ installing the OS onto.
  - [Monitoring](#monitoring)
    - [Disk Usage](#disk-usage)
    - [Disk Wear](#disk-wear)
  - [Backups](#backups)
    - [Downgrading Kernel](#downgrading-kernel)
  - [Apps](#apps)
    - [Package Manager](#package-manager)
    - [Install and Enable Cockpit](#install-and-enable-cockpit)
    - [Install and Enable Virtualization](#install-and-enable-virtualization)
    - [Install and Enable Containers](#install-and-enable-containers)
  - [Troubleshooting](#troubleshooting)
    - [Disable Swap](#disable-swap)
    - [Disable Selinux](#disable-selinux)

## Installation

@@ -57,355 +49,85 @@ installing the OS onto.

## Network

### Firewalld Configuration
### Firewall

Set the default firewalld zone to `public`.
Allow SSH.

```bash
# Note: you probably don't have to do this. Check Cockpit Network -> Firewall
# firewall-cmd --set-default-zone=public
```

Firewalld will be on and blocking by default. You can check the zone and allowed ports with:

```bash
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-services
```

Allow Cockpit with:

```bash
firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload
# Your firewall command here
```

### Setup SSH

See [README](/README.md#ssh-key-generation)

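The linked key generation can be sketched roughly as follows. This is a sketch, not the README's exact steps; the filenames and comment string are assumptions based on the `keys/` files above.

```shell
# Generate an ed25519 keypair (filenames and comment are illustrative;
# the repository's actual procedure is in the README)
ssh-keygen -t ed25519 -N "" -f ./ssh_id_ed25519 -C "ssh@ducoterra.net"
# Inspect the public half, which is what lands in keys/
cat ./ssh_id_ed25519.pub
```

Afterwards, `ssh-copy-id -i ./ssh_id_ed25519.pub user@host` (or Cockpit's Accounts page) installs the key on the server.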
### Fail2Ban

On the server:
Install SSH

```bash
# Run tmux session
tmux

dnf install -y fail2ban

# Setup initial rules
cat <<EOF > /etc/fail2ban/jail.local
# Jail configuration additions for local installation

# Adjust the default configuration's default values
[DEFAULT]
# Optionally enter a trusted IP never to ban
# ignoreip = 2600:1700:1e6c:a81f::0/64
bantime = 6600
backend = auto

# The main configuration file defines all services but
# deactivates them by default. We have to activate those needed
[sshd]
enabled = true
EOF

systemctl enable fail2ban --now

# OPTIONAL: follow logs
tail -f /var/log/fail2ban.log
# Steps to install SSH server
```

Checking, banning, unbanning:
### Enable IPv6

```bash
# See banned clients
fail2ban-client banned
# See jails (sshd should be one of them)
fail2ban-client status
# Unban a client from the sshd jail
fail2ban-client set sshd unbanip <IP address>
```

### Set eui64 on network interface

Ensures a consistent MAC-based IPv6 address.

```bash
nmcli connection modify Wired\ connection\ 1 ipv6.addr-gen-mode eui64
```

1. Disable privacy
2. Enable eui64 addressing

### Set up Network Bridge

Networking -> Add bridge -> add network interface and save

```bash
nmcli connection modify bridge0 ipv6.addr-gen-mode eui64
```

Create a network bridge for VMs.

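For intuition, the interface identifier that `addr-gen-mode eui64` produces can be derived by hand: split the MAC address in half, insert `ff:fe` in the middle, and flip the universal/local bit of the first octet. A sketch with a made-up MAC:

```shell
# Derive the EUI-64 interface identifier from a (made-up) MAC address
mac="52:54:00:12:34:56"
oldIFS=$IFS; IFS=:; set -- $mac; IFS=$oldIFS
a=$1 b=$2 c=$3 d=$4 e=$5 f=$6
# Flip bit 1 (universal/local) of the first octet
a=$(printf '%02x' $(( 0x$a ^ 0x02 )))
# Insert ff:fe between the two MAC halves
echo "${a}${b}:${c}ff:fe${d}:${e}${f}"   # → 5054:00ff:fe12:3456
```

Because the identifier is derived from the (stable) MAC, the resulting IPv6 address survives reboots, unlike privacy/stable-privacy modes.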
## Storage

### BTRFS Parent Volumes
### Volumes

In `/etc/fstab`, add the parent volumes for your disks mounted with subvolid=5 at `/btrfs` so you can see all subvolumes.
Create volumes for `/home`, `/var`, `/var/log`, `/tmp`, etc.

```conf
UUID=64beedac-c0c9-48bf-a3ae-7707df6ebc97 /btrfs/3dserver-root btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=3c76b83f-7547-4c18-b08f-9e7902022b8d /btrfs/3dserver-data btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
```

### Snapshots

```bash
systemctl daemon-reload
mount -a --mkdir
```

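Per-subvolume mounts then reuse the same UUID with `subvol=` instead of `subvolid=5`. A sketch; the subvolume names and mountpoints here are placeholders:

```conf
UUID=64beedac-c0c9-48bf-a3ae-7707df6ebc97 /home btrfs subvol=home,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=64beedac-c0c9-48bf-a3ae-7707df6ebc97 /var btrfs subvol=var,compress=zstd:1,x-systemd.device-timeout=0 0 0
```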
If supported, set up automated snapshots to keep a history of your data.

### BTRFS Snapshots
### Maintenance

<https://en.opensuse.org/openSUSE:Snapper_Tutorial>
If supported, set up scrub and check jobs to ensure data integrity.

<http://snapper.io/manpages/snapper-configs.html>
### Backups

We'll be using snapper, a tool for automating and controlling snapshot behavior.

```bash
dnf install snapper dnf-plugin-snapper

# Allow selinux management
semanage permissive -a snapperd_t

# Note: if you mess something up, you can run `snapper -c root delete-config` to delete it.
# System configs are stored in /etc/sysconfig/snapper as well as /etc/snapper
snapper -c root create-config /
snapper -c data create-config /path/to/other/data

# Enable automatic snapshots
systemctl enable --now snapper-timeline.timer
# Enable automatic cleanup
systemctl enable --now snapper-cleanup.timer
# Enable snapshots on boot
systemctl enable --now snapper-boot.timer

# List snapshots
snapper -c root list
# Create a snapshot manually
snapper -c root create --description "test snapshot"
# Delete the first snapshot
snapper -c root delete 1
```

Note: you probably don't want to keep yearly snapshots.
Edit `/etc/snapper/configs/root` and change `TIMELINE_LIMIT_YEARLY=` to `0`.

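The yearly-limit edit can also be scripted. A minimal sketch, run here against a stand-in copy of the config rather than the real `/etc/snapper/configs/root` so it is safe to try anywhere:

```shell
# Stand-in for /etc/snapper/configs/root (contents abbreviated for demonstration)
cfg=./root.snapper.conf
printf 'TIMELINE_CREATE="yes"\nTIMELINE_LIMIT_YEARLY="10"\n' > "$cfg"
# Disable yearly snapshots by forcing the limit to 0
sed -i 's/^TIMELINE_LIMIT_YEARLY=.*/TIMELINE_LIMIT_YEARLY="0"/' "$cfg"
grep TIMELINE_LIMIT_YEARLY "$cfg"   # → TIMELINE_LIMIT_YEARLY="0"
```

On the server, point `cfg` at the real config instead.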
### BTRFS Maintenance

```bash
# Start a scrub in the foreground (-B) at /
btrfs scrub start -B /
```

Set up regular backups via a supported tool like Borg.

### TPM2 Luks Decryption

Mostly taken from here:
<https://gist.github.com/jdoss/777e8b52c8d88eb87467935769c98a95>

PCR reference for `--tpm2-pcrs` args

```text
0: System firmware executable
2: Kernel
4: Bootloader
7: Secure boot state
8: Cmdline
9: Initrd
```

Note: if your threat model is people trying to get data off your old disks after you throw them away, you can set `--tpm2-pcrs=""`. Someone with physical access to your machine could reach your encrypted partition by manipulating the boot parameters, but you're guaranteed to unlock despite updates and upgrades.

Basic commands:

```bash
# Run tmux session
tmux

# Show tpm2 devices
systemd-cryptenroll --tpm2-device=list
# Show crypto LUKS block devices
blkid -t TYPE=crypto_LUKS

# Enroll the tpm2 device with systemd-cryptenroll
systemd-cryptenroll /dev/nvme0n1p3 --tpm2-device=auto --tpm2-pcrs=""

####################
##### OPTIONAL #####
####################
# If you have lots of devices to decrypt (like a btrfs raid array), use these commands.
# Get all crypto LUKS partitions
blkid | grep crypto_LUKS
# List them all space-separated and drop the '/dev'
LUKS_DEVS="nvme0n1p4 nvme1n1p1 nvme2n1p1 nvme3n1p1 nvme5n1p1 nvme4n1p1 nvme6n1p1"

# Check that your list is good
for dev in $LUKS_DEVS; do echo will enroll /dev/$dev; done

# Enroll
for dev in $LUKS_DEVS; do
    echo "Enrolling /dev/$dev"
    systemd-cryptenroll /dev/$dev --tpm2-device=auto --tpm2-pcrs=""
done
########################
##### END OPTIONAL #####
########################

# Add the tpm2-tss module to the initramfs
echo "add_dracutmodules+=\" tpm2-tss \"" | tee /etc/dracut.conf.d/tpm2.conf
dracut -f
```

Finally, `vim /etc/default/grub` and add `rd.luks.options=tpm2-device=auto` to `GRUB_CMDLINE_LINUX`.

```bash
# Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# Cross your fingers that you don't have to type the password in manually.
# Yes, 60 full seconds is too long. Go type your password in.
```

If you need to reenroll for some reason:

```bash
# Reenroll
systemd-cryptenroll /dev/nvme0n1p3 --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=""
```

If you want automatic decryption via TPM2, set it up here.

## Users

### Change your password

In Cockpit, navigate to Accounts -> user -> Set password.
Change the default user's password.

## Updates

Describe what needs to be updated regularly and how to update it.

### Automatic Updates

In Cockpit, navigate to Software updates -> Automatic updates -> Install -> Security updates only.
Set up automatic updates or security patches.

## Monitoring

In Cockpit: Overview -> View metrics and history -> Install PCP Support -> Metrics settings -> Turn on Collect Metrics.
Set up a monitoring service for resources or uptime.

### Disk Usage

TODO
Set up disk usage monitoring.

### Disk Wear

TODO

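One way to fill in the disk-usage TODO is a threshold check over `df -P` output. A sketch; `check_usage` is a made-up helper name, and the sample input below stands in for real `df -P` output:

```shell
# Print a warning for any filesystem above the given capacity threshold.
# On the server you would pipe real `df -P` into check_usage instead.
check_usage() {
  awk -v t="$1" 'NR > 1 { gsub("%", "", $5); if ($5 + 0 > t) print "WARN: " $6 " at " $5 "%" }'
}

# Sample df -P output for demonstration
printf '%s\n' \
  'Filesystem 1024-blocks Used Available Capacity Mounted-on' \
  '/dev/sda1 1000 950 50 95% /' \
  '/dev/sdb1 1000 100 900 10% /data' | check_usage 90
```

With the sample input this prints `WARN: / at 95%`; dropped into a systemd timer, it could feed email or metrics alerts.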
## Backups

See [borg.md](/active/systemd_borg/borg.md)

### Downgrading Kernel

```bash
dnf install koji

# Note: the format is kernel-version.fedora-version
cd $(mktemp -d) && koji download-build --arch=x86_64 --arch=noarch kernel-6.11.3-300.fc41 && dnf install ./*

reboot
```

Set up physical disk wear monitoring for SSDs if applicable.

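The `kernel-version.fedora-version` format noted in the downgrade command above can be illustrated by assembling the koji build name (NVR) from its parts; the version values here are just examples:

```shell
# Compose the koji build NVR: kernel-<version>-<release>.<fedora tag>
kver="6.11.3"
krel="300"
ftag="fc41"
nvr="kernel-${kver}-${krel}.${ftag}"
echo "$nvr"   # → kernel-6.11.3-300.fc41
```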
## Apps

### Package Manager
Document any recommended apps here.

Configure dnf to use the fastest mirror:

```bash
echo 'fastestmirror=1' >> /etc/dnf/dnf.conf
dnf clean all
dnf update --refresh -y
# libdnf5 is required for ansible to work
dnf install -y glances tmux vim python3-libdnf5
```

### Install and Enable Cockpit

<https://cockpit-project.org/running>

```bash
dnf install cockpit
systemctl enable --now cockpit.socket
firewall-cmd --add-service=cockpit
firewall-cmd --add-service=cockpit --permanent
```

## Install and Enable Virtualization

Don't forget to add a btrfs subvolume for `/var/lib/libvirt`:

```bash
# Since we already created our /btrfs mountpoint, this volume will show up automatically
# at /btrfs/libvirt
btrfs sub create /btrfs/libvirt
```

Now create an fstab entry that mounts the volume at `/var/lib/libvirt`:

```conf
UUID=... /var/lib/libvirt btrfs subvol=libvirt,compress=zstd:1,x-systemd.device-timeout=0 0 0
```

Mount the libvirt volume:

```bash
systemctl daemon-reload
mount -a --mkdir
# Check that the mount was successful. This will print something if our mount worked.
mount | grep -i /var/lib/libvirt
```

Create a snapshot schedule for libvirt:

```bash
snapper -c libvirt create-config /var/lib/libvirt
# Don't forget to edit "YEARLY" at /etc/snapper/configs/libvirt
```

Install and enable the virtualization service:

```bash
dnf group install --with-optional virtualization
systemctl enable --now libvirtd
```

Install the cockpit machines application.

## Install and Enable Containers
If your package manager requires specific configuration, put it here.

## Troubleshooting

### Disable Swap

```bash
swapoff -a
zramctl --reset /dev/zram0
dnf -y remove zram-generator-defaults
```

### Disable Selinux

By default selinux will be enforcing. You can set it to permissive with:

```bash
setenforce 0
```

Then make it permanent by editing `/etc/selinux/config` and setting `SELINUX=permissive`.
Any troubleshooting recommendations for common issues.

@@ -11,7 +11,7 @@

## Setup foobar Project

1. Copy and rename this folder to active/systemd_foobar
1. Copy and rename this folder to active/software_foobar
2. Find and replace foobar with the name of the service
3. Write the foobar.service spec
4. (OPTIONAL) Write the foobar.timer spec

@@ -33,16 +33,16 @@
ansible-playbook \
    -i ansible/inventory.yaml \
    -l podman \
    active/systemd_foobar/install_foobar.yaml \
    -e "@active/systemd_foobar/vars.yaml" \
    -e "@active/systemd_foobar/secrets/vars.yaml"
    active/software_foobar/install_foobar.yaml \
    -e "@active/software_foobar/vars.yaml" \
    -e "@active/software_foobar/secrets/vars.yaml"
```

## Upgrade foobar

## Backup foobar

Follow the [Borg Backup instructions](/active/systemd_borg/borg.md#set-up-a-client-for-backup)
Follow the [Borg Backup instructions](/active/software_borg/borg.md#set-up-a-client-for-backup)

## Restore foobar