Fedora Server
Note these instructions differentiate between an operator and a server. The operator can be
any machine that configures the server: a pipeline, a laptop, a dedicated server, etc. are all options.
The server can be its own operator, though that's not recommended since servers should be ephemeral
and the operator will store information about each server.
Installation
- Configure network first
- Set a hostname
- Disable ipv6 privacy extensions
- Software Selection
- Headless Management
- User Creation
- Set a simple password, we'll change it later
- Disk partitioning
- Select manual (blivet) partitioning
- Create a 1GB EFI system partition and mount it at /boot/efi
- Create a 1GB ext4 partition and mount it at /boot
- Create a btrfs volume with the remaining space and name it something unique; do not mount it
- Create a btrfs subvolume called "root" and mount it at /
- Create any other btrfs subvolumes you might need
- Take note of the ipv4 and ipv6 address. Update any DNS records at this time.
- Install and reboot
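The manual blivet steps above can also be expressed as a kickstart fragment for unattended installs. This is a sketch only: the `fedora` label and the sizes mirror the steps above, but verify the `btrfs` kickstart syntax against your Fedora release before relying on it.

```
part /boot/efi --fstype=efi --size=1024
part /boot --fstype=ext4 --size=1024
part btrfs.01 --fstype=btrfs --grow
btrfs none --label=fedora btrfs.01
btrfs / --subvol --name=root LABEL=fedora
```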
Setup SSH
In this setup we'll allow ssh to the root user via key and keep the admin user for cockpit.
On the operator:
export SSH_HOST=kube
ssh-keygen -C ${USER}@${HOSTNAME} -f ~/.ssh/id_${SSH_HOST}_rsa
# Note: If you get "too many authentication failures" it's likely because you have too many private
# keys in your ~/.ssh directory. Use `-o PubkeyAuthentication=no` (as below) to work around it.
ssh-copy-id -o PubkeyAuthentication=no -i ~/.ssh/id_${SSH_HOST}_rsa.pub ducoterra@${SSH_HOST}.reeselink.com
ssh -i ~/.ssh/id_${SSH_HOST}_rsa ducoterra@${SSH_HOST}.reeselink.com
# Copy authorized_keys to root
sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys
exit
cat <<EOF >> ~/.ssh/config
Host ${SSH_HOST}
Hostname ${SSH_HOST}.reeselink.com
User root
Port 22
TCPKeepAlive yes
IdentityFile ~/.ssh/id_${SSH_HOST}_rsa
EOF
ssh ${SSH_HOST}
# Disable password auth
echo "PasswordAuthentication no" > /etc/ssh/sshd_config.d/01-prohibit-password.conf
systemctl restart sshd
# OPTIONAL: Disable sudo password
echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/01-nopasswd-wheel
exit
# Confirm that password authentication is now refused
ssh -o PubkeyAuthentication=no ducoterra@${SSH_HOST}.reeselink.com
# Test that you can log into the server with ssh config
ssh $SSH_HOST
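If you want to sanity-check the heredoc expansion before appending to your real ~/.ssh/config, you can render the stanza to a temp file first. A minimal sketch (`kube` is just the example host from above):

```shell
# Render the ssh_config stanza to a temp file and check that
# the variables expanded rather than landing literally
SSH_HOST=kube
cfg=$(mktemp)
cat <<EOF > "$cfg"
Host ${SSH_HOST}
  Hostname ${SSH_HOST}.reeselink.com
  User root
  IdentityFile ~/.ssh/id_${SSH_HOST}_rsa
EOF
hostname_line=$(grep -c "Hostname kube.reeselink.com" "$cfg")
echo "$hostname_line"   # → 1
rm -f "$cfg"
```

An unquoted heredoc delimiter (`EOF`, not `'EOF'`) is what makes `${SSH_HOST}` expand; the `~` in IdentityFile is left literal on purpose, since ssh expands it itself.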
DNF
Configure dnf to use the fastest mirror:
echo 'fastestmirror=1' >> /etc/dnf/dnf.conf
dnf clean all
dnf update --refresh -y
# libdnf5 is required for ansible to work
dnf install -y glances tmux vim python3-libdnf5
Fail2Ban
On the server:
# Run tmux session
tmux
dnf install -y fail2ban
# Setup initial rules
cat <<EOF > /etc/fail2ban/jail.local
# Jail configuration additions for local installation
# Adjust the default configuration's default values
[DEFAULT]
# Optionally enter a trusted IP that should never be banned
# ignoreip = 2600:1700:1e6c:a81f::0/64
bantime = 6600
backend = auto
# The main configuration file defines all services but
# deactivates them by default. We have to activate the ones we need.
[sshd]
enabled = true
EOF
systemctl enable fail2ban --now
# OPTIONAL: follow logs
tail -f /var/log/fail2ban.log
Checking, banning, unbanning
# See banned clients
fail2ban-client banned
# See jails (sshd should be one of them)
fail2ban-client status
# Unban a client from the sshd jail
fail2ban-client set sshd unbanip <IP address>
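If you run more than one jail, the jail list from `status` can be parsed to act on every jail at once. A sketch that extracts the list from sample `fail2ban-client status` output (inlined here, since the real command needs a running fail2ban) and prints the matching unban commands; the jail names and IP are placeholders:

```shell
# Extract the comma-separated jail list from sample status output
jails=$(sed -n 's/.*Jail list:[[:space:]]*//p' <<'EOF'
Status
|- Number of jail:      2
`- Jail list:   sshd, nginx-http-auth
EOF
)
# Print (not run) an unban command per jail; 203.0.113.7 is a documentation IP
for jail in $(echo "$jails" | tr ',' ' '); do
  echo "fail2ban-client set $jail unbanip 203.0.113.7"
done
```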
BTRFS Parent Volumes
In /etc/fstab, add the parent volumes for your disks mounted with subvolid=5 at /btrfs so you can see
all subvolumes.
UUID=64beedac-c0c9-48bf-a3ae-7707df6ebc97 /btrfs/3dserver-root btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
UUID=3c76b83f-7547-4c18-b08f-9e7902022b8d /btrfs/3dserver-data btrfs subvolid=5,compress=zstd:1,x-systemd.device-timeout=0 0 0
systemctl daemon-reload
mount -a --mkdir
BTRFS Snapshots
https://en.opensuse.org/openSUSE:Snapper_Tutorial
http://snapper.io/manpages/snapper-configs.html
We'll be using snapper, a tool for automating and controlling snapshot behavior.
dnf install snapper dnf-plugin-snapper
# Allow selinux management
semanage permissive -a snapperd_t
# Note: if you mess something up, run `snapper -c root delete-config` to remove the config
# System configs are stored in /etc/sysconfig/snapper as well as /etc/snapper
snapper -c root create-config /
snapper -c data create-config /path/to/other/data
# Enable automatic snapshots
systemctl enable --now snapper-timeline.timer
# Enable automatic cleanup
systemctl enable --now snapper-cleanup.timer
# Enable snapshots on boot
systemctl enable --now snapper-boot.timer
# List snapshots
snapper -c root list
# Create snapshot manually
snapper -c root create --description "test snapshot"
# Delete first snapshot
snapper -c root delete 1
Note - you probably don't want to keep yearly snapshots.
Edit /etc/snapper/configs/root and change TIMELINE_LIMIT_YEARLY= to 0.
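That edit can be scripted with sed. A sketch against a stand-in config file, so it's safe to run anywhere (the real file lives at /etc/snapper/configs/root):

```shell
# Work on a throwaway copy of the snapper config
cfg=$(mktemp)
printf 'TIMELINE_LIMIT_MONTHLY="10"\nTIMELINE_LIMIT_YEARLY="10"\n' > "$cfg"
# Drop yearly snapshots; leave everything else alone
sed -i 's/^TIMELINE_LIMIT_YEARLY=.*/TIMELINE_LIMIT_YEARLY="0"/' "$cfg"
yearly=$(grep '^TIMELINE_LIMIT_YEARLY' "$cfg")
echo "$yearly"   # → TIMELINE_LIMIT_YEARLY="0"
rm -f "$cfg"
```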
TPM2 Luks Decryption
Mostly taken from here: https://gist.github.com/jdoss/777e8b52c8d88eb87467935769c98a95
PCR reference for --tpm2-pcrs args:
- 0: System firmware executable
- 2: Kernel
- 4: Bootloader
- 7: Secure boot state
- 8: Cmdline
- 9: Initrd
Note: if your threat model is someone pulling data off your old disks after you throw them away,
you can set --tpm2-pcrs="". Someone with physical access to the machine could still reach your
encrypted partition by manipulating the boot parameters, but you're guaranteed to unlock despite
updates and upgrades.
Basic commands:
# Run tmux session
tmux
# Show tpm2 devices
systemd-cryptenroll --tpm2-device=list
# Show crypto luks block devices
blkid -t TYPE=crypto_LUKS
# Enroll the tpm2 device with systemd-cryptenroll
systemd-cryptenroll /dev/nvme0n1p3 --tpm2-device=auto --tpm2-pcrs=""
####################
##### OPTIONAL #####
####################
# If you have lots of devices to decrypt (like a btrfs raid array), use these commands.
# Get all crypto luks partitions
blkid | grep crypto_LUKS
# List them all space-separated and drop the '/dev'
LUKS_DEVS="nvme0n1p4 nvme1n1p1 nvme2n1p1 nvme3n1p1 nvme5n1p1 nvme4n1p1 nvme6n1p1"
# Check that your list is good
for dev in $LUKS_DEVS; do echo will enroll /dev/$dev; done
# Enroll
for dev in $LUKS_DEVS; do \
echo "Enrolling /dev/$dev"; \
systemd-cryptenroll /dev/$dev --tpm2-device=auto --tpm2-pcrs=""; \
done
########################
##### END OPTIONAL #####
########################
# Append to command line args
echo "add_dracutmodules+=\" tpm2-tss \"" | tee /etc/dracut.conf.d/tpm2.conf
dracut -f
Finally, vim /etc/default/grub and add rd.luks.options=tpm2-device=auto to GRUB_CMDLINE_LINUX
# Update Grub
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
# Cross your fingers that you don't have to go type in the password manually.
# Yes, 60 full seconds is too long. Go type your password in.
If you need to reenroll for some reason:
# Reenroll
systemd-cryptenroll /dev/nvme0n1p3 --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=""
Change your password
In Cockpit navigate to Accounts -> user -> Set password
Automatic Updates
In Cockpit navigate to software updates -> automatic updates -> install -> security updates only
Monitoring
In Cockpit: Overview -> View metrics and history -> Install PCP Support -> Metrics settings -> Turn on Collect Metrics
Disk Usage
TODO
Disk Wear
TODO
Common Storage Mounts
Note: mount these before you install the relevant package!
- For virtual machines: /var/lib/libvirt
- For podman: /var/lib/containers
- For docker: /var/lib/docker
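Each of these follows the same subvolume-plus-fstab pattern shown for libvirt in the Virtualization section, so a small loop can print the commands and fstab lines for review before you run them. A sketch; the UUID is a placeholder for your data volume's:

```shell
# Print (not execute) the setup for each common storage mount
uuid="REPLACE-WITH-YOUR-UUID"
plan=$(for mount in /var/lib/libvirt /var/lib/containers /var/lib/docker; do
  name=$(basename "$mount")
  echo "btrfs sub create /btrfs/$name"
  echo "UUID=$uuid $mount btrfs subvol=$name,compress=zstd:1,x-systemd.device-timeout=0 0 0"
done)
echo "$plan"
```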
Network Bridge
Networking -> Add bridge -> add network interface and save
nmcli connection modify bridge0 ipv6.addr-gen-mode eui64
Virtualization
Don't forget to add a btrfs subvolume for /var/lib/libvirt
# Since we already created our /btrfs mountpoint, this volume will show up automatically
# at /btrfs/libvirt
btrfs sub create /btrfs/libvirt
Now create an fstab entry that mounts the volume at /var/lib/libvirt
UUID=... /var/lib/libvirt btrfs subvol=libvirt,compress=zstd:1,x-systemd.device-timeout=0 0 0
Mount the libvirt volume:
systemctl daemon-reload
mount -a --mkdir
# Check that the mount was successful. This will print something if our mount worked.
mount | grep -i /var/lib/libvirt
Create a snapshot schedule for libvirt.
snapper -c libvirt create-config /var/lib/libvirt
# Don't forget to edit "YEARLY" at /etc/snapper/configs/libvirt
Install and enable the virtualization service.
dnf group install --with-optional virtualization
systemctl enable --now libvirtd
Install the cockpit machines application (dnf install cockpit-machines).
Troubleshooting
# Oops, I did this after I installed virtualization
rsync -av /var/lib/libvirt/ /btrfs/libvirt/
rm -rf /var/lib/libvirt
# Find the path to your btrfs volume
lsblk
mkdir /var/lib/libvirt
mount -o subvol=libvirt /dev/mapper/luks-... /var/lib/libvirt
QEMU Images
# Grow an image to 2TB
qemu-img resize nextcloud_aio-fcfgp.qcow2 2T
# Convert OVA to img
qemu-img convert -f vmdk -O raw in.vmdk out.img
# Convert qcow2 to img
qemu-img convert -f qcow2 -O raw in.qcow2 out.img
Firewalld
Set the default firewalld zone to public
# Note, you probably don't have to do this. Check Cockpit Network -> Firewall
# firewall-cmd --set-default-zone=public
Firewalld will be on and blocking by default. You can check the zone and allowed ports with:
firewall-cmd --zone=public --list-ports
firewall-cmd --zone=public --list-services
Allow Cockpit with
firewall-cmd --permanent --zone=public --add-port=9090/tcp
firewall-cmd --reload
Backups
Note: this assumes you've set up an iscsi backup disk
Connect to the ISCSI Backup Target
Connect to Backup Target with Cockpit
- Storage -> Hamburger menu -> Add iSCSI portal
- Type your portal address, username, and password
Connect to Backup Target with iscsiadm
# Set username and password for discovered nodes
# Optionally you can add "-T <target name>" to specify which target has the username/password
iscsiadm -m node \
-o update \
-n node.session.auth.username -v username \
-n node.session.auth.password -v password
# Set replacement_timeout to 10 minutes in case server reboots
iscsiadm -m node \
-o update \
-n node.session.timeo.replacement_timeout -v 600
systemctl restart iscsid
# Discover targets
iscsiadm -m discovery -t st -p driveripper.reeselink.com
# Login to all nodes
iscsiadm -m node -l
Format backup disk
# list disks
lsblk
# Create partition
fdisk /dev/sdx
# Format partition with btrfs
mkfs.btrfs /dev/sdx1
# Get the UUID
blkid /dev/sdx1
Update /etc/fstab with the iscsi disk details.
Note:
- x-systemd.automount only mounts the device when it's accessed
- x-systemd.mount-timeout=30 allows a 30 second timeout
- _netdev ensures the device won't be mounted until after the network is available
UUID=... /btrfs/some-name btrfs subvolid=5,compress=zstd:1,x-systemd.automount,x-systemd.mount-timeout=30,_netdev 0 0
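A quick check that an fstab line for a network disk actually carries those options, sketched in awk over the example line above (works on any fstab-formatted line):

```shell
# The fourth whitespace-separated field of an fstab line is the option list
line='UUID=... /btrfs/some-name btrfs subvolid=5,compress=zstd:1,x-systemd.automount,x-systemd.mount-timeout=30,_netdev 0 0'
missing=$(echo "$line" | awk '{
  n = split($4, opts, ",")
  for (i = 1; i <= n; i++) seen[opts[i]] = 1
  if (!("x-systemd.automount" in seen)) print "x-systemd.automount"
  if (!("_netdev" in seen)) print "_netdev"
}')
# Prints nothing if an option is missing
[ -z "$missing" ] && echo "network-mount options all present"
```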
Troubleshooting Backup ISCSI Connection
# List targets
iscsiadm -m node
# Delete node
iscsiadm -m node -o delete -T iqn.2022-01.com.reeselink:driveripper
# List discovered targets
iscsiadm -m discovery
# Delete from discovery db
iscsiadm -m discoverydb -t sendtargets -p driveripper.reeselink.com -o delete
Quick Backup
rsync -av --progress --exclude '.snapshots' /btrfs/yellow/root /btrfs/backup-yellow --dry-run
Regular Backups with Borg
See borg.md
Optional Steps
Docker with Podman as Runtime
Note, you'll need to ssh into the server as the user in order to start the user's systemd session.
sudo dnf install podman docker docker-compose
sudo loginctl enable-linger 1000 # Or whatever user
systemctl --user enable --now podman.socket
docker context create podman --docker host=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker context use podman
Vanilla Docker
https://docs.docker.com/engine/install/fedora/
dnf -y install dnf-plugins-core
dnf-3 config-manager --add-repo https://download.docker.com/linux/fedora/docker-ce.repo
dnf install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
Extra Software
# Set vim as the default editor
dnf install -y vim-default-editor --allowerasing
# Install glances for system monitoring
dnf install -y glances
# ZSH
dnf install -y zsh
# Change the shell for the current user and for the admin user
chsh -s $(which zsh) && chsh -s $(which zsh) ducoterra
Disable Swap
swapoff -a
zramctl --reset /dev/zram0
dnf -y remove zram-generator-defaults
Disable Selinux
By default selinux will be enforcing. You can set it to permissive with
setenforce 0
And then make it permanent by editing /etc/selinux/config and inserting SELINUX=permissive.
Downgrading Kernel
dnf install koji
# Note: format is kernel-version.fedora-version
cd $(mktemp -d) && koji download-build --arch=x86_64 --arch=noarch kernel-6.11.3-300.fc41 && dnf install ./*
reboot
Resize logical volume
# Replace /dev/sda2 with whatever your disks are
# This assumes xfs
pvresize /dev/sda2
lvextend /dev/mapper/root -l+100%FREE
xfs_growfs -d /dev/mapper/root
Create XFS LVM
https://www.linuxtechi.com/how-to-create-lvm-partition-in-linux/ https://www.golinuxcloud.com/lvcreate-command-in-linux/#How_to_install_lvcreate
If you get the error "Not creating system devices file due to existing VGs."
Run vgimportdevices -a and check /etc/lvm/devices/system.devices
- Create a new partition for the Physical Volume (fdisk)
# Create the physical volume
pvcreate /dev/vda4
# Create the volume group (vgcreate <vg_name> <pv>)
vgcreate nextcloud_data /dev/vda4
# Create the logical volume (lvcreate -L <Size-of-LV> -n <LV-Name> <VG-Name>)
# Or lvcreate -l 100%FREE
lvcreate -l 100%FREE -n nextcloud_data_vol nextcloud_data
# list the PV, VG, LV
pvs
vgs
lvs
# Format lv
mkfs.btrfs /dev/nextcloud_data/nextcloud_data_vol
LVM Thin Provisioning
https://linuxconfig.org/introduction-to-lvm-thin-provisioning
If you get the error "Not creating system devices file due to existing VGs."
Run vgimportdevices -a and check /etc/lvm/devices/system.devices
Thin provisioning allows you to overprovision your storage: the filesystems on top think they have more space than the underlying storage actually provides.
# Create the physical volume
pvcreate /dev/vda4
# Create the volume group
vgcreate vg0 /dev/vda4
# Create the thin pool - the volume with real space that will hold our thin volumes
lvcreate -l 100%FREE -T vg0/thinpool
# Create the thin volumes with fake data
lvcreate -T -V 2T vg0/thinpool -n local-path-provisioner
lvcreate -T -V 2T vg0/thinpool -n docker-data
# Format the fake volumes
mkfs.xfs /dev/mapper/vg0-local--path--provisioner
mkfs.xfs /dev/mapper/vg0-docker--data
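With the sizes above, the overcommit is easy to quantify: if 100%FREE gives the pool 1 TiB of real space and we carve out two 2 TiB thin volumes, the pool is overcommitted 4:1, and writes only start failing once real usage hits the 1 TiB of actual space. The arithmetic (the 1 TiB pool size here is illustrative):

```shell
pool_gib=1024              # real space backing the thin pool (1 TiB, assumed)
thin_gib=$(( 2048 * 2 ))   # two 2 TiB thin volumes, as created above
overcommit=$(( thin_gib * 100 / pool_gib ))
echo "${overcommit}%"      # → 400%
```

Worth watching `lvs` output for the pool's Data% as volumes fill; a full thin pool fails writes on every volume in it.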
Set eui64 on network interface
nmcli connection modify Wired\ connection\ 1 ipv6.addr-gen-mode eui64
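eui64 derives the IPv6 interface identifier from the MAC address: flip the universal/local bit of the first octet and insert ff:fe between the two halves. A sketch of that derivation with a made-up MAC:

```shell
# Derive the EUI-64 interface identifier from a MAC (example address)
mac="00:11:22:33:44:55"
set -- $(echo "$mac" | tr ':' ' ')
# Flip bit 1 (the universal/local bit) of the first octet
o1=$(printf '%02x' $(( 0x$1 ^ 0x02 )))
# Insert ff:fe between the OUI and the NIC-specific half
iid="${o1}$2:$3ff:fe$4:$5$6"
echo "$iid"   # → 0211:22ff:fe33:4455
```

The upside over the default stable-privacy mode is that the address is predictable from the hardware, which makes DNS records survive reinstalls.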
Install and Enable Cockpit
https://cockpit-project.org/running
dnf install cockpit
systemctl enable --now cockpit.socket
firewall-cmd --add-service=cockpit
firewall-cmd --add-service=cockpit --permanent
Troubleshooting
Cockpit Terminal Unusable or Weird Colors
Make sure you give canvas access to the browser (especially in librewolf)
Chroot into a mounted disk
This lets you run grub2-mkconfig among other things.
# Mount root
mount /dev/mapper/vg0-root /mnt
# Mount proc, sys, and dev
mount -t proc /proc proc/
mount --rbind /sys sys/
mount --rbind /dev dev/
# Mount boot and efi
mount /dev/vdb2 /mnt/boot
mount /dev/vdb1 /mnt/boot/efi
chroot /mnt
Resize Last Partition to Fill Available Space
parted /dev/vdb
# to resize /dev/vdb3 to fill 100% of the disk, for example
resizepart 3 100%
quit
# Resize the physical volume to match the partition
pvresize /dev/vdb3
LUKS performance
cryptsetup benchmark
Should output something like:
# Algorithm | Key | Encryption | Decryption
aes-cbc 128b 1409.1 MiB/s 3627.9 MiB/s
serpent-cbc 128b 146.5 MiB/s 981.4 MiB/s
twofish-cbc 128b 289.8 MiB/s 613.3 MiB/s
aes-cbc 256b 1100.2 MiB/s 3448.2 MiB/s
serpent-cbc 256b 150.3 MiB/s 982.1 MiB/s
twofish-cbc 256b 294.3 MiB/s 590.8 MiB/s
aes-xts 256b 4423.5 MiB/s 4561.2 MiB/s
serpent-xts 256b 874.9 MiB/s 883.7 MiB/s
twofish-xts 256b 557.8 MiB/s 559.4 MiB/s
aes-xts 512b 4551.2 MiB/s 4669.6 MiB/s
serpent-xts 512b 890.8 MiB/s 860.5 MiB/s
twofish-xts 512b 557.5 MiB/s 564.2 MiB/s
Which will tell you how fast you can theoretically write/read to encrypted drives.
The default encryption used by most modern operating systems is AES-XTS.
You can see your system's cipher and key with cryptsetup luksDump /dev/nvme0n1p1 | grep -i cipher
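To pick the fastest mode programmatically, sort the benchmark table by the encryption column. A sketch over a few rows of the sample output above (data inlined so it runs without cryptsetup):

```shell
# Track the row with the highest encryption throughput (column 3)
fastest=$(awk '$3 + 0 > best { best = $3 + 0; name = $1 " " $2 } END { print name }' <<'EOF'
aes-cbc 128b 1409.1 MiB/s 3627.9 MiB/s
aes-xts 256b 4423.5 MiB/s 4561.2 MiB/s
aes-xts 512b 4551.2 MiB/s 4669.6 MiB/s
serpent-xts 512b 890.8 MiB/s 860.5 MiB/s
EOF
)
echo "$fastest"   # → aes-xts 512b
```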