clean up k3s for new deployment

2026-04-06 11:45:24 -04:00
parent 67b644005a
commit 25e812ab55
2 changed files with 91 additions and 75 deletions


- [K3S](#k3s)
  - [Guide](#guide)
    - [Firewalld](#firewalld)
    - [SELinux](#selinux)
    - [Install Single Node K3S](#install-single-node-k3s)
      - [Dual Stack IPv6 Support](#dual-stack-ipv6-support)
      - [Single Stack IPv4](#single-stack-ipv4)
    - [Install Multi Node K3S](#install-multi-node-k3s)
    - [Network Checks](#network-checks)
    - [Kube Credentials](#kube-credentials)
    - [Metal LB](#metal-lb)
      - [VLAN Setup](#vlan-setup)
## Firewalld
```bash
# All required ports (https://docs.k3s.io/installation/requirements?_highlight=ports#local-ports)
firewall-cmd \
  --permanent \
  --zone=public \
  --add-port=80/tcp \
  --add-port=443/tcp \
  --add-port=2379-2380/tcp \
  --add-port=6443/tcp \
  --add-port=8472/udp \
  --add-port=10250/tcp
# IPv4 config
# 10.42.0.0/16 is for pods
# 10.43.0.0/16 is for services
firewall-cmd \
  --permanent \
  --zone=trusted \
  --add-source=10.42.0.0/16 \
  --add-source=10.43.0.0/16
# [Optional] IPv6 config
# fd02:c91e:56f4::/56 is for pods
# fd02:c91e:56f5::/112 is for services
firewall-cmd \
  --permanent \
  --zone=trusted \
  --add-source=fd02:c91e:56f4::/56 \
  --add-source=fd02:c91e:56f5::/112
firewall-cmd --reload
```
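To sanity-check the result after the reload, firewalld can list what each zone now carries (a quick check, assuming firewalld is running on the node):

```shell
# Ports opened in the public zone (expect 80, 443, 6443, etc.)
firewall-cmd --zone=public --list-ports
# Pod and service CIDRs trusted for intra-cluster traffic
firewall-cmd --zone=trusted --list-sources
```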
## SELinux
Make sure to add `--selinux` to your install script.
## Install Single Node K3S
### Dual Stack IPv6 Support
```bash
curl -sfL https://get.k3s.io | sh -s - \
  --selinux \
  "--disable" \
  "traefik" \
  "--disable" \
  "servicelb" \
  "--cluster-cidr" \
  "10.42.0.0/16,fd02:c91e:56f4::/56" \
  "--service-cidr" \
  "10.43.0.0/16,fd02:c91e:56f5::/112" \
  "--cluster-dns" \
  "fd02:c91e:56f5::10"
```
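Once the node is up, you can confirm both address families actually landed; `status.podIPs` should show one IPv4 and one IPv6 entry per pod (assumes kubectl access to the cluster):

```shell
# The node should report both a v4 and a v6 InternalIP
kubectl get nodes -o wide
# Each pod should list two IPs, one per family
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIPs[*].ip}{"\n"}{end}'
```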
### Single Stack IPv4
```bash
curl -sfL https://get.k3s.io | sh -s - \
  "--disable" \
  "traefik" \
  "--disable" \
  "servicelb" \
  "--cluster-cidr" \
  "10.42.0.0/16" \
  "--service-cidr" \
  "10.43.0.0/16" \
  --selinux
```
## Install Multi Node K3S
TODO: haproxy (<https://docs.k3s.io/blog/2025/03/10/simple-ha?_highlight=tls&_highlight=san#load-balancer>)
Load balance a single registration point across all active nodes.
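The k3s HA guide linked above fronts the servers with a plain TCP load balancer. A minimal haproxy sketch of that idea, assuming three servers named kube1 through kube3 (kube2 and kube3 are hypothetical names; only kube1 appears later in this guide):

```
frontend k3s_api
    bind *:6443
    mode tcp
    default_backend k3s_servers

backend k3s_servers
    mode tcp
    option tcp-check
    balance roundrobin
    server kube1 kube1.reeselink.com:6443 check
    server kube2 kube2.reeselink.com:6443 check
    server kube3 kube3.reeselink.com:6443 check
```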
```bash
# Generate a shared token for joining nodes
# Copy this token to each node at ~/.k3s-token
pwgen --capitalize --numerals --secure 64 1 > ~/.k3s-token
# Create the first node
curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s-token) sh -s - \
  --cluster-init \
  --selinux \
  "--disable" \
  "traefik" \
  "--disable" \
  "servicelb" \
  "--cluster-cidr" \
  "10.42.0.0/16" \
  "--service-cidr" \
  "10.43.0.0/16"
# The full server token (which includes the cluster CA hash) is also
# available here if you need it on the other nodes
cat /var/lib/rancher/k3s/server/token
# Join nodes
curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s-token) sh -s - \
  --selinux \
  "--disable" \
  "traefik" \
  "--disable" \
  "servicelb" \
  "--cluster-cidr" \
  "10.42.0.0/16" \
  "--service-cidr" \
  "10.43.0.0/16" \
  --server https://kube1.reeselink.com:6443
```
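If `pwgen` isn't available on a machine, a token of the same shape can be pulled straight from the kernel RNG with coreutils alone (a sketch; it writes the same `~/.k3s-token` path the commands above read):

```shell
# 64 random alphanumeric characters, no trailing newline
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 64 > ~/.k3s-token
```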
## Network Checks
At this point it's a good idea to make sure node communication is working as expected.
```bash
firewall-cmd --set-log-denied=all
# You shouldn't see any dropped traffic from your nodes.
dmesg --follow | grep -Ei 'reject|drop'
# Turn the extra logging back off once you're satisfied
firewall-cmd --set-log-denied=off
```
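For a more direct check, you can probe the k3s ports from one node to another (`kube2.reeselink.com` is a hypothetical second node name; substitute your own):

```shell
# TCP apiserver port
nc -zv kube2.reeselink.com 6443
# UDP flannel VXLAN port (UDP probes are best-effort)
nc -zvu kube2.reeselink.com 8472
```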
## Kube Credentials
On the operator machine:
```bash
export KUBE_SERVER_ADDRESS="https://kube1.reeselink.com:6443"
# Copy the kube config down, pointing it at the public server address
ssh kube1-root cat /etc/rancher/k3s/k3s.yaml | \
  yq -r ".clusters[0].cluster.server = \"${KUBE_SERVER_ADDRESS}\"" > \
  ~/.kube/admin-kube-config
export KUBECONFIG=~/.kube/admin-kube-config
```
## Metal LB
When that node goes down, MetalLB simply advertises a new MAC address for the IP,
effectively moving it to another node. This isn't really "load balancing" but
"failover". Fortunately, that's exactly what we're looking for.
[Install MetalLB](/active/kubernetes_metallb/metallb.md)
MetalLB doesn't know what IP addresses are available for it to allocate, so
we'll have to provide it with a list. The