clean up k3s for new deployment

2026-04-06 11:45:24 -04:00
parent 67b644005a
commit 25e812ab55
2 changed files with 91 additions and 75 deletions


@@ -3,10 +3,12 @@
- [K3S](#k3s)
- [Guide](#guide)
- [Firewalld](#firewalld)
- [SELinux](#selinux)
- [Install Single Node K3S](#install-single-node-k3s)
- [Dual Stack IPv6 Support](#dual-stack-ipv6-support)
- [Single Stack IPv4](#single-stack-ipv4)
- [Install Multi Node K3S](#install-multi-node-k3s)
- [Network Checks](#network-checks)
- [Kube Credentials](#kube-credentials)
- [Metal LB](#metal-lb)
- [VLAN Setup](#vlan-setup)
@@ -35,25 +37,49 @@
## Firewalld
```bash
# All required ports (https://docs.k3s.io/installation/requirements?_highlight=ports#local-ports)
firewall-cmd \
--permanent \
--zone=public \
--add-port=80/tcp \
--add-port=443/tcp \
--add-port=2379-2380/tcp \
--add-port=6443/tcp \
--add-port=8472/udp \
--add-port=10250/tcp
# IPv4 config
# 10.42 is for pods
# 10.43 is for services
firewall-cmd \
--permanent \
--zone=trusted \
--add-source=10.42.0.0/16 \
--add-source=10.43.0.0/16
# [Optional] IPv6 config
# fd02:c91e:56f4 is for pods
# fd02:c91e:56f5 is for services
firewall-cmd \
--permanent \
--zone=trusted \
--add-source=fd02:c91e:56f4::/56 \
--add-source=fd02:c91e:56f5::/112
firewall-cmd --reload
```
## SELinux
Make sure to add `--selinux` to your install script.
## Install Single Node K3S
### Dual Stack IPv6 Support
```bash
curl -sfL https://get.k3s.io | sh -s - \
--selinux \
"--disable" \
"traefik" \
"--disable" \
@@ -67,8 +93,7 @@ curl -sfL https://get.k3s.io | sh -s - \
"--service-cidr" \
"10.43.0.0/16,fd02:c91e:56f5::/112" \
"--cluster-dns" \
"fd02:c91e:56f5::10"
```
### Single Stack IPv4
@@ -84,16 +109,67 @@ curl -sfL https://get.k3s.io | sh -s - \
--selinux
```
## Install Multi Node K3S
TODO: haproxy (<https://docs.k3s.io/blog/2025/03/10/simple-ha?_highlight=tls&_highlight=san#load-balancer>)
Load balance a single registration point across all active nodes.
```bash
# Generate a shared token for joining nodes
# Copy this token to each node at ~/.k3s-token
pwgen --capitalize --numerals --secure 64 1 > ~/.k3s-token
# Create the first node
curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s-token) sh -s - \
--cluster-init \
--selinux \
"--disable" \
"traefik" \
"--disable" \
"servicelb" \
"--cluster-cidr" \
"10.42.0.0/16" \
"--service-cidr" \
"10.43.0.0/16"
# Copy the generated token to the other nodes
cat /var/lib/rancher/k3s/server/token
# Join nodes
curl -sfL https://get.k3s.io | K3S_TOKEN=$(cat ~/.k3s-token) sh -s - \
--selinux \
"--disable" \
"traefik" \
"--disable" \
"servicelb" \
"--cluster-cidr" \
"10.42.0.0/16" \
"--service-cidr" \
"10.43.0.0/16" \
--server https://kube1.reeselink.com:6443
```
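The join flow above assumes `pwgen` is installed. Where it isn't, a rough alternative (assuming `openssl` is available, as it is on most distributions) produces a shared secret of the same shape:

```shell
# Alternative token generation when pwgen is unavailable:
# 32 random bytes, hex-encoded, gives a 64-character secret
TOKEN="$(openssl rand -hex 32)"
echo "${#TOKEN}"  # prints 64
```

Write the token to `~/.k3s-token` as above and the rest of the flow is unchanged.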
## Network Checks
At this point it's a good idea to make sure node communication is working as expected.
```bash
firewall-cmd --set-log-denied=all
# You shouldn't see any dropped traffic from your nodes.
dmesg --follow | grep -Ei 'REJECT|DROP'
```
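The exact format of a denied-log entry varies by distribution and kernel, but it contains a REJECT or DROP token that the pattern above will catch. A simulated example (the log line is illustrative, not a real capture):

```shell
# Simulated firewalld denial; real fields depend on kernel/firewalld version
line='FINAL_REJECT: IN=eth0 OUT= SRC=10.0.0.12 DST=10.0.0.11 PROTO=TCP DPT=10250'
echo "$line" | grep -Ei 'reject|drop'
```

If a node's traffic shows up here, re-check the trusted-zone sources against your cluster and service CIDRs, and turn logging back off with `firewall-cmd --set-log-denied=off` when done.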
## Kube Credentials
On the operator
```bash
export KUBE_SERVER_ADDRESS="https://kube1.reeselink.com:6443"
# Copy the kube config down
ssh kube1-root cat /etc/rancher/k3s/k3s.yaml | \
yq -r ".clusters[0].cluster.server = \"${KUBE_SERVER_ADDRESS}\"" > \
~/.kube/admin-kube-config
export KUBECONFIG=~/.kube/admin-kube-config
```
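The `yq` step only rewrites the `server:` field. If `yq` isn't installed on the operator machine, a rough `sed` equivalent can be sketched (an assumption: the kubeconfig has a single `server:` line, which holds for a fresh `k3s.yaml`):

```shell
KUBE_SERVER_ADDRESS="https://kube1.reeselink.com:6443"
# Stand-in for /etc/rancher/k3s/k3s.yaml, which points at 127.0.0.1 by default
printf 'clusters:\n- cluster:\n    server: https://127.0.0.1:6443\n' |
  sed "s|server: .*|server: ${KUBE_SERVER_ADDRESS}|"
```

The `server:` line comes out pointing at the load-balanced address while the rest of the file passes through untouched.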
## Metal LB
@@ -141,15 +217,7 @@ IP. When that node goes down metallb simply advertises a new mac address for the
address, effectively moving the IP to another node. This isn't really "load balancing" but
"failover". Fortunately, that's exactly what we're looking for.
[Install MetalLB](/active/kubernetes_metallb/metallb.md)
MetalLB doesn't know what IP addresses are available for it to allocate so
we'll have to provide it with a list. The


@@ -1,52 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-demo-1
namespace: default
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx-demo-1
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx-demo-1
spec:
containers:
- name: httpd
image: httpd
ports:
- containerPort: 80
name: http
resources:
requests:
memory: "100Mi"
cpu: "1m"
limits:
memory: "256Mi"
cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-demo-1
namespace: default
annotations:
metallb.universe.tf/address-pool: "unifi-pool"
spec:
ipFamilyPolicy: PreferDualStack
ipFamilies:
- IPv6
- IPv4
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 8001
targetPort: 80
selector:
app.kubernetes.io/name: ingress-nginx-demo-1