K3S
- K3S Guide
- Configure Host
- Install CoreDNS for inter-container discovery
- Install MetalLB for load balancer IP address assignment
- Install External DNS for load balancer IP and ingress DNS records
- Install Nginx Ingress for http services
- Install Cert Manager for automatic Let's Encrypt certificates for Ingress nginx
- Install longhorn storage for automatic PVC creation and management
- Set up automatic database backups
Disable Firewalld
https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora
Disable firewalld. You could add rules for each service, but every time you expose a port from a container you'd need to add another firewalld rule.
You can disable firewalld from the web interface.
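You can also disable it from a shell:
# Stop firewalld now and keep it off across reboots
systemctl disable --now firewalld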
Set SELinux to Permissive
K3S is more than capable of running with SELinux set to enforcing. We won't be doing that, however. We'll set it to permissive and you can reenable it once you've added all the rules you need to keep your services running.
Set SELinux to permissive by editing /etc/selinux/config
SELINUX=permissive
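The config change takes effect on reboot; to switch the running system immediately:
# Set the running mode to permissive and confirm it
setenforce 0
getenforce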
Install K3S (Single Node)
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
"--cluster-init" \
"--flannel-ipv6-masq" \
"--disable" \
"traefik" \
"--disable" \
"servicelb" \
"--disable" \
"coredns" \
"--disable" \
"local-storage" \
"--tls-san" \
"kube.reeselink.com" \
"--cluster-cidr" \
"10.42.0.0/16,fd02:c91e:56f4::/56" \
"--service-cidr" \
"10.43.0.0/16,fd02:c91e:56f5::/112" \
"--cluster-dns" \
"fd02:c91e:56f5::10"
Kube Credentials
On the operator workstation:
# Copy the kube config down
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config
# Edit the server to match the remote address.
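The downloaded config points at https://127.0.0.1:6443; swap in the cluster's external name (kube.reeselink.com here, matching the --tls-san above). A quick sketch:
# Point the kubeconfig at the cluster instead of localhost
sed -i 's|https://127.0.0.1:6443|https://kube.reeselink.com:6443|' ~/.kube/admin-kube-config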
Storage
mkdir /var/lib/rancher/k3s/storage
- Edit fstab to mount your drive to /var/lib/rancher/k3s/storage
systemctl daemon-reload
mount -a
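For illustration, the fstab entry might look like this (/dev/sdb1 is a placeholder; use your actual device or UUID):
# /etc/fstab
/dev/sdb1  /var/lib/rancher/k3s/storage  xfs  defaults  0  0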
https://github.com/rancher/local-path-provisioner/tree/master/deploy/chart/local-path-provisioner
# Download the updated template from github
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.28" > local-path-provisioner/local-path-storage.yaml
# Apply customizations (ssd/hdd storage, read write many support)
kubectl kustomize local-path-provisioner | kubectl apply -f -
# Create test pod
kubectl apply -f k3s/tests/local-storage-test.yaml
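For reference, workloads request storage from the provisioner with an ordinary PVC. A minimal sketch, assuming the provisioner's default local-path storage class (any ssd/hdd class names come from the kustomization above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path  # default class from local-path-provisioner
  resources:
    requests:
      storage: 1Gi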
CoreDNS
- Edit coredns/values.yaml to ensure the forward nameserver is correct.
# Add the CoreDNS chart repository
helm repo add coredns https://coredns.github.io/helm
helm repo update
# Install CoreDNS
helm upgrade --install \
--namespace=kube-system \
--values coredns/values.yaml \
coredns coredns/coredns
# Test DNS works
kubectl run -it --rm \
--restart=Never \
--image=infoblox/dnstools:latest \
dnstools
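Inside the dnstools shell, a couple of lookups will confirm both cluster and external resolution:
# Run inside the dnstools pod
host kubernetes.default.svc.cluster.local
host example.com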
MetalLB
We'll be swapping K3S's default load balancer with MetalLB for more flexibility. ServiceLB was struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little more complicated (you'll need special annotations, see below), but it's otherwise a well-tested, stable load balancing service with features to grow into.
MetalLB is pretty cool. It works via L2 advertisement or BGP. We won't be using BGP, so let's focus on L2.
When we connect our nodes to a network we give them an IP address range: ex. 192.168.122.20/24.
This range represents all the available addresses the node could be assigned. Usually we assign
a single "static" IP address for our node and direct traffic to it by port forwarding from our
router. This is fine for single nodes - but what if we have a cluster of nodes and we don't want
our service to disappear just because one node is down for maintenance?
This is where L2 advertising comes in. MetalLB will assign a static IP address from a given pool to an arbitrary node, then advertise that node's MAC address as the location for the IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP address, effectively moving the IP to another node. This isn't really "load balancing" so much as "failover". Fortunately, that's exactly what we're looking for.
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
--namespace kube-system \
metallb/metallb
MetalLB doesn't know which IP addresses are available for it to allocate, so we'll have
to provide it with a list. The metallb/addresspool.yaml manifest has one IP address (we'll get to
IP address sharing in a moment), which is an unassigned IP address not allocated to any of our
nodes. Note that if you have many public IPs which all point to the same router or virtual network,
you can list them all. We're only going to use one because we want to port forward from our router.
# create the metallb allocation pool
kubectl apply -f metallb/addresspool.yaml
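For reference, a minimal sketch of what metallb/addresspool.yaml might contain (the "external" pool name matches the annotation below; the namespace and address are placeholders):
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external
  namespace: kube-system
spec:
  addresses:
    - 192.168.122.240/32  # one unassigned address, placeholder
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: kube-system
spec:
  ipAddressPools:
    - external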
You'll need to annotate your service as follows if you want an external IP:
metadata:
annotations:
metallb.universe.tf/address-pool: "external"
# or
metallb.universe.tf/address-pool: "internal"
spec:
ipFamilyPolicy: SingleStack
ipFamilies:
- IPv6
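As for the IP address sharing mentioned above: MetalLB can place multiple Services on one address when they carry the same sharing key, via the allow-shared-ip annotation. A sketch (the key value is arbitrary):
metadata:
  annotations:
    metallb.universe.tf/address-pool: "external"
    metallb.universe.tf/allow-shared-ip: "shared"  # services with the same key share one IP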
External DNS
https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md
Credentials
- Generate credentials for the cluster
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeselink
SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = $(echo $ACCESS_KEY_ID)
aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
EOF
kubectl create secret generic external-dns \
--namespace kube-system --from-file secrets/externaldns-credentials
kubectl apply -f external-dns/sa.yaml
kubectl apply -f external-dns/deploy.yaml
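Assuming the deployment in external-dns/deploy.yaml is named external-dns, you can tail its logs to confirm it reconciles records:
# Watch external-dns pick up annotated services and ingresses (deployment name assumed)
kubectl logs -n kube-system deployment/external-dns -f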
Annotation
metadata:
annotations:
external-dns.alpha.kubernetes.io/hostname: example.com
Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since it's a little more configurable than traefik (though don't sell traefik short, it's really good; just finicky when you hit use cases it hasn't explicitly coded for).
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
ingress-nginx \
ingress-nginx/ingress-nginx \
--values ingress-nginx/values.yaml \
--namespace kube-system
Cert Manager
Install cert-manager
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
cert-manager jetstack/cert-manager \
--namespace kube-system \
--set crds.enabled=true
Create the Let's Encrypt issuer (Route53 DNS)
kubectl apply -f certmanager/letsencrypt-issuer.yaml
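A sketch of what certmanager/letsencrypt-issuer.yaml might look like for DNS-01 with Route53 (the email, issuer name, and region are placeholders; credential wiring is omitted, see cert-manager's Route53 docs):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: admin@example.com  # placeholder
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1  # placeholder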
You can test if your ingress is working with:
# Navigate to demo.reeseapps.com
kubectl apply -f k3s/tests/ingress-nginx-test.yaml
# Cleanup
kubectl delete -f k3s/tests/ingress-nginx-test.yaml
Test Minecraft Server
helm upgrade --install minecraft ./minecraft -n minecraft --create-namespace
Automatic Updates
https://docs.k3s.io/upgrades/automated
kubectl create namespace system-upgrade
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f k3s/upgrade-plan.yaml
# Check plan
kubectl get plan -n system-upgrade
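A sketch of what k3s/upgrade-plan.yaml could contain, following the Plan example in the K3S docs (the channel, concurrency, and selector are choices, not requirements):
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: In, values: ["true"]}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable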
Database Backups
https://docs.k3s.io/cli/etcd-snapshot
Note: you must back up /var/lib/rancher/k3s/server/token
and use its contents as the token when restoring the backup, as the data is encrypted with that token.
Backups are saved to /var/lib/rancher/k3s/server/db/snapshots/ by default.
k3s etcd-snapshot save
k3s etcd-snapshot list
k3s server \
--cluster-reset \
--cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-kube-1720459685
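For the automatic backups promised at the top, K3S can also snapshot etcd on a schedule via its etcd-snapshot-schedule-cron and etcd-snapshot-retention options. A sketch using /etc/rancher/k3s/config.yaml (the schedule and retention values are choices):
# /etc/rancher/k3s/config.yaml
etcd-snapshot-schedule-cron: "0 */12 * * *"  # snapshot every 12 hours
etcd-snapshot-retention: 14                  # keep the last 14 snapshots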
Quickstart
# Create certsigner pod for all other operations
./setup.sh <server_fqdn>
# Create a user, use "admin" to create an admin user
./upsertuser.sh <ssh_address> <server_fqdn (for kubectl)> <user>
# Remove a user, their namespace, and their access
./removeuserspace <server_fqdn> <user>
Help
Troubleshooting
Deleting a stuck namespace
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
Fixing a bad volume
xfs_repair -L /dev/sdg
Mounting an ix-application volume from truenas
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
#"unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
Mounting a volume
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc
# unmount
umount /mnt/dcsi_pvc
Uninstall
/usr/local/bin/k3s-uninstall.sh