# K3S

- [K3S](#k3s)
  - [Guide](#guide)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
  - [Kube Credentials](#kube-credentials)
  - [Storage](#storage)
  - [Coredns](#coredns)
  - [Metal LB](#metal-lb)
  - [External DNS](#external-dns)
    - [Credentials](#credentials)
    - [Annotation](#annotation)
  - [Nginx Ingress](#nginx-ingress)
  - [Cert Manager](#cert-manager)
  - [Test Minecraft Server](#test-minecraft-server)
  - [Automatic Updates](#automatic-updates)
  - [Database Backups](#database-backups)
    - [Quickstart](#quickstart)
  - [Help](#help)
    - [Troubleshooting](#troubleshooting)
      - [Deleting a stuck namespace](#deleting-a-stuck-namespace)
      - [Fixing a bad volume](#fixing-a-bad-volume)
      - [Mounting an ix-application volume from truenas](#mounting-an-ix-application-volume-from-truenas)
      - [Mounting a volume](#mounting-a-volume)
  - [Uninstall](#uninstall)

## Guide

1. Configure the host
2. Install CoreDNS for inter-container discovery
3. Install Metal LB for load balancer IP address assignment
4. Install External DNS for load balancer IP and ingress DNS records
5. Install Nginx Ingress for HTTP services
6. Install Cert Manager for automatic Let's Encrypt certificates for Nginx Ingress
7. Install local-path storage for automatic PVC creation and management
8. Set up automatic database backups

## Disable Firewalld

Disable firewalld. You could add rules for each service, but every time a container exposes a new port you'd need to add another firewalld rule. You can disable firewalld from the web interface.

## Set SELinux to Permissive

K3S is more than capable of running with SELinux set to enforcing. We won't be doing that, however. We'll set it to permissive; you can re-enable enforcing once you've added all the rules you need to keep your services running.

Set SELinux to permissive by editing `/etc/selinux/config`:

```
SELINUX=permissive
```

## Install K3S (Single Node)

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
    "--cluster-init" \
    "--flannel-ipv6-masq" \
    "--disable" \
    "traefik" \
    "--disable" \
    "servicelb" \
    "--disable" \
    "coredns" \
    "--disable" \
    "local-storage" \
    "--tls-san" \
    "kube.reeselink.com" \
    "--cluster-cidr" \
    "10.42.0.0/16,fd02:c91e:56f4::/56" \
    "--service-cidr" \
    "10.43.0.0/16,fd02:c91e:56f5::/112" \
    "--cluster-dns" \
    "fd02:c91e:56f5::10"
```

## Kube Credentials

On the operator machine:

```bash
# Copy the kube config down
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config

# Edit the "server:" field in that file to match the remote address
# (it points at https://127.0.0.1:6443 by default).
```

## Storage

1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit `/etc/fstab` to mount your drive at `/var/lib/rancher/k3s/storage`
3. `systemctl daemon-reload`
4. `mount -a`

```bash
# Download the updated template from github
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.28" > local-path-provisioner/local-path-storage.yaml

# Apply customizations (ssd/hdd storage, read-write-many support)
kubectl kustomize local-path-provisioner | kubectl apply -f -

# Create test pod
kubectl apply -f k3s/tests/local-storage-test.yaml
```

## Coredns

1. Edit `coredns/values.yaml` to ensure the forward nameserver is correct.

```bash
# Add the coredns chart repo (skip if already added)
helm repo add coredns https://coredns.github.io/helm
helm repo update

# Install CoreDNS
helm upgrade --install \
    --namespace=kube-system \
    --values coredns/values.yaml \
    coredns coredns/coredns

# Test DNS works
kubectl run -it --rm \
    --restart=Never \
    --image=infoblox/dnstools:latest \
    dnstools
```
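The `dnstools` pod drops you into an interactive shell. A couple of example lookups there (the external name below is just a placeholder) will confirm that both cluster-internal and external resolution work:

```bash
# Run inside the dnstools shell:

# Cluster-internal name - should resolve to the kubernetes Service IP
host kubernetes.default.svc.cluster.local

# External name - should resolve via the forward nameserver from values.yaml
host example.com
```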
## Metal LB

We'll be swapping K3S's default load balancer with Metal LB for more flexibility. ServiceLB was struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little more complicated - you'll need special annotations (see below) - but it's otherwise a well-tested, stable load balancing service with features to grow into.

MetalLB is pretty cool. It works via L2 advertisement or BGP. We won't be using BGP, so let's focus on L2.

When we connect our nodes to a network we give them an IP address within a range, e.g. `192.168.122.20/24`. The range represents all the addresses a node on that network could be assigned. Usually we assign a single "static" IP address to our node and direct traffic to it by port forwarding from our router. This is fine for single nodes - but what if we have a cluster of nodes and we don't want our service to disappear just because one node is down for maintenance?

This is where L2 advertising comes in. MetalLB will assign a static IP address from a given pool to an arbitrary node, then advertise that node's MAC address as the location for the IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP address, effectively moving the IP to another node. This isn't really "load balancing" so much as "failover". Fortunately, that's exactly what we're looking for.

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
    --namespace kube-system \
    metallb/metallb
```

MetalLB doesn't know what IP addresses are available for it to allocate, so we'll have to provide it with a list. The `metallb/addresspool.yaml` manifest has one IP address (we'll get to IP address sharing in a second), which is an unassigned address not allocated to any of our nodes. Note that if you have many public IPs which all point to the same router or virtual network, you can list them all. We're only going to use one because we want to port forward from our router.

```bash
# Create the metallb allocation pool
kubectl apply -f metallb/addresspool.yaml
```

You'll need to annotate your service as follows if you want an external IP:

```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "external"
    # or
    # metallb.universe.tf/address-pool: "internal"
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
```

## External DNS

### Credentials

1. Generate credentials for the cluster:

```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeselink

# create-access-key returns a JSON document containing both halves of the key
ACCESS_KEY_JSON=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo "$ACCESS_KEY_JSON" | jq -r '.AccessKey.AccessKeyId')

cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = ${ACCESS_KEY_ID}
aws_secret_access_key = $(echo "$ACCESS_KEY_JSON" | jq -r '.AccessKey.SecretAccessKey')
EOF

kubectl create secret generic external-dns \
    --namespace kube-system --from-file secrets/externaldns-credentials

kubectl apply -f external-dns/sa.yaml
kubectl apply -f external-dns/deploy.yaml
```

### Annotation

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
```
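To tie the last two sections together, here's a sketch of a complete `LoadBalancer` Service carrying both sets of annotations. The name, selector, ports, and hostname are placeholders; `metallb.universe.tf/allow-shared-ip` is MetalLB's annotation for the IP address sharing mentioned earlier, letting several Services ride on a single pool address:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo  # placeholder
  annotations:
    # Allocate from the "external" pool defined in metallb/addresspool.yaml
    metallb.universe.tf/address-pool: "external"
    # Services that use the same sharing key may share one pool IP
    metallb.universe.tf/allow-shared-ip: "external-ip"
    # Ask External DNS to publish a record pointing at the allocated IP
    external-dns.alpha.kubernetes.io/hostname: demo.example.com
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
  selector:
    app: demo  # placeholder
  ports:
    - port: 80
      targetPort: 8080
```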
## Nginx Ingress

Now we need an ingress solution (preferably with certs for HTTPS). We'll be using nginx since it's a little more configurable than Traefik (though don't sell Traefik short - it's really good, just finicky when you have use cases it hasn't explicitly coded for).

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
    ingress-nginx \
    ingress-nginx/ingress-nginx \
    --values ingress-nginx/values.yaml \
    --namespace kube-system
```

## Cert Manager

Install cert-manager:

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
    cert-manager jetstack/cert-manager \
    --namespace kube-system \
    --set crds.enabled=true
```

Create the Let's Encrypt issuer (Route53 DNS):

```bash
kubectl apply -f certmanager/letsencrypt-issuer.yaml
```

You can test whether your ingress is working with:

```bash
# Navigate to demo.reeseapps.com
kubectl apply -f k3s/tests/ingress-nginx-test.yaml

# Cleanup
kubectl delete -f k3s/tests/ingress-nginx-test.yaml
```

## Test Minecraft Server

```bash
helm upgrade --install minecraft ./minecraft -n minecraft --create-namespace
```

## Automatic Updates

```bash
kubectl create namespace system-upgrade
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f k3s/upgrade-plan.yaml

# Check plan
kubectl get plan -n system-upgrade
```

## Database Backups

Note: you must back up `/var/lib/rancher/k3s/server/token` and supply its contents as the token when restoring a backup, since snapshot data is encrypted with that token. Backups are saved to `/var/lib/rancher/k3s/server/db/snapshots/` by default.

```bash
k3s etcd-snapshot save
k3s etcd-snapshot list

k3s server \
    --cluster-reset \
    --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-kube-1720459685
```

### Quickstart

```bash
# Create certsigner pod for all other operations
./setup.sh

# Create a user, use "admin" to create an admin user
./upsertuser.sh

# Remove a user, their namespace, and their access
./removeuserspace
```

## Help

### Troubleshooting

#### Deleting a stuck namespace

```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```

#### Fixing a bad volume

```bash
xfs_repair -L /dev/sdg
```

#### Mounting an ix-application volume from truenas

```bash
# Set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b

# "Unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```

#### Mounting a volume

```bash
# Mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc

# Unmount
umount /mnt/dcsi_pvc
```

## Uninstall

```bash
/usr/local/bin/k3s-uninstall.sh
```