# K3S

- [K3S](#k3s)
  - [Guide](#guide)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
  - [Kube Credentials](#kube-credentials)
  - [Storage](#storage)
  - [CoreDNS](#coredns)
  - [Metal LB](#metal-lb)
    - [VLAN Setup](#vlan-setup)
    - [Installation](#installation)
  - [External DNS](#external-dns)
    - [Credentials](#credentials)
    - [Annotation](#annotation)
  - [Nginx Ingress](#nginx-ingress)
  - [Cert Manager](#cert-manager)
  - [Test Minecraft Server](#test-minecraft-server)
  - [Automatic Updates](#automatic-updates)
  - [Database Backups](#database-backups)
  - [Uninstall](#uninstall)

## Guide

1. Configure the host
2. Install CoreDNS for inter-container discovery
3. Install MetalLB for load balancer IP address assignment
4. Install External DNS for load balancer IP and ingress DNS records
5. Install Nginx Ingress for HTTP services
6. Install Cert Manager for automatic Let's Encrypt certificates for ingress-nginx
7. Install Longhorn storage for automatic PVC creation and management
8. Set up automatic database backups

## Disable Firewalld

<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>

Disable firewalld. You could add rules for each service instead, but every time you opened a port from a container you'd need to add another firewalld rule.

You can disable firewalld from the web interface.

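Or from a shell, as the K3S docs recommend:

```bash
systemctl disable --now firewalld
```
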
## Set SELinux to Permissive

K3S is more than capable of running with SELinux set to enforcing. We won't be doing that, however. We'll set it to permissive, and you can set it back to enforcing once you've added all the rules you need to keep your services running.

Set SELinux to permissive by editing `/etc/selinux/config`:

```
SELINUX=permissive
```

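The config file only takes effect on reboot; to switch the running system to permissive immediately (standard SELinux tooling, nothing K3S-specific):

```bash
sudo setenforce 0   # 0 = permissive, 1 = enforcing
getenforce          # should now print "Permissive"
```
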
## Install K3S (Single Node)

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
    "--cluster-init" \
    "--flannel-ipv6-masq" \
    "--disable" "traefik" \
    "--disable" "servicelb" \
    "--disable" "coredns" \
    "--disable" "local-storage" \
    "--tls-san" "kube.reeselink.com" \
    "--cluster-cidr" "10.42.0.0/16,fd02:c91e:56f4::/56" \
    "--service-cidr" "10.43.0.0/16,fd02:c91e:56f5::/112" \
    "--cluster-dns" "fd02:c91e:56f5::10"
```

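Once the script finishes, a quick sanity check (k3s bundles kubectl, so this works before you copy credentials anywhere):

```bash
sudo systemctl status k3s
sudo k3s kubectl get nodes -o wide
```
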
## Kube Credentials

On the operator machine:

```bash
# Copy the kube config down
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config

# Edit the `server:` field to match the remote address.
```

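For example, with the `--tls-san` value from the install step, the edited entry might look like this (hostname assumed from that flag; 6443 is the default API port):

```yaml
clusters:
  - cluster:
      certificate-authority-data: <unchanged>
      server: https://kube.reeselink.com:6443
    name: default
```
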
## Storage

1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit fstab to mount your drive to `/var/lib/rancher/k3s/storage` (see the example entry below)
3. `systemctl daemon-reload`
4. `mount -a`

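A sketch of the fstab entry from step 2 (the device path and filesystem are assumptions; adjust for your disk):

```
/dev/sdb1  /var/lib/rancher/k3s/storage  ext4  defaults  0  2
```
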
<https://github.com/rancher/local-path-provisioner/tree/master/deploy/chart/local-path-provisioner>

```bash
# Download the updated template from github
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.28" > kubernetes/incubating/local-path-provisioner/local-path-storage.yaml

# Apply customizations (ssd/hdd storage, read write many support)
kubectl kustomize kubernetes/incubating/local-path-provisioner/local-path-provisioner | kubectl apply -f -

# Create test pod
kubectl apply -f infrastructure/graduated/k3s/tests/local-storage-test.yaml
```

## CoreDNS

1. Edit `kubernetes/graduated/coredns/values.yaml` to ensure the forward nameserver is correct.

```bash
|
|
# Install CoreDNS
|
|
helm upgrade --install \
|
|
--namespace=kube-system \
|
|
--values kubernetes/graduated/coredns/values.yaml \
|
|
coredns coredns/coredns
|
|
|
|
# Test DNS works
|
|
kubectl run -it --rm \
|
|
--restart=Never \
|
|
--image=infoblox/dnstools:latest \
|
|
dnstools
|
|
```
|
|
|
|
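Inside the dnstools shell you can verify cluster DNS resolution (the service name below is standard Kubernetes; the AAAA answer depends on your service CIDR):

```bash
# Run these inside the dnstools pod
host kubernetes.default.svc.cluster.local
dig AAAA kubernetes.default.svc.cluster.local +short
```
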
## Metal LB

### VLAN Setup

Before working with MetalLB you'll need at least one available VLAN. On UniFi equipment this is accomplished by creating a new network. Don't assign it to anything.

On the Linux machine you can use nmcli or Cockpit to configure a new VLAN network interface (an nmcli sketch follows the list). With Cockpit:

1. Add a new VLAN network
2. The parent should be the physical adapter connected to your switch
3. Set the VLAN ID to the VLAN number of your newly created UniFi network
4. Click create
5. Click into the new network
6. Turn off IPv4 and IPv6 DNS (otherwise every VLAN adds nameservers and you'll exceed resolv.conf's three-nameserver limit)
7. Turn on the network interface
8. Attempt to ping the acquired address(es)

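If you prefer nmcli over Cockpit, the equivalent looks roughly like this (the interface name `enp1s0` and VLAN ID `20` are placeholders for your own values):

```bash
# Create a VLAN interface with ID 20 on top of enp1s0
nmcli connection add type vlan con-name vlan20 ifname vlan20 dev enp1s0 id 20

# Ignore DNS servers offered on this network (step 6 above)
nmcli connection modify vlan20 ipv4.ignore-auto-dns yes ipv6.ignore-auto-dns yes

# Bring the interface up, then check for an address
nmcli connection up vlan20
ip addr show vlan20
```
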
### Installation

We'll be swapping K3S's default load balancer out for MetalLB for more flexibility. ServiceLB was struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little more complicated: you'll need special annotations (see below). But it's otherwise a well-tested, stable load balancing service with features to grow into.

MetalLB is pretty cool. It works via L2 advertisement or BGP. We won't be using BGP, so let's focus on L2.

When we connect a node to a network we give it an address in a range, e.g. `192.168.122.20/24`. This range represents all the addresses the node could be assigned. Usually we assign a single "static" IP address to our node and direct traffic to it by port forwarding from our router. This is fine for a single node, but what if we have a cluster of nodes and we don't want our service to disappear just because one node is down for maintenance?

This is where L2 advertising comes in. MetalLB assigns a static IP address from a given pool to an arbitrary node, then advertises that node's MAC address as the location for the IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP address, effectively moving the IP to another node. This isn't really "load balancing" so much as "failover". Fortunately, failover is exactly what we're looking for.

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
    --namespace kube-system \
    metallb/metallb
```

MetalLB doesn't know which IP addresses are available for it to allocate, so we'll have to provide it with a list. The `metallb-addresspool.yaml` has one IP address (we'll get to IP address sharing in a second), which is an unassigned address not allocated to any of our nodes. Note that if you have many public IPs which all point to the same router or virtual network, you can list them all. We're only going to use one because we want to port forward from our router.

```bash
# create the metallb allocation pool
kubectl apply -f kubernetes/graduated/metallb/addresspool.yaml
```

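For reference, the address pool manifest contains something along these lines (the pool name and address here are assumptions; the `L2Advertisement` is what actually announces the pool on the network):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external
  namespace: kube-system
spec:
  addresses:
    - 192.168.1.240/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: kube-system
spec:
  ipAddressPools:
    - external
```
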
You'll need to annotate your service as follows if you want an external IP:

```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "external"
    # or
    metallb.universe.tf/address-pool: "internal"
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
```

## External DNS

<https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md>

### Credentials

1. Generate credentials for the cluster

```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeselink

# create-access-key returns a JSON document with both halves of the new key
ACCESS_KEY_JSON=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo "$ACCESS_KEY_JSON" | jq -r '.AccessKey.AccessKeyId')

cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = ${ACCESS_KEY_ID}
aws_secret_access_key = $(echo "$ACCESS_KEY_JSON" | jq -r '.AccessKey.SecretAccessKey')
EOF

kubectl create secret generic external-dns \
    --namespace kube-system --from-file secrets/externaldns-credentials

kubectl apply -f kubernetes/graduated/external-dns/sa.yaml

kubectl apply -f kubernetes/graduated/external-dns/deploy.yaml
```

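The `update-reeseapps` and `update-reeselink` policies already exist in this account. If you need to create your own, the external-dns AWS tutorial linked above uses a Route53 policy shaped like this (a sketch; scope the hosted zone resource to your own zones):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["route53:ChangeResourceRecordSets"],
      "Resource": ["arn:aws:route53:::hostedzone/*"]
    },
    {
      "Effect": "Allow",
      "Action": ["route53:ListHostedZones", "route53:ListResourceRecordSets"],
      "Resource": ["*"]
    }
  ]
}
```
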
### Annotation

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
```

## Nginx Ingress

Now we need an ingress solution (preferably with certs for HTTPS). We'll be using nginx since it's a little more configurable than Traefik (though don't sell Traefik short, it's really good, just finicky when you have use cases they haven't explicitly coded for).

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
    ingress-nginx \
    ingress-nginx/ingress-nginx \
    --values kubernetes/graduated/ingress-nginx/values.yaml \
    --namespace kube-system
```

## Cert Manager

Install cert-manager

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
    cert-manager jetstack/cert-manager \
    --namespace kube-system \
    --set crds.enabled=true
```

Create the Let's Encrypt issuer (Route53 DNS):

```bash
export LE_ACCESS_KEY_ID=
export LE_SECRET_KEY=

# echo -n: avoid base64-encoding a trailing newline into the secret
cat <<EOF > secrets/cert-manager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-route53-credentials-cert-manager
  # ClusterIssuers look up secrets in cert-manager's own namespace (kube-system here)
  namespace: kube-system
data:
  access-key-id: $(echo -n $LE_ACCESS_KEY_ID | base64)
  secret-access-key: $(echo -n $LE_SECRET_KEY | base64)
EOF

kubectl apply -f secrets/cert-manager-secret.yaml
```

```bash
cat <<EOF > secrets/route53-cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: nginx@ducoterra.net
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - selector:
          dnsZones:
            - "reeseapps.com"
        dns01:
          route53:
            region: us-east-1
            hostedZoneID: Z012820733346FJ0U4FUF
            accessKeyID: ${LE_ACCESS_KEY_ID}
            secretAccessKeySecretRef:
              name: prod-route53-credentials-cert-manager
              key: secret-access-key
EOF

kubectl apply -f secrets/route53-cluster-issuer.yaml
```

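With ingress-nginx, external-dns, and the issuer all in place, a typical Ingress ends up looking roughly like this (the hostname and service name are placeholders; the `cert-manager.io/cluster-issuer` annotation matches the ClusterIssuer above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - demo.reeseapps.com
      secretName: demo-tls
  rules:
    - host: demo.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo
                port:
                  number: 80
```
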
You can test if your ingress is working with:

```bash
# Navigate to demo.reeseapps.com
kubectl apply -f infrastructure/graduated/k3s/tests/ingress-nginx-test.yaml

# Cleanup
kubectl delete -f infrastructure/graduated/k3s/tests/ingress-nginx-test.yaml
```

## Test Minecraft Server

```bash
helm upgrade --install minecraft kubernetes/graduated/minecraft -n minecraft --create-namespace
```

## Automatic Updates

<https://docs.k3s.io/upgrades/automated>

```bash
kubectl create namespace system-upgrade
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f infrastructure/graduated/k3s/upgrade-plan.yaml

# Check plan
kubectl get plan -n system-upgrade
```

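`upgrade-plan.yaml` lives in this repo; for a server node, a plan generally looks something like this sketch (adapted from the k3s docs linked above; the channel and node selector are assumptions):

```yaml
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
```
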
## Database Backups

<https://docs.k3s.io/cli/etcd-snapshot>

Note: you must back up `/var/lib/rancher/k3s/server/token` and use its contents as the token when restoring the backup, as snapshot data is encrypted with that token.

Backups are saved to `/var/lib/rancher/k3s/server/db/snapshots/` by default.

```bash
k3s etcd-snapshot save
k3s etcd-snapshot list

k3s server \
    --cluster-reset \
    --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-kube-1720459685
```

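A sketch of how the backed-up token from the note above comes into play when restoring onto a rebuilt node (the token file path and snapshot name are placeholders):

```bash
k3s server \
    --cluster-reset \
    --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/<snapshot> \
    --token "$(cat /path/to/backed-up/token)"
```
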
## Uninstall

```bash
/usr/local/bin/k3s-uninstall.sh
```