move to project lifecycle structure

2024-07-21 02:20:48 -04:00
parent fd1fde499d
commit e6aff894e8
121 changed files with 6234 additions and 196 deletions


@@ -0,0 +1,428 @@
# K3S
- [K3S](#k3s)
  - [Guide](#guide)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
  - [Kube Credentials](#kube-credentials)
  - [Storage](#storage)
  - [Coredns](#coredns)
  - [Metal LB](#metal-lb)
    - [VLAN Setup](#vlan-setup)
    - [Installation](#installation)
  - [External DNS](#external-dns)
    - [Credentials](#credentials)
    - [Annotation](#annotation)
  - [Nginx Ingress](#nginx-ingress)
  - [Cert Manager](#cert-manager)
  - [Test Minecraft Server](#test-minecraft-server)
  - [Automatic Updates](#automatic-updates)
  - [Database Backups](#database-backups)
    - [Quickstart](#quickstart)
  - [Help](#help)
    - [Troubleshooting](#troubleshooting)
      - [Deleting a stuck namespace](#deleting-a-stuck-namespace)
      - [Fixing a bad volume](#fixing-a-bad-volume)
      - [Mounting an ix-application volume from truenas](#mounting-an-ix-application-volume-from-truenas)
      - [Mounting a volume](#mounting-a-volume)
  - [Uninstall](#uninstall)
## Guide
1. Configure Host
2. Install CoreDNS for inter-container discovery
3. Install Metal LB for load balancer IP address assignment
4. Install External DNS for load balancer IP and ingress DNS records
5. Install Nginx Ingress for http services
6. Install Cert Manager for automatic Let's Encrypt certificates for Ingress nginx
7. Install longhorn storage for automatic PVC creation and management
8. Set up automatic database backups
## Disable Firewalld
<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>
Disable firewalld. You could add rules for each service, but every time you open a port
from a container you'd need to add another firewalld rule.
You can disable firewalld from the web interface, or from a shell as shown below.
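A minimal CLI equivalent (stops the service and prevents it from starting at boot):
```bash
sudo systemctl disable --now firewalld
```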
## Set SELinux to Permissive
K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that, however. We'll set it to permissive, and you can re-enable enforcing once you've added all
the rules you need to keep your services running.
Set SELinux to permissive by editing `/etc/selinux/config` and setting `SELINUX=permissive`.
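A quick sketch that applies the change immediately and persists it (assumes SELinux is currently enforcing):
```bash
# Switch to permissive for the running system (does not survive a reboot)
sudo setenforce 0
# Persist the change across reboots
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
```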
## Install K3S (Single Node)
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
"--cluster-init" \
"--flannel-ipv6-masq" \
"--disable" \
"traefik" \
"--disable" \
"servicelb" \
"--disable" \
"coredns" \
"--disable" \
"local-storage" \
"--tls-san" \
"kube.reeselink.com" \
"--cluster-cidr" \
"10.42.0.0/16,fd02:c91e:56f4::/56" \
"--service-cidr" \
"10.43.0.0/16,fd02:c91e:56f5::/112" \
"--cluster-dns" \
"fd02:c91e:56f5::10"
```
## Kube Credentials
On the operator
```bash
# Copy the kube config down
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config
# Edit the server to match the remote address.
```
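After editing the `server:` address in the copied config, a quick sanity check (path matches the copy above):
```bash
export KUBECONFIG=~/.kube/admin-kube-config
kubectl get nodes
```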
## Storage
1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit `/etc/fstab` to mount your drive to `/var/lib/rancher/k3s/storage` (an example entry is shown below the list)
3. `systemctl daemon-reload`
4. `mount -a`
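An example fstab entry; the UUID and filesystem type are placeholders for your own drive:
```bash
# Append an /etc/fstab entry for the storage drive (UUID and fs type are placeholders)
echo 'UUID=<your-drive-uuid> /var/lib/rancher/k3s/storage xfs defaults,nofail 0 0' | sudo tee -a /etc/fstab
```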
<https://github.com/rancher/local-path-provisioner/tree/master/deploy/chart/local-path-provisioner>
```bash
# Download the updated template from github
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.28" > local-path-provisioner/local-path-storage.yaml
# Apply customizations (ssd/hdd storage, read write many support)
kubectl kustomize local-path-provisioner | kubectl apply -f -
# Create test pod
kubectl apply -f k3s/tests/local-storage-test.yaml
```
## Coredns
1. Edit `coredns/values.yaml` to ensure the forward nameserver is correct.
```bash
# Install CoreDNS
helm upgrade --install \
--namespace=kube-system \
--values coredns/values.yaml \
coredns coredns/coredns
# Test DNS works
kubectl run -it --rm \
--restart=Never \
--image=infoblox/dnstools:latest \
dnstools
```
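From the `dnstools` prompt, a couple of lookups confirm both cluster and upstream resolution (`kubernetes.default` is the standard in-cluster API service):
```bash
host kubernetes.default.svc.cluster.local
host google.com
```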
## Metal LB
### VLAN Setup
Before working with MetalLB you'll need at least one available VLAN. On UniFi equipment
this is accomplished by creating a new network. Don't assign it to anything.
On the Linux machine you can use nmcli or Cockpit to configure a new VLAN network interface (an nmcli sketch follows the list).
With Cockpit:
1. Add a new VLAN network
2. The parent should be the physical adapter connected to your switch
3. Set the VLAN ID to the VLAN number of your new UniFi network
4. Click create
5. Click into the new network
6. Turn off IPv4 and IPv6 DNS (otherwise the extra nameservers will exceed the resolv.conf limit)
7. Turn on the network interface
8. Attempt to ping the acquired address(es)
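For reference, a minimal nmcli equivalent; the connection name, interface name, and VLAN ID are placeholders:
```bash
# Create a VLAN interface on top of the physical NIC
nmcli connection add type vlan con-name vlan20 ifname eth0.20 dev eth0 id 20
# Don't take DNS servers from this network (keeps resolv.conf under the limit)
nmcli connection modify vlan20 ipv4.ignore-auto-dns yes ipv6.ignore-auto-dns yes
# Bring the interface up
nmcli connection up vlan20
```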
### Installation
We'll be swapping K3S's default load balancer with MetalLB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little
more complicated: you'll need special annotations (see below), but it's otherwise a well-tested,
stable load balancing service with features to grow into.
MetalLB is pretty cool. It works via L2 advertisement or BGP. We won't be using BGP, so let's
focus on L2.
When we connect our nodes to a network we give them an IP address range: ex. `192.168.122.20/24`.
This range represents all the available addresses the node could be assigned. Usually we assign
a single "static" IP address for our node and direct traffic to it by port forwarding from our
router. This is fine for single nodes - but what if we have a cluster of nodes and we don't want
our service to disappear just because one node is down for maintenance?
This is where L2 advertising comes in. MetalLB will assign a static IP address from a given
pool to any arbitrary node, then advertise that node's MAC address as the location for the
IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP
address, effectively moving the IP to another node. This isn't really "load balancing" but
"failover". Fortunately, that's exactly what we're looking for.
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
--namespace kube-system \
metallb/metallb
```
MetalLB doesn't know which IP addresses are available for it to allocate, so we'll have
to provide it with a list. The `metallb/addresspool.yaml` file has one IP address (we'll get to
IP address sharing in a second), an unassigned address not allocated to any of our
nodes. Note that if you have many public IPs which all point to the same router or virtual network,
you can list them all. We're only going to use one because we want to port forward from our router.
```bash
# create the metallb allocation pool
kubectl apply -f metallb/addresspool.yaml
```
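For reference, a minimal pool plus L2 advertisement looks roughly like the following; the pool name, namespace, and address are placeholders for what's actually in `metallb/addresspool.yaml`:
```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external
  namespace: kube-system
spec:
  addresses:
    - 203.0.113.10/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: kube-system
spec:
  ipAddressPools:
    - external
```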
You'll need to annotate your service as follows if you want an external IP:
```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "external"
    # or
    metallb.universe.tf/address-pool: "internal"
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
```
## External DNS
<https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md>
### Credentials
1. Generate credentials for the cluster
```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeselink
SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = $(echo $ACCESS_KEY_ID)
aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
EOF
kubectl create secret generic external-dns \
--namespace kube-system --from-file secrets/externaldns-credentials
kubectl apply -f external-dns/sa.yaml
kubectl apply -f external-dns/deploy.yaml
```
### Annotation
```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
```
## Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since
it's a little bit more configurable than traefik (though don't sell traefik short; it's really
good, just finicky when you have use cases they haven't explicitly coded for).
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
ingress-nginx \
ingress-nginx/ingress-nginx \
--values ingress-nginx/values.yaml \
--namespace kube-system
```
## Cert Manager
Install cert-manager
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
cert-manager jetstack/cert-manager \
--namespace kube-system \
--set crds.enabled=true
```
Create the Let's Encrypt issuer (Route53 DNS)
```bash
export LE_ACCESS_KEY_ID=
export LE_SECRET_KEY=
cat <<EOF > secrets/cert-manager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-route53-credentials-cert-manager
  # assumption: cert-manager (installed in kube-system above) resolves ClusterIssuer secrets from its own namespace
  namespace: kube-system
data:
  access-key-id: $(echo -n "$LE_ACCESS_KEY_ID" | base64)
  secret-access-key: $(echo -n "$LE_SECRET_KEY" | base64)
EOF
kubectl apply -f secrets/cert-manager-secret.yaml
```
```bash
cat <<EOF > secrets/route53-cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: nginx@ducoterra.net
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - selector:
          dnsZones:
            - "reeseapps.com"
        dns01:
          route53:
            region: us-east-1
            hostedZoneID: Z012820733346FJ0U4FUF
            accessKeyID: ${LE_ACCESS_KEY_ID}
            secretAccessKeySecretRef:
              name: prod-route53-credentials-cert-manager
              key: secret-access-key
EOF
kubectl apply -f secrets/route53-cluster-issuer.yaml
```
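To check that issuance is working once an Ingress requests a certificate, cert-manager's resources can be inspected directly:
```bash
kubectl get clusterissuer letsencrypt
kubectl get certificate --all-namespaces
# shows DNS-01 progress if an issuance appears stuck
kubectl describe challenge --all-namespaces
```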
You can test if your ingress is working with:
```bash
# Navigate to demo.reeseapps.com
kubectl apply -f k3s/tests/ingress-nginx-test.yaml
# Cleanup
kubectl delete -f k3s/tests/ingress-nginx-test.yaml
```
## Test Minecraft Server
```bash
helm upgrade --install minecraft ./minecraft -n minecraft --create-namespace
```
## Automatic Updates
<https://docs.k3s.io/upgrades/automated>
```bash
kubectl create namespace system-upgrade
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f k3s/upgrade-plan.yaml
# Check plan
kubectl get plan -n system-upgrade
```
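The controller runs upgrades as Jobs in the `system-upgrade` namespace; you can follow progress with:
```bash
kubectl get jobs -n system-upgrade
# the VERSION column updates as nodes finish upgrading
kubectl get nodes
```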
## Database Backups
<https://docs.k3s.io/cli/etcd-snapshot>
Note: you must back up `/var/lib/rancher/k3s/server/token`
and use its contents as the token when restoring the backup, since snapshot data is encrypted with that token.
Backups are saved to `/var/lib/rancher/k3s/server/db/snapshots/` by default.
```bash
k3s etcd-snapshot save
k3s etcd-snapshot list
k3s server \
--cluster-reset \
--cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-kube-1720459685
```
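Since a restore needs that token, it's worth copying it off the node alongside your snapshots; a sketch (host alias and destination path are placeholders):
```bash
scp kube:/var/lib/rancher/k3s/server/token ~/backups/k3s-server-token
```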
### Quickstart
```bash
# Create certsigner pod for all other operations
./setup.sh <server_fqdn>
# Create a user, use "admin" to create an admin user
./upsertuser.sh <ssh_address> <server_fqdn (for kubectl)> <user>
# Remove a user, their namespace, and their access
./removeuserspace <server_fqdn> <user>
```
## Help
### Troubleshooting
#### Deleting a stuck namespace
```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```
#### Fixing a bad volume
```bash
# -L zeroes the XFS log; only use this when the filesystem won't mount, since unwritten changes are lost
xfs_repair -L /dev/sdg
```
#### Mounting an ix-application volume from truenas
```bash
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
#"unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```
#### Mounting a volume
```bash
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc
# unmount
umount /mnt/dcsi_pvc
```
## Uninstall
```bash
/usr/local/bin/k3s-uninstall.sh
```


@@ -0,0 +1,93 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-iscsi-enc0
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-iscsi-enc1
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-nfs-enc1
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-nfs-enc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: democratic-csi-test
spec:
  selector:
    matchLabels:
      app: democratic-csi-test
  template:
    metadata:
      labels:
        app: democratic-csi-test
    spec:
      containers:
        - image: debian
          command:
            - bash
            - -c
            - 'sleep infinity'
          name: democratic-csi-test
          volumeMounts:
            - mountPath: /zfs_iscsi_enc0
              name: zfs-iscsi-enc0
            - mountPath: /zfs_iscsi_enc1
              name: zfs-iscsi-enc1
            - mountPath: /zfs_nfs_enc1
              name: zfs-nfs-enc1
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "1Mi"
              cpu: "1m"
      restartPolicy: Always
      volumes:
        - name: zfs-iscsi-enc0
          persistentVolumeClaim:
            claimName: zfs-iscsi-enc0
        - name: zfs-iscsi-enc1
          persistentVolumeClaim:
            claimName: zfs-iscsi-enc1
        - name: zfs-nfs-enc1
          persistentVolumeClaim:
            claimName: zfs-nfs-enc1


@@ -0,0 +1,45 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  template:
    metadata:
      labels:
        app: ffmpeg
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ffmpeg
      containers:
        - name: ffmpeg
          image: linuxserver/ffmpeg:latest
          volumeMounts:
            - mountPath: /config
              name: data
          command:
            - /bin/bash
            - -c
            - 'sleep infinity'
          resources:
            limits:
              memory: "2Gi"
              cpu: "8"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ffmpeg
spec:
  storageClassName: zfs-iscsi-enc0-ext4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi


@@ -0,0 +1,66 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: ingress-nginx-demo
  ports:
    - name: ingress-nginx-demo
      protocol: TCP
      port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    external-dns.alpha.kubernetes.io/ttl: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-demo
                port:
                  number: 80
  tls:
    - hosts:
        - demo.reeseapps.com
      secretName: ingress-nginx-demo-tls-cert


@@ -0,0 +1,71 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-test
  namespace: default
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hdd-test
  namespace: default
spec:
  storageClassName: hdd
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-storage-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: local-storage-test
  template:
    metadata:
      labels:
        app: local-storage-test
    spec:
      containers:
        - image: debian
          command:
            - bash
            - -c
            - 'sleep infinity'
          name: local-storage-test
          volumeMounts:
            - mountPath: /ssd
              name: ssd
            - mountPath: /hdd
              name: hdd
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "1Mi"
              cpu: "1m"
      restartPolicy: Always
      volumes:
        - name: hdd
          persistentVolumeClaim:
            claimName: hdd-test
        - name: ssd
          persistentVolumeClaim:
            claimName: ssd-test


@@ -0,0 +1,101 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo-1
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo-1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo-1
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
          resources:
            requests:
              memory: "100Mi"
              cpu: "1m"
            limits:
              memory: "256Mi"
              cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-1
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "production"
    metallb.universe.tf/address-pool: production
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 8001
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo-2
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo-2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo-2
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
          resources:
            requests:
              memory: "100Mi"
              cpu: "1m"
            limits:
              memory: "256Mi"
              cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-2
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "production"
    metallb.universe.tf/address-pool: production
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 8002
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-2


@@ -0,0 +1,49 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-path"
        resources:
          requests:
            storage: 1Gi


@@ -0,0 +1,19 @@
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable