Containers!
A project to store container-based hosting stuff.
Platform
Before you begin, be sure to take a look at the Fedora Server Config readme, which explains how to set up a basic Fedora server hosting platform with certbot.
K3S
Install K3S
We're going to be tweaking some installation parameters, so if you already have k3s installed you can either uninstall it or skip these steps.
This installation disables traefik, local-storage, and coredns (we don't need the first two, and we'll deploy our own coredns below):
curl -sfL https://get.k3s.io | sh -s - \
"--disable" \
"traefik" \
"--disable" \
"local-storage" \
"--disable" \
"coredns" \
"--cluster-dns" \
"10.43.0.10"
Now you can change the ownership of (and copy) the k3s.yaml file:
sudo chown ducoterra /etc/rancher/k3s/k3s.yaml
scp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Edit ~/.kube/config and change 127.0.0.1 to containers.reeselink.com
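You should now be able to reach the cluster from your workstation:
kubectl get nodes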
Database Backups
We're using SQLite (because it's all we really need). The db is stored at
/var/lib/rancher/k3s/server/db/. You can just copy that folder to back up the database
and restore it by copying it back. Note that you must also copy /var/lib/rancher/k3s/server/token
and use its contents as the token when restoring the backup, since the data is encrypted with that token.
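A minimal sketch of that backup (the archive name is arbitrary; briefly stopping k3s keeps the SQLite files consistent):
sudo systemctl stop k3s
sudo tar -czf k3s-backup.tar.gz /var/lib/rancher/k3s/server/db/ /var/lib/rancher/k3s/server/token
sudo systemctl start k3s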
CoreDNS
We'll use our own coredns server so we can add custom hosts. This prevents the cluster from collapsing if the internet drops out (something that apparently happens quite frequently).
helm repo add coredns https://coredns.github.io/helm
helm repo update
helm upgrade --install \
--namespace=kube-system \
--values coredns-values.yaml \
coredns \
coredns/coredns
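For reference, the custom hosts live in coredns-values.yaml via CoreDNS's hosts plugin. An illustrative excerpt following the coredns chart's values structure (the hostname/IP pair is a placeholder):
servers:
- zones:
  - zone: .
  port: 53
  plugins:
  # Static entries that keep resolving even when the internet is down
  - name: hosts
    configBlock: |-
      172.20.0.1 driveripper.reeselink.com
      fallthrough
  # Everything else falls through to the upstream resolvers
  - name: forward
    parameters: . /etc/resolv.conf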
You can test your dns config with:
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
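Once you're in the dnstools shell, check both a cluster name and a custom host (the second only resolves if you added it to your hosts block):
host kubernetes.default.svc.cluster.local
host driveripper.reeselink.com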
Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since it's a little more configurable than traefik (though don't sell traefik short; it's really good, just finicky when you have use cases they haven't explicitly coded for).
Install nginx:
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install \
ingress-nginx \
ingress-nginx/ingress-nginx \
--namespace ingress-nginx \
--create-namespace
Install cert-manager:
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.11.0 \
--set installCRDs=true
Create the Let's Encrypt issuer:
kubectl apply -f letsencrypt-issuer.yaml
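For reference, a typical cert-manager ClusterIssuer for Let's Encrypt looks something like this; treat it as an illustrative sketch of what letsencrypt-issuer.yaml contains (the name and email are placeholders):
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # Production ACME endpoint; swap in the staging URL while testing
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
    - http01:
        ingress:
          class: nginx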
You can test if your ingress is working with:
kubectl apply -f ingress-nginx-test.yaml
Then navigate to ingress-nginx-test.reeseapps.com.
Storage
We'll be installing democratic-csi as our volume manager. Specifically, we'll be installing the freenas-api-nfs driver (configured in truenas-nfs.yaml) along with two iSCSI drivers (configured in truenas-iscsi-enc0.yaml and truenas-iscsi-enc1.yaml).
The nfs driver will provision an nfs store owned by user 3000 (kube). You may have to make that user on Truenas. The nfs share created will be world-read/write, so any user can write to it. Users that write to the share will have their uid/gid mapped to Truenas, so if user 33 writes a file to the nfs share it will show up as owned by user 33 on Truenas.
The iscsi driver will require a portal ID. This is NOT the number reflected in the UI. The most reliable way (seriously) to get the real ID is to open the network monitor in your browser, reload the truenas UI, find the websocket connection and click on it, create the portal, and then click on the server response. It'll look something like:
{"msg": "added", "collection": "iscsi.portal.query", "id": 7, "fields": {"id": 7, "tag": 1, "comment": "democratic-csi", "listen": [{"ip": "172.20.0.1", "port": 3260}], "discovery_authmethod": "NONE", "discovery_authgroup": null}}
The initiator group IDs seem to line up.
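If you have shell access on TrueNAS, you can also ask the middleware directly instead of watching websocket traffic; the method name matches the collection in the response above:
midclt call iscsi.portal.query | jq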
It's good practice to have separate hostnames for your share export and your truenas server. This way you can have a direct link without worrying about changing the user-facing hostname. For example: your truenas server might be driveripper.reeselink.com and your kube server might be containers.reeselink.com. You should also have a democratic-csi-server.reeselink.com and a democratic-csi-client-1.reeselink.com which might be on 172.20.0.1 and 172.20.0.2.
https://github.com/democratic-csi/democratic-csi
iSCSI requires a bit of server config before proceeding:
# Install the following system packages
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
# Enable multipathing
sudo mpathconf --enable --with_multipathd y
# Ensure that iscsid and multipathd are running
sudo systemctl enable iscsid multipathd
sudo systemctl start iscsid multipathd
# Start and enable iscsi
sudo systemctl enable iscsi
sudo systemctl start iscsi
And now you can install the drivers:
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
# enc0 bulk storage (iscsi)
helm upgrade \
--install \
--values truenas-iscsi-enc0.yaml \
--namespace democratic-csi \
--create-namespace \
zfs-iscsi-enc0 democratic-csi/democratic-csi
# enc1 fast storage (iscsi)
helm upgrade \
--install \
--values truenas-iscsi-enc1.yaml \
--namespace democratic-csi \
--create-namespace \
zfs-iscsi-enc1 democratic-csi/democratic-csi
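The freenas-api-nfs driver mentioned above installs the same way; a sketch using truenas-nfs.yaml (the release name zfs-nfs is an assumption):
helm upgrade \
--install \
--values truenas-nfs.yaml \
--namespace democratic-csi \
--create-namespace \
zfs-nfs democratic-csi/democratic-csi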
You can test that things worked with:
kubectl apply -f democratic-csi-pvc-test.yaml
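The test PVC should go Bound within a few seconds:
kubectl get pvc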
Dashboard
Create the dashboard and an admin user with:
helm upgrade \
--install \
--namespace kubernetes-dashboard \
--create-namespace \
dashboard-user ./helm/dashboard-user
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Then log in with the following:
kubectl -n kubernetes-dashboard create token admin-user
kubectl proxy
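With the proxy running, the dashboard lives at the standard proxy path; paste the token from the previous command at the login screen:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/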
Nextcloud
The first chart we'll deploy is nextcloud.
helm upgrade --install \
nextcloud \
./helm/nextcloud \
--namespace nextcloud \
--create-namespace
Need to copy lots of files? Copy them into the user's data directory and then run:
./occ files:scan --all
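occ lives inside the Nextcloud container and has to run as the web user; one way to invoke it from outside, assuming the chart creates a deployment named nextcloud:
kubectl -n nextcloud exec -it deploy/nextcloud -- \
su -s /bin/sh -c "cd /var/www/html && php occ files:scan --all" www-data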
Gitea
Gitea provides its own helm chart:
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm upgrade --install \
gitea \
gitea-charts/gitea \
--values gitea-values.yaml \
--namespace gitea \
--create-namespace
Minecraft
Minecraft is available through a custom helm chart (which includes a server downloader):
helm upgrade --install \
nimcraft \
./helm/minecraft \
--namespace nimcraft \
--create-namespace
Troubleshooting
Deleting a stuck namespace
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
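When you're done, stop the proxy you backgrounded earlier:
kill %1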