update readme with gitea garbage

ducoterra committed 2023-04-06 10:58:13 -04:00
parent 4afa2c621d, commit 403be9ba0a
11 changed files with 210 additions and 406 deletions

README.md

@@ -24,6 +24,8 @@ curl -sfL https://get.k3s.io | sh -s - \
"local-storage" \
"--disable" \
"coredns" \
"--disable" \
"servicelb" \
"--cluster-dns" \
"10.43.0.10"
```
@@ -66,6 +68,60 @@ You can test your dns config with
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
```
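From inside the dnstools shell, a quick lookup against the cluster DNS confirms CoreDNS is answering on the address we configured above (the target name assumes the default cluster.local domain):
```bash
# run inside the dnstools pod
nslookup kubernetes.default.svc.cluster.local 10.43.0.10
```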
### MetalLB
We'll be swapping K3S's default load balancer for MetalLB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load balanced services. MetalLB does make things a little
more complicated (you'll need special annotations; see below), but it's otherwise a well-tested,
stable load balancer with features to grow into.
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
--namespace metallb \
--create-namespace \
metallb/metallb
```
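Before creating the address pool, it's worth checking that the controller and speaker pods came up:
```bash
kubectl get pods -n metallb
```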
MetalLB doesn't know which IP addresses are available for it to allocate, so we'll have
to provide it with a list. The `metallb-addresspool.yaml` file contains a single IP address
(we'll get to IP address sharing in a second): the IP address of our node.
```bash
# create the metallb allocation pool
kubectl apply -f metallb-addresspool.yaml
```
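The pool file itself isn't part of this diff. With MetalLB 0.13+ it would be an `IPAddressPool` plus an `L2Advertisement`; a minimal sketch, assuming a node IP of 192.168.1.240 (substitute your own):
```yaml
# metallb-addresspool.yaml (sketch; 192.168.1.240 is a placeholder node IP)
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb
spec:
  addresses:
    - 192.168.1.240/32
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default-l2
  namespace: metallb
spec:
  ipAddressPools:
    - default-pool
```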
To allow multiple services to allocate the same IP address, we'll need to annotate them
accordingly. MetalLB will allow services to share an IP if:
- They both have the same sharing key.
- They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
- They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical).
See https://metallb.org/usage/#ip-address-sharing for more info.
You'll need to annotate your service as follows if you want an external IP:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
    metallb.universe.tf/allow-shared-ip: "containers"
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: {{ .Values.ports.containerPort }}
      targetPort: {{ .Values.ports.targetPort }}
      name: {{ .Release.Name }}
  type: LoadBalancer
```
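Once two services carry the same sharing key, both should report the same external address:
```bash
# every service sharing the "containers" key should show the same EXTERNAL-IP
kubectl get svc --all-namespaces | grep LoadBalancer
```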
### Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since
@@ -77,9 +133,10 @@ good. Just finicky when you have use cases they haven't explicitly coded for).
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
ingress-nginx \
ingress-nginx/ingress-nginx \
--values ingress-nginx-values.yaml \
--namespace ingress-nginx \
--create-namespace
```
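`ingress-nginx-values.yaml` isn't shown in this diff, but since the controller's service shares the node IP, it presumably carries the same MetalLB annotation. A minimal sketch of just that piece (everything here is an assumption about the file's contents):
```yaml
# ingress-nginx-values.yaml (sketch of the MetalLB-related settings only)
controller:
  service:
    annotations:
      metallb.universe.tf/allow-shared-ip: "containers"
    externalTrafficPolicy: Cluster
```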
@@ -183,8 +240,68 @@ You can test that things worked with:
kubectl apply -f democratic-csi-pvc-test.yaml
```
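`democratic-csi-pvc-test.yaml` isn't included in the diff either; a minimal sketch of such a test claim, assuming the driver registered a storage class named `iscsi` (use whatever name your driver config created):
```yaml
# democratic-csi-pvc-test.yaml (sketch; storageClassName is an assumption)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
spec:
  storageClassName: iscsi
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```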
Because iSCSI mounts block devices, troubleshooting mount issues or data corruption
and exploring PVC contents must happen on the client device. Here are a few cheat-sheet
commands to make that easier:
```bash
# discover all targets on the server
iscsiadm --mode discovery \
--type sendtargets \
--portal democratic-csi-server.reeselink.com:3260
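# pick one of the IQNs printed above and assign it to ISCSI_TARGET below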
export ISCSI_TARGET=
# delete the discovered targets
iscsiadm --mode discovery \
--portal democratic-csi-server.reeselink.com:3260 \
--op delete
# view discovered targets
iscsiadm --mode node
# view current session
iscsiadm --mode session
# prevent automatic login
iscsiadm --mode node \
--portal democratic-csi-server.reeselink.com:3260 \
--op update \
--name node.startup \
--value manual
# connect a target
iscsiadm --mode node \
--login \
--portal democratic-csi-server.reeselink.com:3260 \
--targetname $ISCSI_TARGET
# disconnect a target
# you might have to do this if pods can't mount their volumes.
# manually connecting a target tends to make it unavailable to the pods, since there
# will be two sessions with the same target name.
iscsiadm --mode node \
--logout \
--portal democratic-csi-server.reeselink.com:3260 \
--targetname $ISCSI_TARGET
# view all connected disks
ls /dev/disk/by-path/
# mount a disk
mount -t xfs /dev/disk/by-path/... /mnt/iscsi
# emergency: if by-path isn't available, map sessions to disks directly
# (look for "Attached scsi disk")
iscsiadm --mode session -P 3 | grep Target -A 2 -B 2
```
### Dashboard
The Kubernetes dashboard isn't all that useful, but it can sometimes give you a good
visual breakdown when things are going wrong. It's sometimes faster than running
`kubectl get` commands over and over.
Create the dashboard and an admin user with:
```bash
@@ -206,7 +323,14 @@ kubectl proxy
### Nextcloud
The first chart we'll deploy is nextcloud. This is a custom chart because Nextcloud
doesn't support helm installation natively (yet). There is a native Docker image and
really detailed installation instructions, so we can pretty easily piece together what's
required.
This image runs the nextcloud cron job automatically and creates random secrets for all
infrastructure: very helpful for a secure deployment, not so helpful for migrating
clusters. You'll want to export the secrets and save them in a secure location.
```bash
helm upgrade --install \
@@ -216,8 +340,7 @@ helm upgrade --install \
--create-namespace
```
Need to copy lots of files? Copy them to the user data dir and then run
```bash
./occ files:scan --all
```
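In this cluster `occ` lives inside the app container, so the scan has to go through `kubectl exec`; a sketch, assuming the deployment and namespace are both named `nextcloud` and the image's usual `www-data` user:
```bash
kubectl exec -it -n nextcloud deploy/nextcloud -- \
su -s /bin/sh -c "php occ files:scan --all" www-data
```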
@@ -225,7 +348,11 @@ dir and then run
### Gitea
Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/). We're not
going to modify much, but we are going to pin some of the default values in case
they decide to change things. This is the first chart (besides ingress-nginx) where
we need to pay attention to the MetalLB annotation; it's set in the values.yaml
file.
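For reference, the MetalLB-relevant part of a values file would look roughly like this (a sketch; the chart's `service.ssh` block is the one that needs a LoadBalancer for git-over-SSH):
```yaml
# gitea-values.yaml (sketch of the MetalLB-related settings only)
service:
  ssh:
    type: LoadBalancer
    port: 22
    annotations:
      metallb.universe.tf/allow-shared-ip: "containers"
```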
```bash
helm repo add gitea-charts https://dl.gitea.io/charts/
@@ -233,14 +360,46 @@ helm repo update
helm upgrade --install \
gitea \
gitea-charts/gitea \
--values secrets/gitea-values.yaml \
--namespace gitea \
--create-namespace
```
If you need to back up your database you can run:
```bash
# Backup
# note: no -t flag; a TTY would add carriage returns to the dump
kubectl exec -n gitea gitea-postgresql-0 -- \
pg_dump \
--no-owner \
--dbname=postgresql://gitea:gitea@localhost:5432/gitea > gitea_backup.db
# Take gitea down to zero pods
kubectl scale statefulset gitea -n gitea --replicas 0
# Drop and recreate the database (lines after the psql command are typed at its prompt)
kubectl exec -it -n gitea gitea-postgresql-0 -- psql -U gitea
\c postgres
DROP DATABASE gitea;
CREATE DATABASE gitea WITH OWNER gitea TEMPLATE template0 ENCODING UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
exit
# Restore from backup (-i streams stdin; -t would break the redirect)
kubectl exec -i -n gitea gitea-postgresql-0 -- \
psql \
postgresql://gitea:gitea@localhost:5432/gitea < gitea_backup.db
# Restore gitea to 1 pod
kubectl scale statefulset gitea -n gitea --replicas 1
```
### Minecraft
Minecraft is available through the custom helm chart (including a server downloader). The example
below installs nimcraft. For each installation you'll want to create your own values.yaml
with a new port. The server downloader is called "minecraft_get_server" and is available on
[GitHub](https://github.com/ducoterra/minecraft_get_server).
```bash
helm upgrade --install \
@@ -260,3 +419,29 @@ kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```
Fixing a bad volume
```bash
# -L zeroes the metadata log (recent writes may be lost); last resort only
xfs_repair -L /dev/sdg
```
Mounting an ix-application volume from truenas
```bash
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
#"unset" the mountpoing (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```
Mounting a volume
```bash
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc
# unmount
umount /mnt/dcsi_pvc
```