update readme with gitea garbage

ducoterra
2023-04-06 10:58:13 -04:00
parent 4afa2c621d
commit 403be9ba0a
11 changed files with 210 additions and 406 deletions

.gitignore vendored Normal file

@@ -0,0 +1 @@
secrets/

README.md

@@ -24,6 +24,8 @@ curl -sfL https://get.k3s.io | sh -s - \
"local-storage" \
"--disable" \
"coredns" \
"--disable" \
"servicelb" \
"--cluster-dns" \
"10.43.0.10"
```
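A quick sanity check once the node is up: with servicelb disabled, k3s should no longer create its `svclb-*` pods for LoadBalancer services.
```bash
# no svclb-* pods should show up in kube-system with servicelb disabled
kubectl get pods -n kube-system
```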
@@ -66,6 +68,60 @@ You can test your dns config with
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
```
### MetalLB
We'll be swapping out K3s's default load balancer (ServiceLB) for MetalLB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little
more complicated: you'll need special annotations (see below), but it's otherwise a well-tested,
stable load balancing service with features to grow into.
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
--namespace metallb \
--create-namespace \
metallb/metallb
```
MetalLB doesn't know which IP addresses are available for it to allocate, so we'll have
to provide it with a list. The `metallb-addresspool.yaml` file contains a single IP address (we'll get to
IP address sharing in a second): the IP address of our node.
```bash
# create the metallb allocation pool
kubectl apply -f metallb-addresspool.yaml
```
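You can confirm the pool registered (assuming the `metallb` namespace used above):
```bash
# the pool should list the address from metallb-addresspool.yaml
kubectl get ipaddresspools.metallb.io -n metallb
```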
To let services share the same IP address, we'll need to annotate them
accordingly. MetalLB will allow two services to allocate the same IP if:
- They both have the same sharing key.
- They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
- They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical).
See https://metallb.org/usage/#ip-address-sharing for more info.
You'll need to annotate your service as follows if you want an external IP:
```yaml
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
annotations:
metallb.universe.tf/allow-shared-ip: "containers"
spec:
externalTrafficPolicy: Cluster
selector:
app: {{ .Release.Name }}
ports:
- port: {{ .Values.ports.containerPort }}
targetPort: {{ .Values.ports.targetPort }}
name: {{ .Release.Name }}
type: LoadBalancer
```
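Once two services carry the same sharing key, they should report the same external address; a quick way to eyeball that:
```bash
# every LoadBalancer service sharing the "containers" key should show the same EXTERNAL-IP
kubectl get svc --all-namespaces | grep LoadBalancer
```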
### Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since
@@ -77,9 +133,10 @@ good. Just finnicky when you have use cases they haven't explicitly coded for).
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
ingress-nginx \
ingress-nginx/ingress-nginx \
--values ingress-nginx-values.yaml \
--namespace ingress-nginx \
--create-namespace
```
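As a rough smoke test, you can hit the controller on the pool address (10.1.2.13 in `metallb-addresspool.yaml`; substitute your own); an nginx 404 from the default backend means MetalLB handed the IP to ingress-nginx.
```bash
# expect 404 from the ingress-nginx default backend
curl -sk -o /dev/null -w "%{http_code}\n" https://10.1.2.13
```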
@@ -183,8 +240,68 @@ You can test that things worked with:
kubectl apply -f democratic-csi-pvc-test.yaml
```
Because iSCSI mounts block devices, troubleshooting mount issues, data corruption,
and exploring PVC contents must happen on the client device. Here are a few cheat-sheet
commands to make things easier:
```bash
# discover all targets on the server
iscsiadm --mode discovery \
--type sendtargets \
--portal democratic-csi-server.reeselink.com:3260
export ISCSI_TARGET=
# delete the discovered targets
iscsiadm --mode discovery \
--portal democratic-csi-server.reeselink.com:3260 \
--op delete
# view discovered targets
iscsiadm --mode node
# view current session
iscsiadm --mode session
# prevent automatic login
iscsiadm --mode node \
--portal democratic-csi-server.reeselink.com:3260 \
--op update \
--name node.startup \
--value manual
# connect a target
iscsiadm --mode node \
--login \
--portal democratic-csi-server.reeselink.com:3260 \
--targetname $ISCSI_TARGET
# disconnect a target
# you might have to do this if pods can't mount their volumes.
# manually connecting a target tends to make it unavailable for the pods since there
# will be two targets with the same name.
iscsiadm --mode node \
--logout \
--portal democratic-csi-server.reeselink.com:3260 \
--targetname $ISCSI_TARGET
# view all connected disks
ls /dev/disk/by-path/
# mount a disk
mount -t xfs /dev/disk/by-path/... /mnt/iscsi
# emergency - by-path isn't available
# (look for "Attached scsi disk")
iscsiadm --mode session -P 3 | grep Target -A 2 -B 2
```
### Dashboard
The Kubernetes dashboard isn't all that useful, but it can sometimes give you a good
visual breakdown when things are going wrong, and it's sometimes faster than running
`kubectl get` commands over and over.
Create the dashboard and an admin user with:
```bash
@@ -206,7 +323,14 @@ kubectl proxy
### Nextcloud
The first chart we'll deploy is Nextcloud. This is a custom chart because Nextcloud
doesn't support helm installation natively (yet). There is a native Docker image and
really detailed installation instructions, so we can pretty easily piece together what's
required.
This chart runs the Nextcloud cron job automatically and creates random secrets for all
infrastructure, which is very helpful for a secure deployment but not very helpful for migrating
clusters. You'll want to export the secrets and save them in a secure location.
```bash
helm upgrade --install \
@@ -216,8 +340,7 @@ helm upgrade --install \
--create-namespace
```
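A minimal sketch for exporting those generated secrets before a migration (this assumes the release lives in a `nextcloud` namespace; adjust names to match your deployment). The `secrets/` directory is already git-ignored.
```bash
# dump every secret in the release namespace so it can be re-applied on a new cluster
kubectl get secrets -n nextcloud -o yaml > secrets/nextcloud-secrets.yaml
```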
Need to copy lots of files? Copy them to the user data dir and then run
```bash
./occ files:scan --all
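# Or, as a sketch, run occ through kubectl without opening a shell first
# (assumes a "nextcloud" namespace and a deployment named "nextcloud"; occ must run as www-data):
kubectl exec -n nextcloud deploy/nextcloud -- \
su -s /bin/sh www-data -c "php /var/www/html/occ files:scan --all"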
@@ -225,7 +348,11 @@ dir and then run
### Gitea
Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/). We're not
going to modify much, but we are going to pin some of the default values in case
upstream decides to change them. This is the first chart (besides ingress-nginx) where
we need to pay attention to the MetalLB annotation; it has already been set in the values.yaml
file.
```bash
helm repo add gitea-charts https://dl.gitea.io/charts/
@@ -233,14 +360,46 @@ helm repo update
helm upgrade --install \
gitea \
gitea-charts/gitea \
--values secrets/gitea-values.yaml \
--namespace gitea \
--create-namespace
```
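Since the values file exposes SSH as a LoadBalancer on port 2222 and carries the MetalLB shared-IP annotation, it should come up on the same address as the ingress controller. A quick check (the SSH test assumes gitea.reeseapps.com resolves to that address and you've added an SSH key to your account):
```bash
# EXTERNAL-IP should match the ingress-nginx address; SSH listens on 2222 per the values file
kubectl get svc -n gitea
ssh -T -p 2222 git@gitea.reeseapps.com
```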
If you need to back up your database you can run:
```bash
# Backup
kubectl exec -n gitea gitea-postgresql-0 -- \
pg_dump \
--no-owner \
--dbname=postgresql://gitea:gitea@localhost:5432 > gitea_backup.db
# Take gitea down to zero pods
kubectl scale statefulset gitea --replicas 0 -n gitea
# Drop the existing database (the lines below the psql command are typed at the psql prompt)
kubectl exec -it -n gitea gitea-postgresql-0 -- psql -U gitea
\c postgres
drop database gitea;
CREATE DATABASE gitea WITH OWNER gitea TEMPLATE template0 ENCODING UTF8 LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';
exit
# restore from backup
kubectl exec -i -n gitea gitea-postgresql-0 -- \
psql \
postgresql://gitea:gitea@localhost:5432 gitea < gitea_backup.db
# Restore gitea to 1 pod
kubectl scale statefulset gitea --replicas 1 -n gitea
```
### Minecraft
Minecraft is available through the custom helm chart (including a server downloader). The example
below installs nimcraft. For each installation you'll want to create your own values.yaml
with a new port. The server-downloader is called "minecraft_get_server" and is available on
[Github](https://github.com/ducoterra/minecraft_get_server).
```bash
helm upgrade --install \
@@ -260,3 +419,29 @@ kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```
Fixing a bad volume
```bash
xfs_repair -L /dev/sdg
```
Mounting an ix-applications volume from TrueNAS
```bash
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
#"unset" the mountpoing (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```
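If the dataset doesn't mount on its own after the property change, you can mount it explicitly (same dataset path as above):
```bash
# mount the dataset and confirm where it landed
zfs mount enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
zfs get mounted,mountpoint enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```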
Mounting a volume
```bash
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc
# unmount
umount /mnt/dcsi_pvc
```


@@ -1,44 +0,0 @@
ingress:
enabled: true
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.org/client-max-body-size: "0"
hosts:
- host: gitea.reeseapps.com
paths:
- path: /
pathType: Prefix
tls:
- hosts:
- gitea.reeseapps.com
secretName: gitea-tls-cert
persistence:
enabled: true
storageClass: zfs-iscsi-enc0
annotations:
"helm.sh/resource-policy": keep
postgresql:
persistence:
enabled: true
storageClass: zfs-iscsi-enc0
annotations:
"helm.sh/resource-policy": keep
gitea:
admin:
username: "ducoterra"
password: "changeme"
email: "gitea@ducoterra.net"
config:
service:
DISABLE_REGISTRATION: true
service:
ssh:
port: 2222
type: LoadBalancer
externalTrafficPolicy: Local


@@ -97,7 +97,10 @@ apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
annotations:
metallb.universe.tf/allow-shared-ip: "containers"
spec:
externalTrafficPolicy: Cluster
selector:
app: {{ .Release.Name }}
ports:


@@ -1,6 +1,6 @@
image: ducoterra/minecraft:latest
get_server:
image: ducoterra/get-minecraft:2.0.0
image: ducoterra/get-minecraft:2.0.2
server_version: "1.19.4"
port: 25565
max_cpu: 4


@@ -0,0 +1,5 @@
controller:
service:
annotations:
metallb.universe.tf/allow-shared-ip: "containers"
externalTrafficPolicy: Cluster

metallb-addresspool.yaml Normal file

@@ -0,0 +1,8 @@
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
name: containers
namespace: metallb
spec:
addresses:
- 10.1.2.13/32


@@ -1,94 +0,0 @@
csiDriver:
name: "driveripper.zfs-iscsi-enc0"
# add note here about volume expansion requirements
storageClasses:
- name: zfs-iscsi-enc0
defaultClass: false
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
# for block-based storage can be ext3, ext4, xfs
fsType: xfs
mountOptions: []
secrets:
provisioner-secret:
controller-publish-secret:
node-stage-secret:
node-publish-secret:
controller-expand-secret:
volumeSnapshotClasses: []
driver:
config:
driver: freenas-api-iscsi
instance_id:
httpConnection:
protocol: https
host: driveripper.reeseapps.com
port: 8443
# use only 1 of apiKey or username/password
# if both are present, apiKey is preferred
# apiKey is only available starting in TrueNAS-12
apiKey: <key>
# username: root
# password:
allowInsecure: false
# use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
# leave unset for auto-detection
apiVersion: 2
zfs:
# can be used to set arbitrary values on the dataset/zvol
# can use handlebars templates with the parameters from the storage class/CO
datasetProperties:
"org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
datasetParentName: enc0/dcsi/apps
# do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
# they may be siblings, but neither should be nested in the other
detachedSnapshotsDatasetParentName: enc0/dcsi/snaps
zvolCompression:
# "" (inherit), on, off, verify
zvolDedup:
zvolEnableReservation: false
# 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
zvolBlocksize:
iscsi:
targetPortal: "democratic-csi-server.reeselink.com"
# for multipath
# targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
# leave empty to omit usage of -I with iscsiadm
interface:
# MUST ensure uniqueness
# full iqn limit is 223 bytes, plan accordingly
# default is "{{ name }}"
nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
namePrefix: "dcsi-"
nameSuffix: "-enc0"
# add as many as needed
targetGroups:
# get the correct ID from the "portal" section in the UI
- targetGroupPortalGroup: 7
# get the correct ID from the "initiators" section in the UI
targetGroupInitiatorGroup: 1
# None, CHAP, or CHAP Mutual
targetGroupAuthType: None
# get the correct ID from the "Authorized Access" section of the UI
# only required if using Chap
targetGroupAuthGroup:
extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
extentInsecureTpc: true
extentXenCompat: false
extentDisablePhysicalBlocksize: true
# 512, 1024, 2048, or 4096,
extentBlocksize: 512
# "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
extentRpm: "SSD"
# 0-100 (0 == ignore)
extentAvailThreshold: 0


@@ -1,94 +0,0 @@
csiDriver:
name: "driveripper.zfs-iscsi-enc1"
# add note here about volume expansion requirements
storageClasses:
- name: zfs-iscsi-enc1
defaultClass: false
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
# for block-based storage can be ext3, ext4, xfs
fsType: xfs
mountOptions: []
secrets:
provisioner-secret:
controller-publish-secret:
node-stage-secret:
node-publish-secret:
controller-expand-secret:
volumeSnapshotClasses: []
driver:
config:
driver: freenas-api-iscsi
instance_id:
httpConnection:
protocol: https
host: driveripper.reeseapps.com
port: 8443
# use only 1 of apiKey or username/password
# if both are present, apiKey is preferred
# apiKey is only available starting in TrueNAS-12
apiKey: <key>
# username: root
# password:
allowInsecure: false
# use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
# leave unset for auto-detection
apiVersion: 2
zfs:
# can be used to set arbitrary values on the dataset/zvol
# can use handlebars templates with the parameters from the storage class/CO
datasetProperties:
"org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
datasetParentName: enc1/dcsi/apps
# do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
# they may be siblings, but neither should be nested in the other
detachedSnapshotsDatasetParentName: enc1/dcsi/snaps
zvolCompression:
# "" (inherit), on, off, verify
zvolDedup:
zvolEnableReservation: false
# 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
zvolBlocksize:
iscsi:
targetPortal: "democratic-csi-server.reeselink.com"
# for multipath
# targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
# leave empty to omit usage of -I with iscsiadm
interface:
# MUST ensure uniqueness
# full iqn limit is 223 bytes, plan accordingly
# default is "{{ name }}"
nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
namePrefix: "dcsi-"
nameSuffix: "-enc1"
# add as many as needed
targetGroups:
# get the correct ID from the "portal" section in the UI
- targetGroupPortalGroup: 7
# get the correct ID from the "initiators" section in the UI
targetGroupInitiatorGroup: 1
# None, CHAP, or CHAP Mutual
targetGroupAuthType: None
# get the correct ID from the "Authorized Access" section of the UI
# only required if using Chap
targetGroupAuthGroup:
extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
extentInsecureTpc: true
extentXenCompat: false
extentDisablePhysicalBlocksize: true
# 512, 1024, 2048, or 4096,
extentBlocksize: 512
# "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
extentRpm: "SSD"
# 0-100 (0 == ignore)
extentAvailThreshold: 0


@@ -1,83 +0,0 @@
csiDriver:
name: "driveripper.zfs-nfs-enc0"
storageClasses:
- name: zfs-nfs-enc0
defaultClass: true
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
fsType: nfs
mountOptions:
- nfsvers=4
secrets:
provisioner-secret:
controller-publish-secret:
node-stage-secret:
node-publish-secret:
controller-expand-secret:
volumeSnapshotClasses: []
driver:
config:
driver: freenas-api-nfs
instance_id:
httpConnection:
protocol: https
host: driveripper.reeselink.com
port: 443
# use only 1 of apiKey or username/password
# if both are present, apiKey is preferred
# apiKey is only available starting in TrueNAS-12
apiKey: <key>
# username: root
# password:
allowInsecure: false
# use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
# leave unset for auto-detection
apiVersion: 2
zfs:
# can be used to override defaults if necessary
# the example below is useful for TrueNAS 12
#cli:
# sudoEnabled: true
#
# leave paths unset for auto-detection
# paths:
# zfs: /usr/local/sbin/zfs
# zpool: /usr/local/sbin/zpool
# sudo: /usr/local/bin/sudo
# chroot: /usr/sbin/chroot
# can be used to set arbitrary values on the dataset/zvol
# can use handlebars templates with the parameters from the storage class/CO
datasetProperties:
"org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
datasetParentName: enc0/democratic-csi/apps
# do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
# they may be siblings, but neither should be nested in the other
detachedSnapshotsDatasetParentName: enc0/democratic-csi/snaps
datasetEnableQuotas: true
datasetEnableReservation: false
datasetPermissionsMode: "0777"
datasetPermissionsUser: 3000
datasetPermissionsGroup: 3000
# not supported yet
#datasetPermissionsAcls:
#- "-m everyone@:full_set:allow"
#- "-m u:kube:full_set:allow"
nfs:
shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
shareHost: democratic-csi-server.reeselink.com
shareAlldirs: false
shareAllowedHosts: [democratic-csi-client-1.reeselink.com]
shareAllowedNetworks: []
shareMaprootUser: kube
shareMaprootGroup: kube
shareMapallUser: ""
shareMapallGroup: ""


@@ -1,83 +0,0 @@
csiDriver:
name: "driveripper.zfs-nfs-enc1"
storageClasses:
- name: zfs-nfs-enc1
defaultClass: false
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
fsType: nfs
mountOptions:
- nfsvers=4
secrets:
provisioner-secret:
controller-publish-secret:
node-stage-secret:
node-publish-secret:
controller-expand-secret:
volumeSnapshotClasses: []
driver:
config:
driver: freenas-api-nfs
instance_id:
httpConnection:
protocol: https
host: driveripper.reeselink.com
port: 443
# use only 1 of apiKey or username/password
# if both are present, apiKey is preferred
# apiKey is only available starting in TrueNAS-12
apiKey: <key>
# username: root
# password:
allowInsecure: false
# use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
# leave unset for auto-detection
apiVersion: 2
zfs:
# can be used to override defaults if necessary
# the example below is useful for TrueNAS 12
#cli:
# sudoEnabled: true
#
# leave paths unset for auto-detection
# paths:
# zfs: /usr/local/sbin/zfs
# zpool: /usr/local/sbin/zpool
# sudo: /usr/local/bin/sudo
# chroot: /usr/sbin/chroot
# can be used to set arbitrary values on the dataset/zvol
# can use handlebars templates with the parameters from the storage class/CO
datasetProperties:
"org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
datasetParentName: enc1/democratic-csi/apps
# do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
# they may be siblings, but neither should be nested in the other
detachedSnapshotsDatasetParentName: enc1/democratic-csi/snaps
datasetEnableQuotas: true
datasetEnableReservation: false
datasetPermissionsMode: "0777"
datasetPermissionsUser: 3000
datasetPermissionsGroup: 3000
# not supported yet
#datasetPermissionsAcls:
#- "-m everyone@:full_set:allow"
#- "-m u:kube:full_set:allow"
nfs:
shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
shareHost: democratic-csi-server.reeselink.com
shareAlldirs: false
shareAllowedHosts: [democratic-csi-client-1.reeselink.com]
shareAllowedNetworks: []
shareMaprootUser: kube
shareMaprootGroup: kube
shareMapallUser: ""
shareMapallGroup: ""