chart fixes and readme edits

2023-10-20 00:03:15 -04:00
parent 0462913304
commit 42b6aa33a0
24 changed files with 697 additions and 258 deletions


@@ -3,22 +3,151 @@
Fedora server is an awesome container hosting OS. It has a lot built in, and setup is pretty
quick.
- [Fedora Server](#fedora-server)
- [Initialization](#initialization)
- [Disable swap](#disable-swap)
- [Enable ISCSI](#enable-iscsi)
- [Disable Firewalld](#disable-firewalld)
- [Set SELinux to Permissive](#set-selinux-to-permissive)
- [Install K3S](#install-k3s)
- [Database Backups](#database-backups)
- [Expanding Root Partition](#expanding-root-partition)
- [Optional Steps](#optional-steps)
- [Certbot for Cockpit](#certbot-for-cockpit)
## Initialization
1. `dnf install vim pwgen wireguard-tools`
2. `hostnamectl hostname node1`
3. Set a static IP through the web interface
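If you'd rather script the static IP than click through Cockpit, `nmcli` can do the same thing. This is only a sketch; the connection name, address, gateway, and DNS below are placeholders for your own network:

```bash
# Placeholder connection name and addresses - substitute your own
nmcli connection modify "enp1s0" \
  ipv4.method manual \
  ipv4.addresses "192.168.122.21/24" \
  ipv4.gateway "192.168.122.1" \
  ipv4.dns "192.168.122.1"
nmcli connection up "enp1s0"
```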
## Disable swap
```bash
swapoff -a
dnf remove zram-generator-defaults
```
Then mask the zram unit so swap doesn't come back on reboot: `systemctl mask systemd-zram-setup@zram0.service`
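A quick sanity check that no swap is active after a reboot:

```bash
# Both of these should report no swap in use
swapon --show
free -h
```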
## Enable ISCSI
```bash
# Install the following system packages
dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath

# Enable multipathing
mpathconf --enable --with_multipathd y

# Ensure that iscsid and multipathd are running
systemctl enable --now iscsid multipathd

# Test that discovery works
iscsiadm -m discovery -t st -p democratic-csi-server.reeselink.com

# Remove them - democratic-csi will populate this
rm -rf /var/lib/iscsi/nodes/

# Start and enable iscsi
systemctl enable --now iscsi
```
## Disable Firewalld
<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>
Disable firewalld. You could add rules for each service instead, but every time you expose a port
from a container you'd need to add another firewalld rule.
You can disable firewalld from the web interface.
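If you prefer a shell over the web interface, the equivalent is a single command:

```bash
# Stop firewalld immediately and keep it from starting at boot
systemctl disable --now firewalld
```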
## Set SELinux to Permissive
K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that here, however. We'll set it to permissive, and you can re-enable enforcing once you've added all
the rules you need to keep your services running.
Set SELinux to permissive by editing `/etc/selinux/config` and setting `SELINUX=permissive`.
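The config file only takes effect on the next boot; to switch the running system over right away:

```bash
# Put SELinux into permissive mode now and confirm
setenforce 0
getenforce
```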
## Install K3S
<https://docs.k3s.io/installation/requirements>
We're going to be tweaking some installation parameters, so if you already have k3s
installed you can either uninstall it or skip these steps.
This installation disables Traefik, local-storage, CoreDNS, and Klipper (servicelb). We'll replace them with
our own components.
1. Generate a secure token for each node to use when connecting
umask 077
echo -n $(pwgen 16 4) | sed 's/ /-/g' > token.txt
2. Create the cluster
export SECRET=$(cat token.txt)
curl -sfL https://get.k3s.io | K3S_TOKEN=$SECRET sh -s - \
"--cluster-init" \
"--flannel-backend=wireguard-native" \
"--disable" \
"traefik" \
"--disable" \
"local-storage" \
"--disable" \
"coredns" \
"--disable" \
"servicelb" \
"--cluster-dns" \
"10.43.0.10"
3. Join each server node
export SECRET=$(cat token.txt)
curl -sfL https://get.k3s.io | K3S_TOKEN=$SECRET sh -s - server \
--server https://node1.reeselink.com:6443 \
--flannel-backend=wireguard-native \
"--disable" \
"traefik" \
"--disable" \
"local-storage" \
"--disable" \
"coredns" \
"--disable" \
"servicelb" \
"--cluster-dns" \
"10.43.0.10"
Now you can change the ownership of (and copy) the k3s.yaml file:
chown ducoterra /etc/rancher/k3s/k3s.yaml
scp /etc/rancher/k3s/k3s.yaml ~/.kube/config
Edit `~/.kube/config` and change `127.0.0.1` to `containers.reeselink.com`
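A one-liner for that edit, assuming the file was copied to the default `~/.kube/config` location, plus a quick sanity check:

```bash
# Point kubectl at the cluster's hostname instead of localhost
sed -i 's/127.0.0.1/containers.reeselink.com/' ~/.kube/config
kubectl get nodes
```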
### Database Backups
<https://docs.k3s.io/cli/etcd-snapshot>
Note: you must back up `/var/lib/rancher/k3s/server/token`
and use its contents as the token when restoring the backup, as the data is encrypted with that token.
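The snapshots themselves are managed with the `k3s etcd-snapshot` subcommand; roughly (see the linked docs for the full flags):

```bash
# Take an on-demand snapshot and list what exists on this node
k3s etcd-snapshot save --name manual-backup
k3s etcd-snapshot ls

# Keep a copy of the token that encrypts the data (destination is up to you)
cp /var/lib/rancher/k3s/server/token /root/k3s-token-backup
```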
## Expanding Root Partition
lvextend -l +100%FREE /dev/mapper/fedora-root
xfs_growfs /dev/mapper/fedora-root
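Before extending, confirm the volume group actually has free extents, and check the filesystem size afterwards:

```bash
# VFree should be non-zero before the lvextend; / should grow afterwards
vgs
lvs
df -h /
```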
## Optional Steps
### Certbot for Cockpit
During this process you'll pick one node to act as the manager for your other nodes.
You'll only need to request a certificate on a single node; it will then connect via ssh over your local
network to the other nodes.
Create an AWS user which will have route53 access. This is required for certbot's route53
validation.
@@ -90,16 +219,11 @@ vim ~/.aws/credentials
```conf
[default]
aws_access_key_id=<key>
aws_secret_access_key=<key>
```
Install the aws cli v2 on the manager node:
```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
@@ -107,7 +231,10 @@ aws_secret_access_key=
sudo ./aws/install
```
Test your credentials with `aws route53 list-hosted-zones`. You should see a list of your
hosted zones.
Now install certbot and acquire a cert using those credentials:
```bash
sudo dnf install certbot python3-certbot-dns-route53
@@ -119,7 +246,6 @@ sudo cp /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/
sudo cp /etc/letsencrypt/live/containers.reeseapps.com/fullchain.pem /etc/cockpit/ws-certs.d/60-letsencrypt.cert
sudo cp /etc/letsencrypt/live/containers.reeseapps.com/privkey.pem /etc/cockpit/ws-certs.d/60-letsencrypt.key
```
Test the renewal process with:
@@ -129,8 +255,6 @@ sudo certbot renew --cert-name containers.reeselink.com --dry-run
sudo certbot renew --cert-name containers.reeseapps.com --dry-run
```
### Renewal
Create a renewal script in /usr/lib/scripts/certbot-renew.sh
/usr/lib/scripts/certbot-renew.sh (chmod +x)
@@ -180,32 +304,4 @@ Enable the service
systemctl enable --now certbot-renew.timer
```
Cockpit now has a valid TLS certificate that auto-renews!
### Disable FirewallD
Firewalld conflicts with k3s. Disable it from the UI.
## Disable SELinux
SELinux interferes with ISCSI mounts. Disable it by editing `/etc/selinux/config`
```bash
SELINUX=permissive
```
### Allow ISCSI
```bash
# Install the following system packages
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
# Enable multipathing
sudo mpathconf --enable --with_multipathd y
# Ensure that iscsid and multipathd are running
sudo systemctl enable iscsid multipathd
sudo systemctl start iscsid multipathd
# Start and enable iscsi
sudo systemctl enable iscsi
sudo systemctl start iscsi
```

README.md

@@ -2,48 +2,36 @@
A project to store container-based hosting stuff.
## Table of Contents
- [Containers](#containers)
- [Table of Contents](#table-of-contents)
- [Platform](#platform)
- [Components](#components)
- [CoreDNS](#coredns)
- [Metal LB](#metal-lb)
- [Nginx Ingress](#nginx-ingress)
- [Storage](#storage)
- [Apps](#apps)
- [Dashboard](#dashboard)
- [Nextcloud](#nextcloud)
- [Test Deploy](#test-deploy)
- [Gitea](#gitea)
- [Minecraft](#minecraft)
- [Nimcraft](#nimcraft)
- [Testing](#testing)
- [Snapdrop](#snapdrop)
- [Jellyfin](#jellyfin)
- [Upgrading](#upgrading)
- [Help](#help)
- [Troubleshooting](#troubleshooting)
## Platform
Before you begin, be sure to take a look at the [Fedora Server Config](FedoraServer.md) readme
which explains how to set up a basic fedora server hosting platform with certbot.
## Components
### Install K3S
We're going to be tweaking some installation parameters so if you already have k3s
installed you can either uninstall it or skip these steps.
This installation disables traefik and local-storage (We don't really need either):
```bash
curl -sfL https://get.k3s.io | sh -s - \
"--disable" \
"traefik" \
"--disable" \
"local-storage" \
"--disable" \
"coredns" \
"--cluster-dns" \
"10.43.0.10"
```
Now you can change the ownership of (and copy) the k3s.yaml file:
```bash
chown ducoterra /etc/rancher/k3s/k3s.yaml
scp /etc/rancher/k3s/k3s.yaml ~/.kube/config
```
Edit ~/.kube/config and change 127.0.0.1 to containers.reeselink.com
### Database Backups
We're using SQLite (because it's all we really need). The db is stored at
`/var/lib/rancher/k3s/server/db/`. You can just copy that folder to back up the database
and restore it by copying it back. Note, you must also copy `/var/lib/rancher/k3s/server/token`
and use the contents as the token when restoring the backup as data is encrypted with that token.
### CoreDNS
@@ -68,17 +56,26 @@ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
### Metal LB
The problem with metallb is when a service needs the real IP address of a client. You can
get the real IP with `externalTrafficPolicy: Local` but that prevents shared IP addresses even
if services are running on different ports. Klipper, on the other hand, seems to handle this
just fine. MetalLB isn't great for a local installation for this reason, but I'm leaving
the docs here just in case.
We'll be swapping K3S's default load balancer with Metal LB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load balanced services. MetalLB does make things a little
more complicated; you'll need special annotations (see below), but it's otherwise a well-tested,
stable load balancing service with features to grow into.
MetalLB is pretty cool. It works via l2 advertisement or BGP. We won't be using BGP, so let's
focus on l2.
When we connect our nodes to a network we give them an IP address range: ex. `192.168.122.20/24`.
This range represents all the available addresses the node could be assigned. Usually we assign
a single "static" IP address for our node and direct traffic to it by port forwarding from our
router. This is fine for single nodes, but what if we have a cluster of nodes and we don't want
our service to disappear just because one node is down for maintenance?
This is where l2 advertising comes in. MetalLB will assign a static IP address from a given
pool to any arbitrary node, then advertise that node's MAC address as the location for the
IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP
address, effectively moving the IP to another node. This isn't really "load balancing" but
"failover". Fortunately, that's exactly what we're looking for.
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
@@ -88,23 +85,35 @@ helm upgrade --install metallb \
metallb/metallb
```
MetalLB doesn't know what IP addresses are available for it to allocate, so we'll have
to provide it with a list. The `metallb-addresspool.yaml` has one IP address (we'll get to
IP address sharing in a second), which is an unassigned IP address not allocated to any of our
nodes. Note that if you have many public IPs which all point to the same router or virtual network,
you can list them all. We're only going to use one because we want to port forward from our router.
```bash
# create the metallb allocation pool
kubectl apply -f metallb-addresspool.yaml
```
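For reference, an `IPAddressPool` backing a pool named `production` generally looks something like this; the address and namespace here are placeholders, and the real definition lives in `metallb-addresspool.yaml`:

```bash
# Illustrative only - substitute your own unassigned address and MetalLB namespace
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb-system
spec:
  addresses:
    - 192.168.122.240/32
EOF
```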
Now we need to create the l2 advertisement. This is handled with a custom resource definition
which specifies that all the listed nodes are eligible to be assigned, and to advertise, our
"production" IP addresses.
```bash
kubectl apply -f metallb-l2advertisement.yaml
```
We now have a problem. We only have a single production IP address, and MetalLB
really doesn't want to share it. In order to allow services to allocate the
same IP address (on different ports) we'll need to annotate them as such.
MetalLB will allow services to allocate the same IP if:
- They both have the same sharing key.
- They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
- They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical).
See <https://metallb.org/usage/#ip-address-sharing> for more info.
You'll need to annotate your service as follows if you want an external IP:
@@ -114,7 +123,8 @@ kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
    metallb.universe.tf/address-pool: "production"
    metallb.universe.tf/allow-shared-ip: "production"
spec:
  externalTrafficPolicy: Cluster
  selector:
@@ -170,6 +180,10 @@ Navigate to ingress-nginx-test.reeseapps.com
### Storage
<https://github.com/democratic-csi/democratic-csi/blob/master/examples/freenas-nfs.yaml>
Use NFSv4. It works without rpcbind, which makes it lovely.
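Once a volume from the NFS driver is mounted, you can confirm it actually came up as NFSv4 on the node:

```bash
# The mount options should include vers=4.x
nfsstat -m
```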
We'll be installing democratic csi for our volume manager. Specifically, we'll be installing the
freenas-api-nfs driver. All configuration is stored in truenas-nfs.yaml.
@@ -279,6 +293,14 @@ helm upgrade \
--namespace democratic-csi \
--create-namespace \
zfs-iscsi-enc1 democratic-csi/democratic-csi
# enc1 stable storage (nfs)
helm upgrade \
--install \
--values secrets/truenas-nfs-enc1.yaml \
--namespace democratic-csi \
--create-namespace \
zfs-nfs-enc1 democratic-csi/democratic-csi
```
You can test that things worked with:
@@ -288,6 +310,41 @@ kubectl apply -f tests/democratic-csi-pvc-test.yaml
kubectl delete -f tests/democratic-csi-pvc-test.yaml
```
And run some performance tests. You can use network and disk monitoring tools
to see performance during the tests.
```bash
# Big writes
count=0
start_time=$EPOCHREALTIME
while true; do
  dd if=/dev/zero of=test.dat bs=1M count=100 1> /dev/null 2> /dev/null
  current=$(echo "$EPOCHREALTIME - $start_time" | bc)
  current_gt_one=$(echo "$current > 10" | bc)
  if [ $current_gt_one -eq 0 ]; then
    count=$((count + 1))
    echo -e '\e[1A\e[K'$count
  else
    break
  fi
done

# Lots of little writes
count=0
start_time=$EPOCHREALTIME
while true; do
  dd if=/dev/zero of=test.dat bs=1K count=1 1> /dev/null 2> /dev/null
  current=$(echo "$EPOCHREALTIME - $start_time" | bc)
  current_gt_one=$(echo "$current > 1" | bc)
  if [ $current_gt_one -eq 0 ]; then
    count=$((count + 1))
    echo -e '\e[1A\e[K'$count
  else
    break
  fi
done
```
Because iscsi will mount block devices, troubleshooting mounting issues, data corruption,
and exploring pvc contents must happen on the client device. Here are a few cheat-sheet
commands to make things easier:
@@ -346,6 +403,8 @@ mount -t xfs /dev/zvol/... /mnt/iscsi
iscsiadm --mode session -P 3 | grep Target -A 2 -B 2
```
## Apps
### Dashboard
The kubernetes dashboard isn't all that useful but it can sometimes give you a good
@@ -387,8 +446,7 @@ helm upgrade --install \
nextcloud \
./helm/nextcloud \
--namespace nextcloud \
--create-namespace
```
Need to copy lots of files? Copy them to the user data dir and then run
@@ -399,7 +457,28 @@ Need to copy lots of files? Copy them to the user data dir and then run
Set up SES with the following links:
<https://docs.aws.amazon.com/general/latest/gr/ses.html>
#### Test Deploy
You can create a test deployment with the following:
```bash
helm upgrade --install nextcloud ./helm/nextcloud \
--namespace nextcloud-test \
--create-namespace \
--set nextcloud.domain=nextcloud-test.reeseapps.com \
--set nextcloud.html.storageClassName=zfs-nfs-enc1 \
--set nextcloud.html.storage=8Gi \
--set nextcloud.data.storageClassName=zfs-nfs-enc1 \
--set nextcloud.data.storage=8Gi \
--set postgres.storageClassName=zfs-nfs-enc1 \
--set postgres.storage=8Gi \
--set redis.storageClassName=zfs-nfs-enc1 \
--set redis.storage=8Gi \
--set show_passwords=true \
--dry-run
```
### Gitea
@@ -456,6 +535,8 @@ below installs nimcraft. For each installation you'll want to create your own va
with a new port. The server-downloader is called "minecraft_get_server" and is available on
[Github](https://github.com/ducoterra/minecraft_get_server).
#### Nimcraft
```bash
helm upgrade --install \
nimcraft \
@@ -464,6 +545,17 @@ helm upgrade --install \
--create-namespace
```
#### Testing
```bash
helm upgrade --install \
testcraft \
./helm/minecraft \
--namespace testcraft \
--create-namespace \
--set port=25566
```
### Snapdrop
Snapdrop is a file sharing app that allows airdrop-like functionality over the web
@@ -488,6 +580,21 @@ helm upgrade --install \
--create-namespace
```
## Upgrading
<https://docs.k3s.io/upgrades/manual#manually-upgrade-k3s-using-the-binary>
```bash
sudo su -
wget https://github.com/k3s-io/k3s/releases/download/v1.28.2%2Bk3s1/k3s
systemctl stop k3s
chmod +x k3s
mv k3s /usr/local/bin/k3s
systemctl start k3s
```
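After the service comes back up, confirm the node is actually running the new release:

```bash
# Both should report the version you just installed
k3s --version
kubectl get nodes
```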
## Help
### Troubleshooting
Deleting a stuck namespace
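One common approach (not necessarily the one this repo uses) is to strip the namespace's finalizers and push it through the finalize endpoint; requires `jq`, and `stuck-ns` is a placeholder:

```bash
# Remove finalizers so the namespace can actually terminate
kubectl get namespace stuck-ns -o json \
  | jq 'del(.spec.finalizers[])' \
  | kubectl replace --raw "/api/v1/namespaces/stuck-ns/finalize" -f -
```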


@@ -17,6 +17,15 @@ spec:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 1
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 1
containers:
- name: jellyfin
image: {{ .Values.jellyfin.image }}


@@ -13,6 +13,15 @@ spec:
labels:
app: {{ .Release.Name }}
spec:
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 1
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 1
initContainers:
- name: get-version
image: {{ .Values.get_server.image }}


@@ -5,7 +5,7 @@ metadata:
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-nfs-enc1
accessModes:
- ReadWriteOnce
resources:


@@ -19,7 +19,7 @@ server_props: |
op-permission-level=4
pvp=true
snooper-enabled=true
level-type=amplified
hardcore=false
enable-command-block=false
max-players=20


@@ -8,29 +8,40 @@
{{ define "DATABASE_HOST" }}{{ .Release.Name }}-postgres{{ end }} {{ define "DATABASE_HOST" }}{{ .Release.Name }}-postgres{{ end }}
{{ define "POSTGRES_USER" }}postgres{{ end }} {{ define "POSTGRES_USER" }}postgres{{ end }}
{{/* Postgres password lookup - uses existing password if possible */}}
{{ define "POSTGRES_PASSWORD" -}}
{{- $POSTGRES_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "POSTGRES_NAME" . )).data -}}
{{- printf (ternary (dict "POSTGRES_PASSWORD" (randAlphaNum 64 | b64enc)) $POSTGRES_SECRETS (not $POSTGRES_SECRETS)).POSTGRES_PASSWORD -}}
{{- end }}
{{/* Generated Nextcloud Config */}}
{{ define "NEXTCLOUD_NAME" }}{{ printf "%s-nextcloud" .Release.Name | lower }}{{ end }}
{{ define "ADMIN_USER" }}admin{{ end }}
{{/* Nextcloud admin password lookup - uses existing password if possible */}}
{{- define "NEXTCLOUD_ADMIN_PASSWORD" -}}
{{/* ternary (create a dict with random NEXTCLOUD_ADMIN_PASSWORD) (actual dictionary) (test whether NEXTCLOUD_SECRETS exists) */}}
{{- $NEXTCLOUD_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "NEXTCLOUD_NAME" . )).data -}}
{{- printf (ternary (dict "NEXTCLOUD_ADMIN_PASSWORD" (randAlphaNum 64 | b64enc)) $NEXTCLOUD_SECRETS (not $NEXTCLOUD_SECRETS)).NEXTCLOUD_ADMIN_PASSWORD -}}
{{- end -}}
{{/* Generated Redis Config */}}
{{ define "REDIS_NAME" }}{{ printf "%s-redis" .Release.Name | lower }}{{ end }}
{{ define "REDIS_HOST" }}{{ .Release.Name }}-redis{{ end }}
{{/* Postgres password lookup - uses existing password if possible */}}
{{ define "POSTGRES_PASSWORD" -}}
{{- $POSTGRES_SECRET := (lookup "v1" "Secret" .Release.Namespace ( include "POSTGRES_NAME" . )).data -}}
{{- if $POSTGRES_SECRET -}}
{{- printf $POSTGRES_SECRET.POSTGRES_PASSWORD | b64enc -}}
{{- else -}}
{{- printf (required ".Values.postgres.password is required" .Values.postgres.password) | b64enc -}}
{{- end -}}
{{- end }}
{{/* Nextcloud admin password lookup - uses existing password if possible */}}
{{ define "NEXTCLOUD_ADMIN_PASSWORD" -}}
{{- $NEXTCLOUD_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "NEXTCLOUD_NAME" . )).data -}}
{{- if $NEXTCLOUD_SECRETS -}}
{{- printf $NEXTCLOUD_SECRETS.NEXTCLOUD_ADMIN_PASSWORD | b64enc -}}
{{- else -}}
{{- printf (required ".Values.nextcloud.admin.password is required" .Values.nextcloud.admin.password) | b64enc -}}
{{- end -}}
{{- end }}
{{/* Redis password lookup - uses existing password if possible */}}
{{ define "REDIS_PASSWORD" -}}
{{- $REDIS_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "REDIS_NAME" . )).data -}}
{{- if $REDIS_SECRETS -}}
{{- printf $REDIS_SECRETS.REDIS_PASSWORD | b64enc -}}
{{- else -}}
{{- printf (required ".Values.redis.password is required" .Values.redis.password) | b64enc -}}
{{- end -}}
{{- end }}


@@ -13,6 +13,15 @@ spec:
labels:
app.kubernetes.io/name: nextcloud
spec:
tolerations:
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 1
- key: "node.kubernetes.io/not-ready"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 1
containers:
- name: nextcloud
image: {{ .Values.nextcloud.image }}
@@ -94,8 +103,10 @@ spec:
persistentVolumeClaim:
claimName: {{ .Release.Name }}-postgres-iops
- name: redis
emptyDir:
sizeLimit: 2Gi
# persistentVolumeClaim:
# claimName: {{ .Release.Name }}-redis-iops
- name: postgres-init
secret:
secretName: {{ .Release.Name }}-postgres-init


@@ -5,9 +5,9 @@ metadata:
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: {{ .Values.nextcloud.data.storageClassName }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.nextcloud.data.storage }}


@@ -5,9 +5,9 @@ metadata:
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: {{ .Values.nextcloud.html.storageClassName }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.nextcloud.html.storage }}


@@ -9,4 +9,3 @@ data:
NEXTCLOUD_ADMIN_PASSWORD: {{ include "NEXTCLOUD_ADMIN_PASSWORD" . | quote }}
POSTGRES_PASSWORD: {{ include "POSTGRES_PASSWORD" . | quote }}
REDIS_HOST_PASSWORD: {{ include "REDIS_PASSWORD" . | quote }}
SMTP_PASSWORD: {{ .Values.SMTP_PASSWORD | b64enc | quote }}


@@ -5,9 +5,9 @@ metadata:
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: {{ .Values.postgres.storageClassName }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.postgres.storage }}


@@ -5,9 +5,9 @@ metadata:
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: {{ .Values.redis.storageClassName }}
accessModes:
- ReadWriteOnce
resources:
requests:
storage: {{ .Values.redis.storage }}


@@ -1,3 +1,21 @@
nextcloud:
  image: nextcloud:26.0.7
  domain: nextcloud.reeseapps.com
  html:
    storageClassName: zfs-iscsi-enc1
    storage: 16Gi
  data:
    storageClassName: zfs-iscsi-enc1
    storage: 2Ti
  admin:
    password: ""
postgres:
  storageClassName: zfs-iscsi-enc1
  storage: 32Gi
  password: ""
redis:
  storageClassName: zfs-iscsi-enc1
  storage: 32Gi
  password: ""


@@ -0,0 +1,10 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-snapdrop
annotations:
"helm.sh/resource-policy": keep
data:
PUID: "1000"
PGID: "1000"
TZ: Etc/UTC


@@ -0,0 +1,38 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: snapdrop
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: snapdrop
spec:
containers:
- name: snapdrop
image: {{ .Values.snapdrop.image }}
ports:
- containerPort: 80
name: http
envFrom:
- configMapRef:
name: {{ .Release.Name }}-snapdrop
volumeMounts:
- mountPath: /config
name: config
resources:
requests:
memory: "1Gi"
cpu: "1m"
limits:
memory: "4Gi"
cpu: "4"
volumes:
- name: config
persistentVolumeClaim:
claimName: {{ .Release.Name }}-config


@@ -0,0 +1,25 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.org/client-max-body-size: "0"
spec:
rules:
- host: {{ .Values.snapdrop.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: snapdrop
port:
name: http
tls:
- hosts:
- {{ .Values.snapdrop.domain }}
secretName: snapdrop-tls-cert


@@ -0,0 +1,13 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-config
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi


@@ -0,0 +1,13 @@
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: snapdrop
ports:
- name: http
protocol: TCP
port: 80
targetPort: http


@@ -1,111 +0,0 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-snapdrop
annotations:
"helm.sh/resource-policy": keep
data:
PUID: "1000"
PGID: "1000"
TZ: Etc/UTC
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: snapdrop
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: snapdrop
spec:
containers:
- name: snapdrop
image: {{ .Values.snapdrop.image }}
ports:
- containerPort: 80
name: http
envFrom:
- configMapRef:
name: {{ .Release.Name }}-snapdrop
volumeMounts:
- mountPath: /config
name: config
resources:
requests:
memory: "1Gi"
cpu: "1m"
limits:
memory: "4Gi"
cpu: "4"
volumes:
- name: config
persistentVolumeClaim:
claimName: {{ .Release.Name }}-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-config
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: snapdrop
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.org/client-max-body-size: "0"
spec:
rules:
- host: {{ .Values.snapdrop.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: snapdrop
port:
name: http
tls:
- hosts:
- {{ .Values.snapdrop.domain }}
secretName: snapdrop-tls-cert


@@ -1,7 +1,23 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zfs-iscsi-enc0
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zfs-iscsi-enc1
annotations:
"helm.sh/resource-policy": keep
spec:
@@ -14,6 +30,22 @@ spec:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zfs-nfs-enc1
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-nfs-enc1
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
@@ -35,8 +67,12 @@ spec:
- 'sleep infinity'
name: democratic-csi-test
volumeMounts:
- mountPath: /zfs_iscsi_enc0
name: zfs-iscsi-enc0
- mountPath: /zfs_iscsi_enc1
name: zfs-iscsi-enc1
- mountPath: /zfs_nfs_enc1
name: zfs-nfs-enc1
resources:
limits:
memory: "4Gi"
@@ -46,6 +82,12 @@ spec:
cpu: "1m" cpu: "1m"
restartPolicy: Always restartPolicy: Always
volumes: volumes:
- name: zfs-iscsi-enc0
persistentVolumeClaim:
claimName: zfs-iscsi-enc0
- name: zfs-iscsi-enc1
persistentVolumeClaim:
claimName: zfs-iscsi-enc1
- name: zfs-nfs-enc1
persistentVolumeClaim:
claimName: zfs-nfs-enc1


@@ -20,7 +20,6 @@ spec:
- containerPort: 80
name: http
---
apiVersion: v1

tests/metallb-test.yaml

@@ -0,0 +1,101 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-demo-1
namespace: default
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx-demo-1
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx-demo-1
spec:
containers:
- name: httpd
image: httpd
ports:
- containerPort: 80
name: http
resources:
requests:
memory: "100Mi"
cpu: "1m"
limits:
memory: "256Mi"
cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-demo-1
namespace: default
annotations:
metallb.universe.tf/allow-shared-ip: "production"
metallb.universe.tf/address-pool: production
spec:
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 8001
targetPort: 80
selector:
app.kubernetes.io/name: ingress-nginx-demo-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ingress-nginx-demo-2
namespace: default
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx-demo-2
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx-demo-2
spec:
containers:
- name: httpd
image: httpd
ports:
- containerPort: 80
name: http
resources:
requests:
memory: "100Mi"
cpu: "1m"
limits:
memory: "256Mi"
cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx-demo-2
namespace: default
annotations:
metallb.universe.tf/allow-shared-ip: "production"
metallb.universe.tf/address-pool: production
spec:
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 8002
targetPort: 80
selector:
app.kubernetes.io/name: ingress-nginx-demo-2


@@ -0,0 +1,49 @@
apiVersion: v1
kind: Service
metadata:
name: nginx
labels:
app: nginx
spec:
ports:
- port: 80
name: web
clusterIP: None
selector:
app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: web
spec:
selector:
matchLabels:
app: nginx # has to match .spec.template.metadata.labels
serviceName: "nginx"
replicas: 3 # by default is 1
minReadySeconds: 10 # by default is 0
template:
metadata:
labels:
app: nginx # has to match .spec.selector.matchLabels
spec:
terminationGracePeriodSeconds: 10
containers:
- name: nginx
image: registry.k8s.io/nginx-slim:0.8
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumeClaimTemplates:
- metadata:
name: www
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "local-path"
resources:
requests:
storage: 1Gi