This commit is contained in:
ducoterra
2023-03-24 00:44:25 -04:00
commit 3c0b8ab008
15 changed files with 1512 additions and 0 deletions

FedoraServer.md
# Fedora Server
Fedora Server is an awesome container-hosting OS. It has a lot built in, and setup is pretty
quick.
## Setup
Copy your SSH public key over (assuming a `containers` host entry in your SSH config):
```bash
scp .ssh/authorized_keys containers:~/.ssh/authorized_keys
```
Then set the hostname and lock down SSH:
```bash
sudo hostnamectl hostname containers
sudo dnf install vim
sudo vim /etc/ssh/sshd_config
sudo systemctl restart sshd
```
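What you change in `sshd_config` is up to you; a typical key-only hardening pass (illustrative, not what this commit prescribes) looks like:
```conf
# /etc/ssh/sshd_config — illustrative hardening
PasswordAuthentication no
PermitRootLogin no
PubkeyAuthentication yes
```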
## Certbot for Cockpit
### Initial Setup
1. Create a "containers" IAM user in AWS. Copy the permissions from the FreeNAS user
2. Create access credentials for it
3. Add the credentials to the root user (e.g. `/root/.aws/credentials`)
4. Install the AWS CLI v2
5. Test your credentials with `aws route53 list-hosted-zones`
```bash
sudo dnf install certbot python3-certbot-dns-route53
sudo certbot certonly --dns-route53 -d containers.reeselink.com
sudo certbot certonly --dns-route53 -d containers.reeseapps.com
sudo cp /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
sudo cp /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
sudo cp /etc/letsencrypt/live/containers.reeseapps.com/fullchain.pem /etc/cockpit/ws-certs.d/60-letsencrypt.cert
sudo cp /etc/letsencrypt/live/containers.reeseapps.com/privkey.pem /etc/cockpit/ws-certs.d/60-letsencrypt.key
```
Test the renewal process with:
```bash
sudo certbot renew --cert-name containers.reeselink.com --dry-run
```
### Renewal
Create a renewal script at `/usr/lib/scripts/certbot-renew.sh` and mark it executable (`chmod +x`):
```bash
#!/bin/bash
/usr/bin/certbot renew --cert-name containers.reeselink.com
cp /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
cp /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
```
Now create a systemd oneshot service to run the script
/etc/systemd/system/certbot-renew.service
```conf
[Unit]
Description=Certbot Renewal
[Service]
Type=oneshot
ExecStart=/usr/lib/scripts/certbot-renew.sh
```
/etc/systemd/system/certbot-renew.timer
```conf
[Unit]
Description=Timer for Certbot Renewal
[Timer]
OnBootSec=300
OnUnitActiveSec=1w
[Install]
WantedBy=timers.target
```
Enable the timer:
```bash
sudo systemctl enable --now certbot-renew.timer
```
## Disable FirewallD
Firewalld conflicts with k3s. Disable it from the UI.
## Disable SELinux
SELinux interferes with iSCSI mounts. Put it in permissive mode by editing `/etc/selinux/config` (this takes effect after a reboot; `sudo setenforce 0` switches it immediately):
```conf
SELINUX=permissive
```
## Allow iSCSI
```bash
# Install the following system packages
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
# Enable multipathing
sudo mpathconf --enable --with_multipathd y
# Ensure that iscsid and multipathd are running
sudo systemctl enable iscsid multipathd
sudo systemctl start iscsid multipathd
# Start and enable iscsi
sudo systemctl enable iscsi
sudo systemctl start iscsi
```

README.md
# Containers!
A project to store container-based hosting stuff.
## Platform
Before you begin, be sure to take a look at the [Fedora Server Config](FedoraServer.md) readme,
which explains how to set up a basic Fedora Server hosting platform with certbot.
## K3S
### Install K3S
We're going to be tweaking some installation parameters, so if you already have k3s
installed, either uninstall it (the installer drops a `/usr/local/bin/k3s-uninstall.sh` script) or skip these steps.
This installation disables traefik and local-storage, since we don't really need either:
```bash
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --disable local-storage
```
Now change the ownership of the k3s.yaml file (on the server), then copy it down to your workstation:
```bash
sudo chown ducoterra /etc/rancher/k3s/k3s.yaml
scp containers:/etc/rancher/k3s/k3s.yaml ~/.kube/config
```
Edit `~/.kube/config` and change `127.0.0.1` to `containers.reeselink.com`.
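That edit can be scripted; a one-liner sketch, assuming the stock kubeconfig contains `server: https://127.0.0.1:6443`:

```shell
# Point the copied kubeconfig at the server's public hostname.
[ -f ~/.kube/config ] && sed -i 's/127\.0\.0\.1/containers.reeselink.com/' ~/.kube/config
```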
### Database Backups
We're using SQLite (because it's all we really need). The db is stored at
`/var/lib/rancher/k3s/server/db/`. To back up the database, just copy that folder;
to restore, copy it back. Note: you must also copy `/var/lib/rancher/k3s/server/token`
and supply its contents as the token when restoring, since the data is encrypted with it.
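A minimal backup sketch of the above (paths are the k3s defaults; the `K3S_SERVER` override and guard are just for illustration):

```shell
#!/bin/bash
# Bundle the SQLite db directory and the server token into one archive;
# a restore without the token cannot decrypt the data.
K3S_SERVER="${K3S_SERVER:-/var/lib/rancher/k3s/server}"
BACKUP="k3s-backup-$(date +%F).tar.gz"
if [ -d "$K3S_SERVER/db" ]; then
  tar czf "$BACKUP" -C "$K3S_SERVER" db token
fi
```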
### Nginx Ingress
Now we need an ingress solution (preferably with certs for https). We'll be using nginx since
it's a little more configurable than traefik (though don't sell traefik short: it's really
good, just finicky when you have use cases it hasn't explicitly coded for).
1. Install nginx
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install \
  ingress-nginx \
  ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --create-namespace
```
2. Install cert-manager
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install \
  cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --version v1.11.0 \
  --set installCRDs=true
```
3. Create the let's encrypt issuer
```bash
kubectl apply -f letsencrypt-issuer.yaml
```
You can test whether your ingress is working with `kubectl apply -f ingress-nginx-test.yaml`, then
navigate to ingress-nginx-demo.reeseapps.com (the hostname that manifest serves).
### Storage
We'll be installing democratic-csi as our volume manager; specifically, the freenas-api-nfs
and freenas-api-iscsi drivers. All configuration is stored in the `truenas-*.yaml` files.
The nfs driver provisions an NFS store owned by user 3000 (kube); you may have to create
that user on TrueNAS. The share it creates is world-read/write, so any user can write to
it. Writers keep their uid/gid on TrueNAS: if user 33 writes a file to the nfs share, it
shows up as owned by user 33 on TrueNAS.
The iscsi driver will require a portal ID. This is NOT what is reflected in the UI. The most
reliable way (seriously) to get the real ID is to open the network monitor in the browser, reload
truenas and find the websocket connection, click on it, create the portal, and click on the
server response. It'll look something like:
```json
{"msg": "added", "collection": "iscsi.portal.query", "id": 7, "fields": {"id": 7, "tag": 1, "comment": "democratic-csi", "listen": [{"ip": "172.20.0.1", "port": 3260}], "discovery_authmethod": "NONE", "discovery_authgroup": null}}
```
The initiator group IDs seem to line up.
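If you save that websocket frame, the portal ID can be pulled out on the command line (using `python3` here; `jq '.fields.id'` works too if you have jq installed):

```shell
# Extract the real portal ID from the saved websocket response.
python3 -c 'import json, sys; print(json.load(sys.stdin)["fields"]["id"])' <<'EOF'
{"msg": "added", "collection": "iscsi.portal.query", "id": 7, "fields": {"id": 7, "tag": 1}}
EOF
# prints 7
```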
It's good practice to have separate hostnames for your share export and your truenas server. This
way you can have a direct link without worrying about changing the user-facing hostname.
For example: your truenas server might be driveripper.reeselink.com and your kube server might be
containers.reeselink.com. You should also have a democratic-csi-server.reeselink.com and a
democratic-csi-client-1.reeselink.com which might be on 172.20.0.1 and 172.20.0.2.
https://github.com/democratic-csi/democratic-csi
iSCSI requires a bit of server config before proceeding:
```bash
# Install the following system packages
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
# Enable multipathing
sudo mpathconf --enable --with_multipathd y
# Ensure that iscsid and multipathd are running
sudo systemctl enable iscsid multipathd
sudo systemctl start iscsid multipathd
# Start and enable iscsi
sudo systemctl enable iscsi
sudo systemctl start iscsi
```
And now you can install the drivers:
```bash
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
# enc0 bulk storage (nfs)
helm upgrade \
  --install \
  --values truenas-nfs-enc0.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-nfs-enc0 democratic-csi/democratic-csi
# enc1 fast storage (nfs)
helm upgrade \
  --install \
  --values truenas-nfs-enc1.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-nfs-enc1 democratic-csi/democratic-csi
# enc0 bulk storage (iscsi)
helm upgrade \
  --install \
  --values truenas-iscsi-enc0.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-iscsi-enc0 democratic-csi/democratic-csi
# enc1 fast storage (iscsi)
helm upgrade \
  --install \
  --values truenas-iscsi-enc1.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-iscsi-enc1 democratic-csi/democratic-csi
```
You can test that things worked with:
```bash
kubectl apply -f democratic-csi-pvc-test.yaml
```
### Nextcloud
The first chart we'll deploy is nextcloud.
```bash
helm upgrade --install \
  nextcloud \
  ./helm/nextcloud \
  --namespace nextcloud \
  --create-namespace
```
Need to copy lots of files? Copy them into the user's data directory, then run
(inside the nextcloud container, as the `www-data` user):
```bash
./occ files:scan --all
```
### Gitea
Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/)
```bash
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm upgrade --install \
  gitea \
  gitea-charts/gitea \
  --values gitea-values.yaml \
  --namespace gitea \
  --create-namespace
```
### Troubleshooting
Deleting a stuck namespace
```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```

gitea-values.yaml
ingress:
  enabled: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
  hosts:
    - host: gitea.reeseapps.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - hosts:
        - gitea.reeseapps.com
      secretName: gitea-tls-cert
persistence:
  enabled: true
  storageClass: zfs-iscsi-enc0
  annotations:
    "helm.sh/resource-policy": keep
postgresql:
  persistence:
    enabled: true
    storageClass: zfs-iscsi-enc0
    annotations:
      "helm.sh/resource-policy": keep
gitea:
  admin:
    username: "ducoterra"
    password: "changeme"
    email: "gitea@ducoterra.net"
  config:
    service:
      DISABLE_REGISTRATION: true
service:
  ssh:
    port: 2222
    type: LoadBalancer
    externalTrafficPolicy: Local

helm/nextcloud/.helmignore
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

helm/nextcloud/Chart.yaml
apiVersion: v2
name: nextcloud
description: A simple Nextcloud chart
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0

@@ -0,0 +1,399 @@
{{ define "helm_keep_annotation" }}
"helm.sh/resource-policy": keep
{{ end }}
{{/* Generated Postgres Config */}}
{{ define "POSTGRES_NAME" }}{{ printf "%s-postgres" .Release.Name | lower }}{{ end }}
{{ define "POSTGRES_DB" }}nextcloud{{ end }}
{{ define "DATABASE_HOST" }}{{ .Release.Name }}-postgres{{ end }}
{{ define "POSTGRES_USER" }}postgres{{ end }}
{{ $POSTGRES_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "POSTGRES_NAME" . )).data }}
{{ $POSTGRES_PASSWORD := (ternary (dict "POSTGRES_PASSWORD" (randAlphaNum 64 | b64enc)) $POSTGRES_SECRETS (not $POSTGRES_SECRETS)).POSTGRES_PASSWORD }}
{{/* Generated Nextcloud Config */}}
{{ define "NEXTCLOUD_NAME" }}{{ printf "%s-nextcloud" .Release.Name | lower }}{{ end }}
{{ define "ADMIN_USER" }}admin{{ end }}
{{ $NEXTCLOUD_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "NEXTCLOUD_NAME" . )).data }}
{{/* ternary (create a dict with random NEXTCLOUD_ADMIN_PASSWORD) (actual dictionary) (test whether NEXTCLOUD_SECRETS exists) */}}
{{ $NEXTCLOUD_ADMIN_PASSWORD := (ternary (dict "NEXTCLOUD_ADMIN_PASSWORD" (randAlphaNum 64 | b64enc)) $NEXTCLOUD_SECRETS (not $NEXTCLOUD_SECRETS)).NEXTCLOUD_ADMIN_PASSWORD }}
{{/* Generated Redis Config */}}
{{ define "REDIS_NAME" }}{{ printf "%s-redis" .Release.Name | lower }}{{ end }}
{{ define "REDIS_HOST" }}{{ .Release.Name }}-redis{{ end }}
{{ $REDIS_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "REDIS_NAME" . )).data }}
{{ $REDIS_PASSWORD := (ternary (dict "REDIS_PASSWORD" (randAlphaNum 64 | b64enc)) $REDIS_SECRETS (not $REDIS_SECRETS)).REDIS_PASSWORD }}
{{/* Uncomment this and run with --debug to verify secrets are working
# NEXTCLOUD_ADMIN_PASSWORD: {{ $NEXTCLOUD_ADMIN_PASSWORD | quote }}
# POSTGRES_PASSWORD: {{ $POSTGRES_PASSWORD | quote }}
# REDIS_HOST_PASSWORD: {{ $REDIS_PASSWORD | quote }}
*/}}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-postgres
  annotations:
    {{ include "helm_keep_annotation" . | nindent 4 }}
data:
  POSTGRES_USER: postgres
  POSTGRES_DB: nextcloud
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-postgres
  annotations:
    {{ include "helm_keep_annotation" . | nindent 4 }}
type: Opaque
data:
  POSTGRES_PASSWORD: {{ $POSTGRES_PASSWORD | quote }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-postgres-init
  annotations:
    {{ include "helm_keep_annotation" . | nindent 4 }}
stringData:
  init-user-db.sh: |
    #!/bin/bash
    set -e
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    CREATE USER nextcloud PASSWORD '{{ $POSTGRES_PASSWORD | b64dec }}';
    GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
    GRANT USAGE, CREATE ON SCHEMA public TO nextcloud;
    EOSQL
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-redis
  annotations:
    {{ include "helm_keep_annotation" . | nindent 4 }}
type: Opaque
data:
  REDIS_PASSWORD: {{ $REDIS_PASSWORD | quote }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-nextcloud
  annotations:
    {{ include "helm_keep_annotation" . | nindent 4 }}
data:
  NEXTCLOUD_TRUSTED_DOMAINS: {{ .Values.nextcloud.domain }}
  OVERWRITEPROTOCOL: https
  OVERWRITECLIURL: https://{{ .Values.nextcloud.domain }}
  NEXTCLOUD_ADMIN_USER: admin
  POSTGRES_USER: nextcloud
  POSTGRES_HOST: {{ .Release.Name }}
  POSTGRES_DB: nextcloud
  REDIS_HOST: {{ .Release.Name }}
---
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Release.Name }}-nextcloud
  annotations:
    {{ include "helm_keep_annotation" . | nindent 4 }}
type: Opaque
data:
  NEXTCLOUD_ADMIN_PASSWORD: {{ $NEXTCLOUD_ADMIN_PASSWORD | quote }}
  POSTGRES_PASSWORD: {{ $POSTGRES_PASSWORD | quote }}
  REDIS_HOST_PASSWORD: {{ $REDIS_PASSWORD | quote }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nextcloud
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: {{ .Values.nextcloud.image }}
          ports:
            - containerPort: 80
              name: http
          envFrom:
            - configMapRef:
                name: {{ .Release.Name }}-nextcloud
            - secretRef:
                name: {{ .Release.Name }}-nextcloud
          volumeMounts:
            - mountPath: /var/www/html
              name: html
            - mountPath: /var/www/html/data
              name: data
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "4Gi"
              cpu: "4"
        - name: postgres
          image: postgres:15
          envFrom:
            - configMapRef:
                name: {{ .Release.Name }}-postgres
            - secretRef:
                name: {{ .Release.Name }}-postgres
          volumeMounts:
            - name: postgres
              mountPath: /var/lib/postgresql/data
            - name: postgres-init
              mountPath: /docker-entrypoint-initdb.d/init-user-db.sh
              subPath: init-user-db.sh
          ports:
            - containerPort: 5432
              name: postgres
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "4Gi"
              cpu: "4"
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - mountPath: /data
              name: redis
          command:
            - redis-server
            - --save
            - "60"
            - "1"
            - --loglevel
            - warning
            - --requirepass
            - {{ $REDIS_PASSWORD | b64dec | quote }}
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "4Gi"
              cpu: "4"
      volumes:
        - name: html
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-html
          # emptyDir:
          #   sizeLimit: 1Gi
        - name: data
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-data
          # emptyDir:
          #   sizeLimit: 1Gi
        - name: postgres
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-postgres
          # emptyDir:
          #   sizeLimit: 1Gi
        - name: redis
          persistentVolumeClaim:
            claimName: {{ .Release.Name }}-redis
          # emptyDir:
          #   sizeLimit: 1Gi
        - name: postgres-init
          secret:
            secretName: {{ .Release.Name }}-postgres-init
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .Release.Name }}-cron
spec:
  schedule: "*/5 * * * *"
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 0
  jobTemplate:
    spec:
      template:
        spec:
          securityContext:
            runAsUser: 33
            runAsGroup: 33
          containers:
            - name: nextcloud
              image: {{ .Values.nextcloud.image }}
              command:
                - php
                - -f
                - cron.php
              volumeMounts:
                - mountPath: /var/www/html
                  name: html
                - mountPath: /var/www/html/data
                  name: data
              envFrom:
                - configMapRef:
                    name: {{ .Release.Name }}-nextcloud
                - secretRef:
                    name: {{ .Release.Name }}-nextcloud
              resources:
                requests:
                  memory: "1Gi"
                  cpu: "1m"
                limits:
                  memory: "4Gi"
                  cpu: "4"
          volumes:
            - name: html
              persistentVolumeClaim:
                claimName: {{ .Release.Name }}-html
              # emptyDir:
              #   sizeLimit: 1Gi
            - name: data
              persistentVolumeClaim:
                claimName: {{ .Release.Name }}-data
              # emptyDir:
              #   sizeLimit: 1Gi
          restartPolicy: OnFailure
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-html
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-data
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-postgres
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 32Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: {{ .Release.Name }}-redis
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: nextcloud
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
    - name: postgres
      protocol: TCP
      port: 5432
      targetPort: postgres
    - name: redis
      protocol: TCP
      port: 6379
      targetPort: redis
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Name }}
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  rules:
    - host: {{ .Values.nextcloud.domain }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Release.Name }}
                port:
                  name: http
  tls:
    - hosts:
        - {{ .Values.nextcloud.domain }}
      secretName: nextcloud-tls-cert

helm/nextcloud/values.yaml
nextcloud:
  image: nextcloud:26
  domain: nextcloud.reeseapps.com

letsencrypt-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: nginx@ducoterra.net
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class: nginx

@@ -0,0 +1,51 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: democratic-csi-test
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: democratic-csi-test
spec:
  selector:
    matchLabels:
      app: democratic-csi-test
  template:
    metadata:
      labels:
        app: democratic-csi-test
    spec:
      containers:
        - image: debian
          command:
            - bash
            - -c
            - 'sleep infinity'
          name: democratic-csi-test
          volumeMounts:
            - mountPath: /test
              name: test
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "1Mi"
              cpu: "1m"
      restartPolicy: Always
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: democratic-csi-test

@@ -0,0 +1,66 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: ingress-nginx-demo
  ports:
    - name: ingress-nginx-demo
      protocol: TCP
      port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  rules:
    - host: ingress-nginx-demo.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-demo
                port:
                  number: 80
  tls:
    - hosts:
        - ingress-nginx-demo.reeseapps.com
      secretName: ingress-nginx-demo-tls-cert

tests/nextcloud-test.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres
data:
  POSTGRES_USER: postgres
  POSTGRES_PASSWORD: postgres
  POSTGRES_DB: nextcloud
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-init
data:
  init-user-db.sh: |
    #!/bin/bash
    set -e
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    CREATE USER nextcloud PASSWORD 'nextcloud';
    GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
    GRANT USAGE, CREATE ON SCHEMA public TO nextcloud;
    EOSQL
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nextcloud
data:
  NEXTCLOUD_TRUSTED_DOMAINS: "nextcloud.reeseapps.com"
  OVERWRITEPROTOCOL: https
  NEXTCLOUD_ADMIN_USER: admin
  NEXTCLOUD_ADMIN_PASSWORD: 9quirky4
  POSTGRES_USER: nextcloud
  POSTGRES_PASSWORD: nextcloud
  POSTGRES_HOST: localhost
  POSTGRES_DB: nextcloud
  REDIS_HOST: localhost
  REDIS_HOST_PASSWORD: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nextcloud
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: nextcloud
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nextcloud
    spec:
      containers:
        - name: nextcloud
          image: nextcloud:26
          ports:
            - containerPort: 80
              name: http
          envFrom:
            - configMapRef:
                name: nextcloud
          volumeMounts:
            - mountPath: /var/www/html
              name: html
            - mountPath: /var/www/html/data
              name: data
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "1Gi"
              cpu: "1"
        - name: postgres
          image: postgres:15
          envFrom:
            - configMapRef:
                name: postgres
          volumeMounts:
            - name: postgres
              mountPath: /var/lib/postgresql/data
            - name: postgres-init
              mountPath: /docker-entrypoint-initdb.d/init-user-db.sh
              subPath: init-user-db.sh
          ports:
            - containerPort: 5432
              name: postgres
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "1Gi"
              cpu: "1"
        - name: redis
          image: redis:7
          ports:
            - containerPort: 6379
              name: redis
          volumeMounts:
            - mountPath: /data
              name: redis
          command:
            - redis-server
            - --save
            - "60"
            - "1"
            - --loglevel
            - warning
            - --requirepass
            - redis
          resources:
            requests:
              memory: "1Gi"
              cpu: "1m"
            limits:
              memory: "1Gi"
              cpu: "1"
      volumes:
        - name: html
          emptyDir:
            sizeLimit: 500Mi
        - name: data
          emptyDir:
            sizeLimit: 500Mi
        - name: postgres
          emptyDir:
            sizeLimit: 500Mi
        - name: redis
          emptyDir:
            sizeLimit: 500Mi
        - name: postgres-init
          configMap:
            name: postgres-init
---
apiVersion: v1
kind: Service
metadata:
  name: nextcloud
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: nextcloud
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nextcloud
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  rules:
    - host: nextcloud.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nextcloud
                port:
                  name: http
  tls:
    - hosts:
        - nextcloud.reeseapps.com
      secretName: nextcloud-tls-cert

truenas-iscsi-enc0.yaml
csiDriver:
  name: "driveripper.zfs-iscsi-enc0"
# add note here about volume expansion requirements
storageClasses:
  - name: zfs-iscsi-enc0
    defaultClass: false
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      # for block-based storage can be ext3, ext4, xfs
      fsType: xfs
    mountOptions: []
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:
volumeSnapshotClasses: []
driver:
  config:
    driver: freenas-api-iscsi
    instance_id:
    httpConnection:
      protocol: https
      host: driveripper.reeseapps.com
      port: 8443
      # use only 1 of apiKey or username/password
      # if both are present, apiKey is preferred
      # apiKey is only available starting in TrueNAS-12
      apiKey: <key>
      # username: root
      # password:
      allowInsecure: false
      # use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
      # leave unset for auto-detection
      apiVersion: 2
    zfs:
      # can be used to set arbitrary values on the dataset/zvol
      # can use handlebars templates with the parameters from the storage class/CO
      datasetProperties:
        "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      datasetParentName: enc0/dcsi/apps
      # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
      # they may be siblings, but neither should be nested in the other
      detachedSnapshotsDatasetParentName: enc0/dcsi/snaps
      zvolCompression:
      # "" (inherit), on, off, verify
      zvolDedup:
      zvolEnableReservation: false
      # 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
      zvolBlocksize:
    iscsi:
      targetPortal: "democratic-csi-server.reeselink.com"
      # for multipath
      # targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
      # leave empty to omit usage of -I with iscsiadm
      interface:
      # MUST ensure uniqueness
      # full iqn limit is 223 bytes, plan accordingly
      # default is "{{ name }}"
      nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      namePrefix: "dcsi-"
      nameSuffix: "-enc0"
      # add as many as needed
      targetGroups:
        # get the correct ID from the "portal" section in the UI
        - targetGroupPortalGroup: 7
          # get the correct ID from the "initiators" section in the UI
          targetGroupInitiatorGroup: 1
          # None, CHAP, or CHAP Mutual
          targetGroupAuthType: None
          # get the correct ID from the "Authorized Access" section of the UI
          # only required if using Chap
          targetGroupAuthGroup:
      extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      # 512, 1024, 2048, or 4096,
      extentBlocksize: 512
      # "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
      extentRpm: "SSD"
      # 0-100 (0 == ignore)
      extentAvailThreshold: 0

truenas-iscsi-enc1.yaml
csiDriver:
  name: "driveripper.zfs-iscsi-enc1"
# add note here about volume expansion requirements
storageClasses:
  - name: zfs-iscsi-enc1
    defaultClass: false
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      # for block-based storage can be ext3, ext4, xfs
      fsType: xfs
    mountOptions: []
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:
volumeSnapshotClasses: []
driver:
  config:
    driver: freenas-api-iscsi
    instance_id:
    httpConnection:
      protocol: https
      host: driveripper.reeseapps.com
      port: 8443
      # use only 1 of apiKey or username/password
      # if both are present, apiKey is preferred
      # apiKey is only available starting in TrueNAS-12
      apiKey: <key>
      # username: root
      # password:
      allowInsecure: false
      # use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
      # leave unset for auto-detection
      apiVersion: 2
    zfs:
      # can be used to set arbitrary values on the dataset/zvol
      # can use handlebars templates with the parameters from the storage class/CO
      datasetProperties:
        "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      datasetParentName: enc1/dcsi/apps
      # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
      # they may be siblings, but neither should be nested in the other
      detachedSnapshotsDatasetParentName: enc1/dcsi/snaps
      zvolCompression:
      # "" (inherit), on, off, verify
      zvolDedup:
      zvolEnableReservation: false
      # 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
      zvolBlocksize:
    iscsi:
      targetPortal: "democratic-csi-server.reeselink.com"
      # for multipath
      # targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
      # leave empty to omit usage of -I with iscsiadm
      interface:
      # MUST ensure uniqueness
      # full iqn limit is 223 bytes, plan accordingly
      # default is "{{ name }}"
      nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      namePrefix: "dcsi-"
      nameSuffix: "-enc1"
      # add as many as needed
      targetGroups:
        # get the correct ID from the "portal" section in the UI
        - targetGroupPortalGroup: 7
          # get the correct ID from the "initiators" section in the UI
          targetGroupInitiatorGroup: 1
          # None, CHAP, or CHAP Mutual
          targetGroupAuthType: None
          # get the correct ID from the "Authorized Access" section of the UI
          # only required if using Chap
          targetGroupAuthGroup:
      extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      extentInsecureTpc: true
      extentXenCompat: false
      extentDisablePhysicalBlocksize: true
      # 512, 1024, 2048, or 4096,
      extentBlocksize: 512
      # "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
      extentRpm: "SSD"
      # 0-100 (0 == ignore)
      extentAvailThreshold: 0

truenas-nfs-enc0.yaml
csiDriver:
  name: "driveripper.zfs-nfs-enc0"
storageClasses:
  - name: zfs-nfs-enc0
    defaultClass: true
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - nfsvers=4
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:
volumeSnapshotClasses: []
driver:
  config:
    driver: freenas-api-nfs
    instance_id:
    httpConnection:
      protocol: https
      host: driveripper.reeselink.com
      port: 443
      # use only 1 of apiKey or username/password
      # if both are present, apiKey is preferred
      # apiKey is only available starting in TrueNAS-12
      apiKey: <key>
      # username: root
      # password:
      allowInsecure: false
      # use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
      # leave unset for auto-detection
      apiVersion: 2
    zfs:
      # can be used to override defaults if necessary
      # the example below is useful for TrueNAS 12
      #cli:
      #  sudoEnabled: true
      #
      # leave paths unset for auto-detection
      # paths:
      #   zfs: /usr/local/sbin/zfs
      #   zpool: /usr/local/sbin/zpool
      #   sudo: /usr/local/bin/sudo
      #   chroot: /usr/sbin/chroot
      # can be used to set arbitrary values on the dataset/zvol
      # can use handlebars templates with the parameters from the storage class/CO
      datasetProperties:
        "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      datasetParentName: enc0/democratic-csi/apps
      # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
      # they may be siblings, but neither should be nested in the other
      detachedSnapshotsDatasetParentName: enc0/democratic-csi/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 3000
      datasetPermissionsGroup: 3000
      # not supported yet
      #datasetPermissionsAcls:
      #- "-m everyone@:full_set:allow"
      #- "-m u:kube:full_set:allow"
    nfs:
      shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareHost: democratic-csi-server.reeselink.com
      shareAlldirs: false
      shareAllowedHosts: [democratic-csi-client-1.reeselink.com]
      shareAllowedNetworks: []
      shareMaprootUser: kube
      shareMaprootGroup: kube
      shareMapallUser: ""
      shareMapallGroup: ""

truenas-nfs-enc1.yaml
csiDriver:
  name: "driveripper.zfs-nfs-enc1"
storageClasses:
  - name: zfs-nfs-enc1
    defaultClass: false
    reclaimPolicy: Delete
    volumeBindingMode: Immediate
    allowVolumeExpansion: true
    parameters:
      fsType: nfs
    mountOptions:
      - nfsvers=4
    secrets:
      provisioner-secret:
      controller-publish-secret:
      node-stage-secret:
      node-publish-secret:
      controller-expand-secret:
volumeSnapshotClasses: []
driver:
  config:
    driver: freenas-api-nfs
    instance_id:
    httpConnection:
      protocol: https
      host: driveripper.reeselink.com
      port: 443
      # use only 1 of apiKey or username/password
      # if both are present, apiKey is preferred
      # apiKey is only available starting in TrueNAS-12
      apiKey: <key>
      # username: root
      # password:
      allowInsecure: false
      # use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
      # leave unset for auto-detection
      apiVersion: 2
    zfs:
      # can be used to override defaults if necessary
      # the example below is useful for TrueNAS 12
      #cli:
      #  sudoEnabled: true
      #
      # leave paths unset for auto-detection
      # paths:
      #   zfs: /usr/local/sbin/zfs
      #   zpool: /usr/local/sbin/zpool
      #   sudo: /usr/local/bin/sudo
      #   chroot: /usr/sbin/chroot
      # can be used to set arbitrary values on the dataset/zvol
      # can use handlebars templates with the parameters from the storage class/CO
      datasetProperties:
        "org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      datasetParentName: enc1/democratic-csi/apps
      # do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
      # they may be siblings, but neither should be nested in the other
      detachedSnapshotsDatasetParentName: enc1/democratic-csi/snaps
      datasetEnableQuotas: true
      datasetEnableReservation: false
      datasetPermissionsMode: "0777"
      datasetPermissionsUser: 3000
      datasetPermissionsGroup: 3000
      # not supported yet
      #datasetPermissionsAcls:
      #- "-m everyone@:full_set:allow"
      #- "-m u:kube:full_set:allow"
    nfs:
      shareCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
      shareHost: democratic-csi-server.reeselink.com
      shareAlldirs: false
      shareAllowedHosts: [democratic-csi-client-1.reeselink.com]
      shareAllowedNetworks: []
      shareMaprootUser: kube
      shareMaprootGroup: kube
      shareMapallUser: ""
      shareMapallGroup: ""