remove metallb, update charts and readme

2023-09-07 08:51:02 -04:00
parent 813d7e0e5b
commit 6791df6e32
16 changed files with 619 additions and 108 deletions


@@ -126,6 +126,7 @@ Test the renewal process with:
```bash
sudo certbot renew --cert-name containers.reeselink.com --dry-run
sudo certbot renew --cert-name containers.reeseapps.com --dry-run
```
### Renewal
@@ -138,8 +139,12 @@ Create a renewal script in /usr/lib/scripts/certbot-renew.sh
#!/bin/bash
/usr/bin/certbot renew --cert-name containers.reeselink.com
cp /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
cp /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
/usr/bin/certbot renew --cert-name containers.reeseapps.com
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeseapps.com/fullchain.pem /etc/cockpit/ws-certs.d/60-letsencrypt.cert
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeseapps.com/privkey.pem /etc/cockpit/ws-certs.d/60-letsencrypt.key
```
Now create a systemd oneshot service to run the script
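A minimal sketch of that oneshot unit, plus a timer to trigger it, might look like this (the unit names and daily schedule are assumptions, not something this repo prescribes):
```bash
# Sketch only -- adjust unit names and the schedule to taste
sudo chmod +x /usr/lib/scripts/certbot-renew.sh

sudo tee /etc/systemd/system/certbot-renew.service >/dev/null <<'EOF'
[Unit]
Description=Renew certbot certificates and copy them for cockpit

[Service]
Type=oneshot
ExecStart=/usr/lib/scripts/certbot-renew.sh
EOF

sudo tee /etc/systemd/system/certbot-renew.timer >/dev/null <<'EOF'
[Unit]
Description=Run certbot-renew.service daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now certbot-renew.timer
```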

README.md

@@ -1,4 +1,4 @@
# Containers!
# Containers
A project to store container-based hosting stuff.
@@ -24,8 +24,6 @@ curl -sfL https://get.k3s.io | sh -s - \
"local-storage" \
"--disable" \
"coredns" \
"--disable" \
"servicelb" \
"--cluster-dns" \
"10.43.0.10"
```
@@ -70,6 +68,12 @@ kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
### Metal LB
The problem with MetalLB shows up when a service needs the real IP address of a client. You can
get the real IP with `externalTrafficPolicy: Local`, but that prevents shared IP addresses even
when services run on different ports. Klipper, on the other hand, seems to handle this
just fine. MetalLB isn't great for a local installation for this reason, but I'm leaving
the docs here just in case.
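To illustrate the trade-off, here's a hypothetical LoadBalancer Service (the name, ports, and selector are made up): MetalLB only shares an IP between services that use the `Cluster` traffic policy, so the two settings below work against each other.
```bash
# Hypothetical example only -- not part of this repo's charts
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: example-web
  annotations:
    # MetalLB: let other services on other ports share this external IP
    metallb.universe.tf/allow-shared-ip: "containers"
spec:
  type: LoadBalancer
  # "Local" preserves the client's real IP, but MetalLB only shares IPs
  # between services using "Cluster", so these two settings conflict
  externalTrafficPolicy: Local
  selector:
    app: example-web
  ports:
    - port: 80
      targetPort: 8080
EOF
```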
We'll be swapping K3S's default load balancer with MetalLB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little
more complicated: you'll need special annotations (see below) but it's otherwise a well-tested,
@@ -191,9 +195,9 @@ For example: your truenas server might be driveripper.reeselink.com and your kub
containers.reeselink.com. You should also have a democratic-csi-server.reeselink.com and a
democratic-csi-client-1.reeselink.com which might be on 172.20.0.1 and 172.20.0.2.
https://github.com/democratic-csi/democratic-csi
<https://github.com/democratic-csi/democratic-csi>
ISCSI requires a bit of server config before proceeding:
iSCSI requires a bit of server config before proceeding. Run the following on the Kubernetes node.
```bash
# Install the following system packages
@@ -211,24 +215,67 @@ sudo systemctl enable iscsi
sudo systemctl start iscsi
```
And now you can install the drivers:
Now you can install the drivers. Note we won't be using the API drivers for TrueNAS
SCALE. These have intermittent stability issues (especially when deleting
volumes... as in it won't delete volumes). As of 6/13/23 I don't recommend them.
Note: you can switch between driver types after install, so there's no risk in using the
stable driver first and then experimenting with the API driver.
Before we begin, you'll need to create a new "democratic" user on TrueNAS. First, create an
SSH key for the user:
```bash
ssh-keygen -t rsa -N '' -f secrets/democratic_rsa.prod
chmod 600 secrets/democratic_rsa.prod
```
Now in the web console, use the following options:
| Field | Value |
|----------------------------------------|------------------------------------------------|
| Full Name | democratic |
| Username | democratic |
| Email | blank |
| Disable Password | True |
| Create New Primary Group | True |
| Auxiliary Groups | None |
| Create Home Directory | True |
| Authorized Keys | paste the generated ".pub" key here |
| Shell | bash |
| Allowed sudo commands | /usr/sbin/zfs /usr/sbin/zpool /usr/sbin/chroot |
| Allowed sudo commands with no password | /usr/sbin/zfs /usr/sbin/zpool /usr/sbin/chroot |
| Samba Authentication | False |
Save the user and verify SSH works with:
```bash
ssh -i secrets/democratic_rsa.prod democratic@driveripper.reeselink.com
# test forbidden sudo command, should require a password
sudo ls
# test allowed sudo command
sudo zfs list
```
Copy `truenas-iscsi-enc0-stable.yaml` to `secrets/` and populate the secrets. Then
run the following to install it.
```bash
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update
# enc0 bulk storage (iscsi)
# enc0 stable storage (iscsi)
helm upgrade \
--install \
--values truenas-iscsi-enc0.yaml \
--values secrets/truenas-iscsi-enc0-stable.yaml \
--namespace democratic-csi \
--create-namespace \
zfs-iscsi-enc0 democratic-csi/democratic-csi
# enc1 fast storage (iscsi)
# enc1 stable storage (iscsi)
helm upgrade \
--install \
--values truenas-iscsi-enc1.yaml \
--values secrets/truenas-iscsi-enc1-stable.yaml \
--namespace democratic-csi \
--create-namespace \
zfs-iscsi-enc1 democratic-csi/democratic-csi
@@ -237,13 +284,16 @@ zfs-iscsi-enc1 democratic-csi/democratic-csi
You can test that things worked with:
```bash
kubectl apply -f democratic-csi-pvc-test.yaml
kubectl apply -f tests/democratic-csi-pvc-test.yaml
kubectl delete -f tests/democratic-csi-pvc-test.yaml
```
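For reference, a throwaway PVC along these lines is enough to exercise the provisioner (this is a sketch of what such a test might contain, not the repo's actual `tests/democratic-csi-pvc-test.yaml`; the PVC name is made up):
```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dcsi-pvc-test
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
# A Bound PVC (and a matching zvol on the server) means provisioning works
kubectl get pvc dcsi-pvc-test
kubectl delete pvc dcsi-pvc-test
```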
Because iSCSI mounts block devices, troubleshooting mount issues, data corruption,
and exploring PVC contents must happen on the client device. Here are a few cheat-sheet
commands to make things easier:
Note for iSCSI login: set `node.session.auth.username`, NOT `node.session.auth.username_in`.
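For example, if the target uses CHAP, the credentials go on the node record like this (the target name and credentials below are placeholders):
```bash
# Placeholder values -- substitute your own target and CHAP credentials
ISCSI_TARGET="iqn.2005-10.org.freenas.ctl:dcsi-example-enc0"
iscsiadm --mode node --targetname "$ISCSI_TARGET" \
  --op update --name node.session.auth.authmethod --value CHAP
iscsiadm --mode node --targetname "$ISCSI_TARGET" \
  --op update --name node.session.auth.username --value myuser
iscsiadm --mode node --targetname "$ISCSI_TARGET" \
  --op update --name node.session.auth.password --value mypassword
```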
```bash
# discover all targets on the server
iscsiadm --mode discovery \
@@ -286,10 +336,10 @@ iscsiadm --mode node \
--targetname $ISCSI_TARGET
# view all connected disks
ls /dev/disk/by-path/
ls /dev/zvol/
# mount a disk
mount -t xfs /dev/disk/by-path/... /mnt/iscsi
mount -t xfs /dev/zvol/... /mnt/iscsi
# emergency - by-path isn't available
# (look for "Attached scsi disk")
@@ -337,7 +387,8 @@ helm upgrade --install \
nextcloud \
./helm/nextcloud \
--namespace nextcloud \
--create-namespace
--create-namespace \
--values secrets/nextcloud-values.yaml
```
Need to copy lots of files? Copy them to the user data dir and then run
@@ -346,6 +397,10 @@ Need to copy lots of files? Copy them to the user data dir and then run
./occ files:scan --all
```
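One way to run that from outside the container, in case it helps (a sketch that assumes the `nextcloud` namespace and release name from the install command above and the stock Nextcloud image layout):
```bash
# Hypothetical invocation -- adjust namespace, deployment, and user to your setup
kubectl -n nextcloud exec -it deploy/nextcloud -- \
  su -s /bin/bash www-data -c "php /var/www/html/occ files:scan --all"
```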
Set up SES with the following link:
<https://docs.aws.amazon.com/general/latest/gr/ses.html>
### Gitea
Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/). We're not
@@ -409,6 +464,30 @@ helm upgrade --install \
--create-namespace
```
### Snapdrop
Snapdrop is a file-sharing app that provides AirDrop-like functionality over the web.
```bash
helm upgrade --install \
snapdrop \
./helm/snapdrop \
--namespace snapdrop \
--create-namespace
```
### Jellyfin
This assumes you have a media NFS share.
```bash
helm upgrade --install \
jellyfin \
./helm/jellyfin \
--namespace jellyfin \
--create-namespace
```
### Troubleshooting
Deleting a stuck namespace
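In case it's useful here, one common way to clear a namespace stuck in `Terminating` is to drop its finalizers (a sketch only; requires `jq`, and the namespace name is a placeholder):
```bash
NS=stuck-namespace
kubectl get namespace "$NS" -o json \
  | jq 'del(.spec.finalizers[])' \
  | kubectl replace --raw "/api/v1/namespaces/$NS/finalize" -f -
```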
@@ -432,7 +511,7 @@ Mounting an ix-application volume from truenas
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
#"unset" the mountpoing (back to legacy)
#"unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```


@@ -0,0 +1,111 @@
csiDriver:
name: "driveripper.zfs-iscsi-enc0"
# add note here about volume expansion requirements
storageClasses:
- name: zfs-iscsi-enc0
defaultClass: false
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: true
parameters:
# for block-based storage can be ext3, ext4, xfs
fsType: xfs
mountOptions: []
secrets:
provisioner-secret:
controller-publish-secret:
node-stage-secret:
node-publish-secret:
controller-expand-secret:
volumeSnapshotClasses: []
driver:
config:
driver: freenas-iscsi
instance_id:
httpConnection:
protocol: https
host: driveripper.reeseapps.com
port: 8443
allowInsecure: false
# use only 1 of apiKey or username/password
# if both are present, apiKey is preferred
# apiKey is only available starting in TrueNAS-12
apiKey:
# username:
# password:
# use apiVersion 2 for TrueNAS-12 and up (will work on 11.x in some scenarios as well)
# leave unset for auto-detection
apiVersion: 2
sshConnection:
host: democratic-csi-server.reeselink.com
port: 22
username: democratic
# use either password or key
# password: ""
privateKey: |
-----BEGIN OPENSSH PRIVATE KEY-----
...
-----END OPENSSH PRIVATE KEY-----
zfs:
cli:
sudoEnabled: true
paths:
zfs: /usr/sbin/zfs
zpool: /usr/sbin/zpool
sudo: /usr/bin/sudo
chroot: /usr/sbin/chroot
# can be used to set arbitrary values on the dataset/zvol
# can use handlebars templates with the parameters from the storage class/CO
datasetProperties:
"org.freenas:description": "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
datasetParentName: enc0/dcsi/apps
# do NOT make datasetParentName and detachedSnapshotsDatasetParentName overlap
# they may be siblings, but neither should be nested in the other
detachedSnapshotsDatasetParentName: enc0/dcsi/snaps
zvolCompression:
# "" (inherit), on, off, verify
zvolDedup:
zvolEnableReservation: false
# 512, 1K, 2K, 4K, 8K, 16K, 64K, 128K default is 16K
zvolBlocksize:
iscsi:
targetPortal: "democratic-csi-server.reeselink.com"
# for multipath
# targetPortals: [] # [ "server[:port]", "server[:port]", ... ]
# leave empty to omit usage of -I with iscsiadm
interface:
# MUST ensure uniqueness
# full iqn limit is 223 bytes, plan accordingly
# default is "{{ name }}"
nameTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}-{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
namePrefix: "dcsi-"
nameSuffix: "-enc0"
# add as many as needed
targetGroups:
# get the correct ID from the "portal" section in the UI
- targetGroupPortalGroup: 7
# get the correct ID from the "initiators" section in the UI
targetGroupInitiatorGroup: 1
# None, CHAP, or CHAP Mutual
targetGroupAuthType: None
# get the correct ID from the "Authorized Access" section of the UI
# only required if using Chap
targetGroupAuthGroup:
extentCommentTemplate: "{{ parameters.[csi.storage.k8s.io/pvc/namespace] }}/{{ parameters.[csi.storage.k8s.io/pvc/name] }}"
extentInsecureTpc: true
extentXenCompat: false
extentDisablePhysicalBlocksize: true
# 512, 1024, 2048, or 4096,
extentBlocksize: 512
# "" (let FreeNAS decide, currently defaults to SSD), Unknown, SSD, 5400, 7200, 10000, 15000
extentRpm: "SSD"
# 0-100 (0 == ignore)
extentAvailThreshold: 0

helm/jellyfin/.helmignore Executable file

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

helm/jellyfin/Chart.yaml Executable file

@@ -0,0 +1,23 @@
apiVersion: v2
name: jellyfin
description: A Simple Jellyfin Chart
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0


@@ -0,0 +1,141 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: jellyfin
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: jellyfin
spec:
securityContext:
runAsUser: 1000
runAsGroup: 1000
fsGroup: 1000
containers:
- name: jellyfin
image: {{ .Values.jellyfin.image }}
ports:
- containerPort: 8096
name: http
volumeMounts:
- mountPath: /config
name: config
- mountPath: /cache
name: cache
- mountPath: /movies
name: movies
- mountPath: /shows
name: shows
- mountPath: /videos
name: videos
resources:
requests:
memory: "1Gi"
cpu: "1m"
limits:
memory: "8Gi"
cpu: "24"
volumes:
- name: config
persistentVolumeClaim:
claimName: {{ .Release.Name }}-config
- name: cache
persistentVolumeClaim:
claimName: {{ .Release.Name }}-cache
- name: movies
nfs:
server: democratic-csi-server.reeselink.com
path: /mnt/enc0/media/Movies
readOnly: true
- name: shows
nfs:
server: democratic-csi-server.reeselink.com
path: /mnt/enc0/media/Shows
readOnly: true
- name: videos
nfs:
server: democratic-csi-server.reeselink.com
path: /mnt/enc0/media/Videos
readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-config
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-cache
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc1
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 128Gi
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: jellyfin
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.org/client-max-body-size: "0"
spec:
rules:
- host: {{ .Values.jellyfin.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}
port:
name: http
tls:
- hosts:
- {{ .Values.jellyfin.domain }}
secretName: jellyfin-tls-cert

helm/jellyfin/values.yaml Executable file

@@ -0,0 +1,3 @@
jellyfin:
image: jellyfin/jellyfin:latest
domain: jellyfin.reeseapps.com


@@ -1,7 +1,7 @@
image: ducoterra/minecraft:latest
get_server:
image: ducoterra/get-minecraft:2.0.2
server_version: "1.19.4"
server_version: "1.20.1"
port: 25565
max_cpu: 4
max_ram: 8


@@ -104,13 +104,7 @@ data:
PHP_UPLOAD_LIMIT: 1000000M
PHP_MEMORY_LIMIT: 2048M
TRUSTED_PROXIES: 10.42.0.1/24
SMTP_HOST: email-smtp.us-east-2.amazonaws.com
SMTP_SECURE: tls
SMTP_PORT: "587"
SMTP_AUTHTYPE: LOGIN
SMTP_NAME: AKIA47PLXC3AOBYQITO2
MAIL_FROM_ADDRESS: server@nextcloud.reeseapps.com
MAIL_DOMAIN: nextcloud.reeseapps.com
APACHE_DISABLE_REWRITE_IP: "1"
---
@@ -218,23 +212,15 @@ spec:
- name: html
persistentVolumeClaim:
claimName: {{ .Release.Name }}-html-iops
# emptyDir:
# sizeLimit: 1Gi
- name: data
persistentVolumeClaim:
claimName: {{ .Release.Name }}-data
# emptyDir:
# sizeLimit: 1Gi
- name: postgres
persistentVolumeClaim:
claimName: {{ .Release.Name }}-postgres-iops
# emptyDir:
# sizeLimit: 1Gi
- name: redis
persistentVolumeClaim:
claimName: {{ .Release.Name }}-redis-iops
# emptyDir:
# sizeLimit: 1Gi
- name: postgres-init
secret:
secretName: {{ .Release.Name }}-postgres-init
@@ -284,33 +270,13 @@ spec:
- name: html
persistentVolumeClaim:
claimName: {{ .Release.Name }}-html-iops
# emptyDir:
# sizeLimit: 1Gi
- name: data
persistentVolumeClaim:
claimName: {{ .Release.Name }}-data
# emptyDir:
# sizeLimit: 1Gi
restartPolicy: OnFailure
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-html
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 16Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
@@ -323,39 +289,7 @@ spec:
- ReadWriteOnce
resources:
requests:
storage: 512Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-postgres
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 32Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-redis
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storage: 2Ti
---
@@ -440,22 +374,6 @@ metadata:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.org/client-max-body-size: "0"
nginx.org/server-snippets: |
location ^~ /.well-known {
# The rules in this block are an adaptation of the rules
# in `.htaccess` that concern `/.well-known`.
location = /.well-known/carddav { return 301 /remote.php/dav/; }
location = /.well-known/caldav { return 301 /remote.php/dav/; }
location /.well-known/acme-challenge { try_files $uri $uri/ =404; }
location /.well-known/pki-validation { try_files $uri $uri/ =404; }
# Let Nextcloud's API for `/.well-known` URIs handle all other
# requests by passing them to the front-end controller.
return 301 /index.php$request_uri;
}
spec:
rules:
- host: {{ .Values.nextcloud.domain }}

helm/snapdrop/.helmignore Executable file

@@ -0,0 +1,23 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/

helm/snapdrop/Chart.yaml Executable file

@@ -0,0 +1,23 @@
apiVersion: v2
name: snapdrop
description: A Simple Snapdrop Chart
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
appVersion: 1.16.0


@@ -0,0 +1,111 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-snapdrop
annotations:
"helm.sh/resource-policy": keep
data:
PUID: "1000"
PGID: "1000"
TZ: Etc/UTC
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}
spec:
selector:
matchLabels:
app.kubernetes.io/name: snapdrop
strategy:
type: Recreate
template:
metadata:
labels:
app.kubernetes.io/name: snapdrop
spec:
containers:
- name: snapdrop
image: {{ .Values.snapdrop.image }}
ports:
- containerPort: 80
name: http
envFrom:
- configMapRef:
name: {{ .Release.Name }}-snapdrop
volumeMounts:
- mountPath: /config
name: config
resources:
requests:
memory: "1Gi"
cpu: "1m"
limits:
memory: "4Gi"
cpu: "4"
volumes:
- name: config
persistentVolumeClaim:
claimName: {{ .Release.Name }}-config
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: {{ .Release.Name }}-config
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 8Gi
---
apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: snapdrop
ports:
- name: http
protocol: TCP
port: 80
targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}
annotations:
cert-manager.io/cluster-issuer: letsencrypt
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-body-size: "0"
nginx.org/client-max-body-size: "0"
spec:
rules:
- host: {{ .Values.snapdrop.domain }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}
port:
name: http
tls:
- hosts:
- {{ .Values.snapdrop.domain }}
secretName: snapdrop-tls-cert

helm/snapdrop/values.yaml Executable file

@@ -0,0 +1,3 @@
snapdrop:
image: linuxserver/snapdrop:latest
domain: snapdrop.reeseapps.com


@@ -1,5 +1,8 @@
controller:
service:
annotations:
metallb.universe.tf/allow-shared-ip: "containers"
externalTrafficPolicy: Cluster
# service:
# externalTrafficPolicy: Local
config:
enable-real-ip: "true"
use-forwarded-headers: "true"
compute-full-forwarded-for: "true"
proxy-real-ip-cidr: "0.0.0.0/0"


@@ -5,7 +5,7 @@ metadata:
annotations:
"helm.sh/resource-policy": keep
spec:
storageClassName: zfs-iscsi-enc0
storageClassName: zfs-iscsi-enc1
accessModes:
- ReadWriteOnce
resources:

tests/ffmpeg.yaml Normal file

@@ -0,0 +1,45 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: ffmpeg
spec:
selector:
matchLabels:
app: ffmpeg
template:
metadata:
labels:
app: ffmpeg
spec:
volumes:
- name: data
persistentVolumeClaim:
claimName: ffmpeg
containers:
- name: ffmpeg
image: linuxserver/ffmpeg:latest
volumeMounts:
- mountPath: /config
name: data
command:
- /bin/bash
- -c
- 'sleep infinity'
resources:
limits:
memory: "2Gi"
cpu: "8"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: ffmpeg
spec:
storageClassName: zfs-iscsi-enc0-ext4
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 64Gi