kube transfer to single-node host

2024-07-08 10:39:56 -04:00
parent d1afa569cc
commit 887df21477
69 changed files with 1675 additions and 2009 deletions


@@ -1,328 +0,0 @@
# Fedora Server
Fedora Server is an awesome container-hosting OS. It has a lot built in, and setup is pretty
quick.
- [Fedora Server](#fedora-server)
  - [Initialization](#initialization)
  - [Disable swap](#disable-swap)
  - [Network](#network)
  - [Enable iSCSI](#enable-iscsi)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S](#install-k3s)
  - [Database Backups](#database-backups)
  - [Expanding Root Partition](#expanding-root-partition)
  - [Arping IP Address](#arping-ip-address)
  - [Optional Steps](#optional-steps)
    - [Certbot for Cockpit](#certbot-for-cockpit)
## Initialization
1. `dnf install vim pwgen wireguard-tools`
2. `hostnamectl hostname node1`
3. Set a static IP through the web interface
4. Allow wheel group members to sudo without a password, for example:
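A minimal sketch using a sudoers drop-in (the filename is an assumption):

```bash
echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/wheel-nopasswd
chmod 0440 /etc/sudoers.d/wheel-nopasswd
# Validate sudoers syntax before closing your session
visudo -c
```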
## Disable swap
```bash
swapoff -a
dnf remove zram-generator-defaults
systemctl mask systemd-zram-setup@zram0.service
```
## Network
1. Set the MTU to 9000

If your network supports it, use an MTU of 9000 to allow more data per packet between
servers. Note: for bridge interfaces you must set both the physical interface and the bridge
interface to 9000; setting one but not the other can cause connectivity problems.
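A sketch of applying this (interface names `eno1` and `bridge0` are assumptions; the `ip link` commands take effect immediately but don't persist, while the `nmcli` changes do):

```bash
# Immediate, non-persistent
ip link set dev eno1 mtu 9000
ip link set dev bridge0 mtu 9000
# Persistent via NetworkManager - set both members so they match
nmcli connection modify eno1 802-3-ethernet.mtu 9000
nmcli connection modify bridge0 802-3-ethernet.mtu 9000
nmcli connection up bridge0
# Verify with a non-fragmenting ping (8972 = 9000 minus 28 bytes of headers)
ping -M do -s 8972 <peer-ip>
```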
## Enable iSCSI
```bash
# Install the following system packages
dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath
# Enable multipathing
mpathconf --enable --with_multipathd y
# Ensure that iscsid and multipathd are running
systemctl enable --now iscsid multipathd
# Test that discovery works
iscsiadm -m discovery -t st -p democratic-csi-server.reeselink.com
# Remove any discovered node records - democratic-csi will repopulate this directory
rm -rf /var/lib/iscsi/nodes/
# Start and enable iscsi
systemctl enable --now iscsi
```
## Disable Firewalld
<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>
Disable firewalld. You could add rules for each service instead, but every time you exposed
a new port from a container you'd need to add another firewalld rule.
You can disable firewalld from the web interface.
## Set SELinux to Permissive
K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that, however. We'll set it to permissive, and you can re-enable enforcing once you've
added all the rules you need to keep your services running.
Set SELinux to permissive by editing `/etc/selinux/config`:

```conf
SELINUX=permissive
```
## Install K3S
<https://docs.k3s.io/installation/requirements>
We're going to be tweaking some installation parameters, so if you already have k3s
installed you can either uninstall it or skip these steps.
This installation disables Traefik, local-storage, Klipper (servicelb), and the bundled
CoreDNS. We'll replace them with our own components.
1. Generate a secure token for each node to use when connecting

   ```bash
   umask 077
   echo -n $(pwgen 16 4) | sed 's/ /-/g' > token.txt
   ```

2. Create the cluster

   ```bash
   export SECRET=$(cat token.txt)
   curl -sfL https://get.k3s.io | K3S_TOKEN=$SECRET sh -s - \
     --cluster-init \
     --flannel-backend=wireguard-native \
     --disable traefik \
     --disable local-storage \
     --disable servicelb \
     --disable coredns \
     --cluster-dns 10.43.0.10 \
     --tls-san kube.reeselink.com \
     --tls-san kube.reeseapps.com
   ```

3. Join each server node

   ```bash
   export SECRET=$(cat token.txt)
   curl -sfL https://get.k3s.io | K3S_TOKEN=$SECRET sh -s - server \
     --server https://node1.reeselink.com:6443 \
     --flannel-backend=wireguard-native \
     --disable traefik \
     --disable local-storage \
     --disable coredns \
     --disable servicelb \
     --cluster-dns 10.43.0.10 \
     --tls-san kube.reeselink.com
   ```
Now you can change the ownership of (and copy) the k3s.yaml file:

```bash
chown ducoterra /etc/rancher/k3s/k3s.yaml
scp /etc/rancher/k3s/k3s.yaml ~/.kube/config
```

Edit `~/.kube/config` and change 127.0.0.1 to containers.reeselink.com.
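Or, as a one-liner (a sketch, assuming the stock `127.0.0.1` server entry):

```bash
sed -i 's/127.0.0.1/containers.reeselink.com/' ~/.kube/config
```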
## Database Backups
<https://docs.k3s.io/cli/etcd-snapshot>
Note: you must back up `/var/lib/rancher/k3s/server/token`
and use its contents as the token when restoring a snapshot, as the data is encrypted with that token.
## Expanding Root Partition
```bash
dnf install cloud-utils-growpart
# Grow partition 3 on /dev/sda to fill the disk
growpart /dev/sda 3
# Extend the logical volume, then grow the XFS filesystem
lvextend -l +100%FREE /dev/mapper/fedora-root
xfs_growfs /dev/mapper/fedora-root
```
## Arping IP Address
```bash
# Allow binding/announcing an address not assigned to this host
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
# Send gratuitous ARP for 10.1.2.102 out bridge0 toward 10.1.0.1
arping -I bridge0 -U -s 10.1.2.102 10.1.0.1
```
## Optional Steps
### Certbot for Cockpit
During this process you'll pick one node to act as the manager for your other nodes.
Only that node needs a certificate; it will connect to the other nodes via SSH over your
local network.
Create an AWS user with Route53 access. This is required for certbot's Route53
validation.
```bash
aws iam create-user --user-name replicator
```
You'll also need a policy which allows the user to modify the selected hosted zones
(list them with `aws route53 list-hosted-zones`):
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": [
        "arn:aws:route53:::hostedzone/Z012820733346FJ0U4FUF",
        "arn:aws:route53:::hostedzone/Z0092652G7L97DSINN18",
        "arn:aws:route53:::hostedzone/Z04612891U5Q2JRHUZ11T"
      ]
    }
  ]
}
```
Attach the policy to the user:
```bash
aws iam attach-user-policy \
--user-name replicator \
--policy-arn arn:aws:iam::892236928704:policy/certbot-route53-reeseapps
```
Generate credentials:
```bash
aws iam create-access-key --user-name replicator
```
On the host machine:
```bash
mkdir ~/.aws
vim ~/.aws/config
```
```conf
[profile default]
region=us-east-2
```
```bash
vim ~/.aws/credentials
```
```conf
[default]
aws_access_key_id=<key>
aws_secret_access_key=<key>
```
Install the aws cli v2 on the manager node:
```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```
Test your credentials with `aws route53 list-hosted-zones`. You should see a list of your
hosted zones.
Now install certbot and acquire a cert using those credentials:
```bash
sudo dnf install certbot python3-certbot-dns-route53
sudo certbot certonly --dns-route53 -d containers.reeselink.com
sudo cp /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
sudo cp /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
```
Test the renewal process with:
```bash
sudo certbot renew --cert-name containers.reeselink.com --dry-run
```
Create a renewal script at `/usr/lib/scripts/certbot-renew.sh` and make it executable
(`chmod +x`):
```bash
#!/bin/bash
/usr/bin/certbot renew --cert-name containers.reeselink.com
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
```
Now create a systemd oneshot service to run the script:
/etc/systemd/system/certbot-renew.service
```conf
[Unit]
Description=Certbot Renewal
[Service]
Type=oneshot
ExecStart=/usr/lib/scripts/certbot-renew.sh
```
/etc/systemd/system/certbot-renew.timer
```conf
[Unit]
Description=Timer for Certbot Renewal
[Timer]
OnBootSec=300
OnUnitActiveSec=1w
[Install]
WantedBy=timers.target
```
Enable the timer:
```bash
systemctl enable --now certbot-renew.timer
```
Cockpit now has a valid TLS certificate that auto-renews!

k3s/README.md Normal file

@@ -0,0 +1,555 @@
# K3S
- [K3S](#k3s)
  - [Guide](#guide)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
  - [Kube Credentials](#kube-credentials)
  - [Storage](#storage)
  - [Coredns](#coredns)
  - [Metal LB](#metal-lb)
  - [External DNS](#external-dns)
    - [Credentials](#credentials)
    - [Annotation](#annotation)
  - [Nginx Ingress](#nginx-ingress)
  - [Cert Manager](#cert-manager)
  - [Test Minecraft Server](#test-minecraft-server)
  - [Automatic Updates](#automatic-updates)
  - [Manual Updates](#manual-updates)
  - [Create a Userspace](#create-a-userspace)
    - [Quickstart](#quickstart)
    - [Userspace](#userspace)
      - [Namespace](#namespace)
      - [Roles](#roles)
      - [Rolebinding](#rolebinding)
    - [Manual Steps](#manual-steps)
      - [Create a kubernetes certsigner pod](#create-a-kubernetes-certsigner-pod)
      - [Create the certsigner secret](#create-the-certsigner-secret)
      - [Set up the certsigner pod](#set-up-the-certsigner-pod)
      - [Generate a cert](#generate-a-cert)
      - [Create a new Userspace](#create-a-new-userspace)
      - [Sign the cert](#sign-the-cert)
      - [Add to the config](#add-to-the-config)
      - [Delete](#delete)
    - [Signing a user cert - detailed notes](#signing-a-user-cert---detailed-notes)
  - [Help](#help)
    - [Troubleshooting](#troubleshooting)
      - [Deleting a stuck namespace](#deleting-a-stuck-namespace)
      - [Fixing a bad volume](#fixing-a-bad-volume)
      - [Mounting an ix-application volume from truenas](#mounting-an-ix-application-volume-from-truenas)
      - [Mounting a volume](#mounting-a-volume)
  - [Database Backups](#database-backups)
  - [Uninstall](#uninstall)
## Guide
1. Configure Host
2. Install CoreDNS for inter-container discovery
3. Install Metal LB for load balancer IP address assignment
4. Install External DNS for load balancer IP and ingress DNS records
5. Install Nginx Ingress for http services
6. Install Cert Manager for automatic Let's Encrypt certificates for Ingress nginx
7. Install Longhorn storage for automatic PVC creation and management
8. Set up automatic database backups
## Disable Firewalld
<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>
Disable firewalld. You could add rules for each service instead, but every time you exposed
a new port from a container you'd need to add another firewalld rule.
You can disable firewalld from the web interface.
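If you'd rather not use the web interface:

```bash
systemctl disable --now firewalld
```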
## Set SELinux to Permissive
K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that, however. We'll set it to permissive, and you can re-enable enforcing once you've
added all the rules you need to keep your services running.
Set SELinux to permissive by editing `/etc/selinux/config`:

```conf
SELINUX=permissive
```
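Or apply both the config change and the runtime mode in one go:

```bash
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
setenforce 0
```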
## Install K3S (Single Node)
```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
  --flannel-ipv6-masq \
  --disable traefik \
  --disable servicelb \
  --disable coredns \
  --tls-san kube.reeselink.com \
  --cluster-cidr "10.42.0.0/16,fd02:c91e:56f4::/56" \
  --service-cidr "10.43.0.0/16,fd02:c91e:56f5::/112" \
  --cluster-dns fd02:c91e:56f5::10
```
## Kube Credentials
On the operator machine:
```bash
# Copy the kube config down
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config
# Edit the server to match the remote address.
```
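A sketch of pointing the config at the remote server (the hostname is an assumption; k3s writes `https://127.0.0.1:6443` by default):

```bash
sed -i 's#https://127.0.0.1:6443#https://kube.reeselink.com:6443#' ~/.kube/admin-kube-config
export KUBECONFIG=~/.kube/admin-kube-config
kubectl get nodes
```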
## Storage
1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit fstab to mount your drive to `/var/lib/rancher/k3s/storage` (see the example below)
3. `systemctl daemon-reload`
4. `mount -a`
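For step 2, a hypothetical fstab entry (device UUID and filesystem are assumptions; `nofail` keeps the node bootable if the disk is absent):

```conf
UUID=<your-disk-uuid>  /var/lib/rancher/k3s/storage  xfs  defaults,nofail  0  2
```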
## Coredns
1. Edit `coredns/values.yaml` to ensure the forward nameserver is correct.
```bash
# Install CoreDNS
helm upgrade --install \
--namespace=kube-system \
--values coredns/values.yaml \
coredns coredns/coredns
# Test DNS works
kubectl run -it --rm \
--restart=Never \
--image=infoblox/dnstools:latest \
dnstools
```
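Inside the dnstools shell, a quick sanity check (`kubernetes.default` is the standard in-cluster service name):

```bash
host kubernetes.default.svc.cluster.local
```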
## Metal LB
We'll be swapping K3S's default load balancer for MetalLB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load-balanced services. MetalLB does make things a little
more complicated (you'll need special annotations, see below), but it's otherwise a well-tested,
stable load balancing service with features to grow into.
MetalLB is pretty cool. It works via L2 advertisement or BGP. We won't be using BGP, so let's
focus on L2.
When we connect our nodes to a network we give them an IP address in a range, e.g. `192.168.122.20/24`.
This range represents all the available addresses the node could be assigned. Usually we assign
a single "static" IP address to our node and direct traffic to it by port forwarding from our
router. This is fine for single nodes, but what if we have a cluster of nodes and we don't want
our service to disappear just because one node is down for maintenance?
This is where L2 advertising comes in. MetalLB will assign a static IP address from a given
pool to an arbitrary node, then advertise that node's MAC address as the location for the
IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP
address, effectively moving the IP to another node. This isn't really "load balancing" but
"failover". Fortunately, that's exactly what we're looking for.
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
--namespace kube-system \
metallb/metallb
```
MetalLB doesn't know what IP addresses are available for it to allocate, so we'll have
to provide it with a list. The `metallb/addresspool.yaml` file has one IP address (we'll get to
IP address sharing in a second), which is an unassigned IP address not allocated to any of our
nodes. Note: if you have many public IPs which all point to the same router or virtual network,
you can list them all. We're only going to use one because we want to port forward from our router.
```bash
# create the metallb allocation pool
kubectl apply -f metallb/addresspool.yaml
```
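A minimal sketch of what `metallb/addresspool.yaml` might contain (pool name, namespace, and address are assumptions; the repo's actual file may differ):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external            # referenced by the address-pool annotation below
  namespace: kube-system
spec:
  addresses:
    - 192.168.122.240/32    # assumed spare address - must not belong to any node
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: external
  namespace: kube-system
spec:
  ipAddressPools:
    - external
```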
You'll need to annotate your service as follows if you want an external IP:
```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "external"
    # or
    # metallb.universe.tf/address-pool: "internal"
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
```
## External DNS
<https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md>
### Credentials
1. Generate credentials for the cluster
```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = $(echo $ACCESS_KEY_ID)
aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
EOF
kubectl create secret generic external-dns \
--namespace kube-system --from-file secrets/externaldns-credentials
kubectl apply -f external-dns/sa.yaml
kubectl apply -f external-dns/deploy.yaml
```
### Annotation
```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
```
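For example, on a LoadBalancer Service (the name and hostname here are assumptions), external-dns will create a record pointing at the service's external IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo
  annotations:
    external-dns.alpha.kubernetes.io/hostname: demo.reeseapps.com # assumed hostname
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 80
```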
## Nginx Ingress
Now we need an ingress solution (preferably with certs for HTTPS). We'll be using nginx since
it's a little more configurable than Traefik (though don't sell Traefik short; it's really
good, just finicky with use cases its developers haven't explicitly coded for).
```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
ingress-nginx \
ingress-nginx/ingress-nginx \
--values ingress-nginx/values.yaml \
--namespace kube-system
```
## Cert Manager
Install cert-manager
```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
cert-manager jetstack/cert-manager \
--namespace kube-system \
--set crds.enabled=true
```
Create the Let's Encrypt issuer (Route53 DNS):
```bash
kubectl apply -f certmanager/letsencrypt-issuer.yaml
```
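For reference, a minimal sketch of what `certmanager/letsencrypt-issuer.yaml` might contain (email and secret names are assumptions; the issuer name matches the `cert-manager.io/cluster-issuer: letsencrypt` annotation used in the test below):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@reeseapps.com        # assumed contact address
    privateKeySecretRef:
      name: letsencrypt-account-key   # assumed secret name
    solvers:
      - dns01:
          route53:
            region: us-east-2
            accessKeyID: <key id>     # assumed static credentials
            secretAccessKeySecretRef:
              name: route53-credentials
              key: secret-access-key
```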
You can test if your ingress is working with:
```bash
# Navigate to demo.reeseapps.com
kubectl apply -f k3s/tests/ingress-nginx-test.yaml
# Cleanup
kubectl delete -f k3s/tests/ingress-nginx-test.yaml
```
## Test Minecraft Server
```bash
helm upgrade --install minecraft ./helm/minecraft -n minecraft --create-namespace
helm upgrade --install minecraft1 ./helm/minecraft -n minecraft --create-namespace
```
## Automatic Updates
<https://docs.k3s.io/upgrades/automated>
```bash
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f k3s/upgrade-plan.yaml
```
## Manual Updates
<https://docs.k3s.io/upgrades/manual#manually-upgrade-k3s-using-the-binary>
```bash
sudo su -
wget https://github.com/k3s-io/k3s/releases/download/v1.28.3%2Bk3s1/k3s
systemctl stop k3s
chmod +x k3s
mv k3s /usr/local/bin/k3s
systemctl start k3s
```
## Create a Userspace
This creates a user, namespace, and permissions with a simple script.
### Quickstart
```bash
# Create certsigner pod for all other operations
./setup.sh <server_fqdn>
# Create a user, use "admin" to create an admin user
./upsertuser.sh <ssh_address> <server_fqdn (for kubectl)> <user>
# Remove a user, their namespace, and their access
./removeuserspace <server_fqdn> <user>
```
### Userspace
#### Namespace
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Name }}
```
#### Roles
```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: namespace-manager
  namespace: {{ .Release.Name }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - batch
      - autoscaling
      - networking.k8s.io
      - traefik.containo.us
      - rbac.authorization.k8s.io
      - metrics.k8s.io
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
      - pods/log
      - pods/attach
      - daemonsets
      - statefulsets
      - replicationcontrollers
      - horizontalpodautoscalers
      - services
      - ingresses
      - persistentvolumeclaims
      - jobs
      - cronjobs
      - secrets
      - configmaps
      - serviceaccounts
      - rolebindings
      - ingressroutes
      - middlewares
      - endpoints
    verbs:
      - "*"
  - apiGroups:
      - ""
      - metrics.k8s.io
      - rbac.authorization.k8s.io
    resources:
      - resourcequotas
      - roles
    verbs:
      - list
```
#### Rolebinding
```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: {{ .Release.Name }}
  name: namespace-manager
subjects:
  - kind: User
    name: {{ .Release.Name }}
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: namespace-manager
  apiGroup: rbac.authorization.k8s.io
```
### Manual Steps
#### Create a kubernetes certsigner pod
This keeps the client-ca certificate and key secret, and allows user certs to be signed and stored on the pod.
#### Create the certsigner secret
```bash
kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
```
#### Set up the certsigner pod
```bash
scp certsigner.yaml <server>:~/certsigner.yaml
kubectl apply -f certsigner.yaml
```
#### Generate a cert
```bash
export USER=<user>
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl genrsa -out /$USER/$USER.key 2048
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl req -new -key /$USER/$USER.key -out /$USER/$USER.csr -subj "/CN=$USER/O=user"
```
#### Create a new Userspace
```bash
helm template $USER ./namespace | kubectl --context admin apply -f -
```
#### Sign the cert
```bash
export USER=<user>
kubectl --context admin cp $(pwd)/users/$USER/$USER.csr certsigner:/certs/$USER.csr
kubectl --context admin exec -it certsigner -- openssl x509 -in /certs/$USER.csr -req -CA /keys/client-ca.crt -CAkey /keys/client-ca.key -CAcreateserial -out /certs/$USER.crt -days 5000
kubectl --context admin cp certsigner:/certs/$USER.crt $(pwd)/users/$USER/$USER.crt
```
#### Add to the config
```bash
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
```
#### Delete
```bash
kubectl config delete-context $USER
helm template $USER ./namespace | kubectl --context admin delete -f -
```
### Signing a user cert - detailed notes
NOTE: ca.crt and ca.key are in /var/lib/rancher/k3s/server/tls/client-ca.*
```bash
# First we create the credentials
# /CN=<username> - the user
# /O=<group> - the group
# Navigate to the user directory
export USER=<username>
cd $USER
# Generate a private key
openssl genrsa -out $USER.key 2048
# Check the key
# openssl pkey -in $USER.key -noout -text
# Generate and send me the CSR
# The "user" group is my default group
openssl req -new -key $USER.key -out $USER.csr -subj "/CN=$USER/O=user"
# Check the CSR
# openssl req -in $USER.csr -noout -text
# If satisfactory, sign the CSR
# Copy from /var/lib/rancher/k3s/server/tls/client-ca.crt and client-ca.key
openssl x509 -req -in $USER.csr -CA ../client-ca.crt -CAkey ../client-ca.key -CAcreateserial -out $USER.crt -days 5000
# Review the certificate
# openssl x509 -in $USER.crt -text -noout
# Send back the crt
# cp $USER.crt $USER.key ../server-ca.crt ~/.kube/
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
# Now we create the namespace, rolebindings, and resource quotas
# kubectl apply -f k8s/
# Add the cluster
# CA file can be found at https://3.14.3.100:6443/cacerts
# - cluster:
#     certificate-authority: server-ca.crt
#     server: https://3.14.3.100:6443
#   name: mainframe
# Test if everything worked
kubectl --context=$USER get pods
```
## Help
### Troubleshooting
#### Deleting a stuck namespace
```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json |jq '.spec = {"finalizers":[]}' >temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```
#### Fixing a bad volume
```bash
# -L zeroes the XFS log; use only when the log is corrupt (recent metadata changes may be lost)
xfs_repair -L /dev/sdg
```
#### Mounting an ix-application volume from truenas
```bash
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
#"unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```
#### Mounting a volume
```bash
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc
# unmount
umount /mnt/dcsi_pvc
```
## Database Backups
<https://docs.k3s.io/cli/etcd-snapshot>
Note: you must back up `/var/lib/rancher/k3s/server/token`
and use its contents as the token when restoring a snapshot, as the data is encrypted with that token.
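A sketch of the snapshot workflow (assumes the server runs embedded etcd, e.g. started with `--cluster-init`; these commands don't apply to the default SQLite backend):

```bash
# Take an on-demand snapshot (k3s also takes scheduled snapshots by default)
k3s etcd-snapshot save --name pre-upgrade
# List available snapshots
k3s etcd-snapshot ls
# Restore: stop k3s first, then reset the cluster from a snapshot
systemctl stop k3s
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/<snapshot-name>
```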
## Uninstall
```bash
/usr/local/bin/k3s-uninstall.sh
```


@@ -1,9 +0,0 @@
# Kubernetes Node Ansible
Updates /etc/hosts on each kubernetes node with the correct IP for democratic-csi-server.reeselink.com
## Update Hosts
```bash
ansible-playbook -i ansible/inventory.yaml k3s/hosts/update_hosts.yaml
```


@@ -1,4 +0,0 @@
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# 172.20.0.1 democratic-csi-server.reeselink.com
fd00:fd41:d0f1:1010::6 democratic-csi-server.reeselink.com


@@ -1,20 +0,0 @@
- name: Update /etc/hosts
  hosts: kubernetes
  become: true
  become_user: root
  become_method: sudo
  tasks:
    - name: Copy /etc/hosts
      ansible.builtin.copy:
        src: ./hosts
        dest: /etc/hosts
        owner: root
        group: root
        mode: '0644'
    # - name: Add IP address of all hosts to /etc/hosts
    #   lineinfile:
    #     dest: /etc/hosts
    #     regexp: '.*{{ item.value.address }}$'
    #     line: "{{ item.value.address }} {{ item.value.hostname }}"
    #     state: present
    #   loop: "{{ ip | dict2items }}"


@@ -0,0 +1,93 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-iscsi-enc0
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-iscsi-enc1
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-nfs-enc1
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-nfs-enc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: democratic-csi-test
spec:
  selector:
    matchLabels:
      app: democratic-csi-test
  template:
    metadata:
      labels:
        app: democratic-csi-test
    spec:
      containers:
        - image: debian
          command:
            - bash
            - -c
            - 'sleep infinity'
          name: democratic-csi-test
          volumeMounts:
            - mountPath: /zfs_iscsi_enc0
              name: zfs-iscsi-enc0
            - mountPath: /zfs_iscsi_enc1
              name: zfs-iscsi-enc1
            - mountPath: /zfs_nfs_enc1
              name: zfs-nfs-enc1
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "1Mi"
              cpu: "1m"
      restartPolicy: Always
      volumes:
        - name: zfs-iscsi-enc0
          persistentVolumeClaim:
            claimName: zfs-iscsi-enc0
        - name: zfs-iscsi-enc1
          persistentVolumeClaim:
            claimName: zfs-iscsi-enc1
        - name: zfs-nfs-enc1
          persistentVolumeClaim:
            claimName: zfs-nfs-enc1

k3s/tests/ffmpeg.yaml Normal file

@@ -0,0 +1,45 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  template:
    metadata:
      labels:
        app: ffmpeg
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ffmpeg
      containers:
        - name: ffmpeg
          image: linuxserver/ffmpeg:latest
          volumeMounts:
            - mountPath: /config
              name: data
          command:
            - /bin/bash
            - -c
            - 'sleep infinity'
          resources:
            limits:
              memory: "2Gi"
              cpu: "8"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ffmpeg
spec:
  storageClassName: zfs-iscsi-enc0-ext4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi


@@ -0,0 +1,66 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: ingress-nginx-demo
  ports:
    - name: ingress-nginx-demo
      protocol: TCP
      port: 80
      targetPort: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    external-dns.alpha.kubernetes.io/ttl: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-demo
                port:
                  number: 80
  tls:
    - hosts:
        - demo.reeseapps.com
      secretName: ingress-nginx-demo-tls-cert

k3s/tests/metallb-test.yaml Normal file

@@ -0,0 +1,101 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo-1
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo-1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo-1
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
          resources:
            requests:
              memory: "100Mi"
              cpu: "1m"
            limits:
              memory: "256Mi"
              cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-1
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "production"
    metallb.universe.tf/address-pool: production
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 8001
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo-2
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo-2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo-2
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
          resources:
            requests:
              memory: "100Mi"
              cpu: "1m"
            limits:
              memory: "256Mi"
              cpu: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-2
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "production"
    metallb.universe.tf/address-pool: production
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 8002
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-2


@@ -0,0 +1,49 @@
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "local-path"
        resources:
          requests:
            storage: 1Gi


@@ -16,27 +16,4 @@ spec:
   serviceAccountName: system-upgrade
   upgrade:
     image: rancher/k3s-upgrade
-    channel: https://update.k3s.io/v1-release/channels/latest
----
-# Agent plan
-apiVersion: upgrade.cattle.io/v1
-kind: Plan
-metadata:
-  name: agent-plan
-  namespace: system-upgrade
-spec:
-  concurrency: 1
-  cordon: true
-  nodeSelector:
-    matchExpressions:
-      - key: node-role.kubernetes.io/control-plane
-        operator: DoesNotExist
-  prepare:
-    args:
-      - prepare
-      - server-plan
-    image: rancher/k3s-upgrade
-  serviceAccountName: system-upgrade
-  upgrade:
-    image: rancher/k3s-upgrade
-    channel: https://update.k3s.io/v1-release/channels/latest
+    channel: https://update.k3s.io/v1-release/channels/latest