kube transfer to single-node host
README.md
@@ -6,16 +6,6 @@ A project to store homelab stuff.
- [Homelab](#homelab)
  - [Table of Contents](#table-of-contents)
  - [Platforms](#platforms)
    - [Reverse Proxy](#reverse-proxy)
    - [Service Mesh](#service-mesh)
    - [Data Storage](#data-storage)
  - [Adding a new host](#adding-a-new-host)
  - [Components](#components)
    - [CoreDNS](#coredns)
    - [Metal LB](#metal-lb)
    - [Nginx Ingress](#nginx-ingress)
    - [Storage](#storage)
  - [Apps](#apps)
    - [Dashboard](#dashboard)
    - [Nextcloud](#nextcloud)
@@ -32,450 +22,6 @@ A project to store homelab stuff.
    - [Iperf3](#iperf3)
    - [Wordpress](#wordpress)
    - [Grafana](#grafana)
  - [Upgrading](#upgrading)
    - [Nodes](#nodes)
    - [K3S](#k3s)
      - [Automated Upgrades](#automated-upgrades)
      - [Manual Upgrades](#manual-upgrades)
  - [Create a Userspace](#create-a-userspace)
    - [Quickstart](#quickstart)
    - [Userspace](#userspace)
      - [Namespace](#namespace)
      - [Roles](#roles)
      - [Rolebinding](#rolebinding)
    - [Manual Steps](#manual-steps)
      - [Create a kubernetes certsigner pod](#create-a-kubernetes-certsigner-pod)
      - [Create the certsigner secret](#create-the-certsigner-secret)
      - [Set up the certsigner pod](#set-up-the-certsigner-pod)
      - [Generate a cert](#generate-a-cert)
      - [Create a new Userspace](#create-a-new-userspace)
      - [Sign the cert](#sign-the-cert)
      - [Add to the config](#add-to-the-config)
      - [Delete](#delete)
    - [Signing a user cert - detailed notes](#signing-a-user-cert---detailed-notes)
  - [Help](#help)
    - [Troubleshooting](#troubleshooting)
## Platforms

### Reverse Proxy

We will use a reverse proxy / load balancer as our single point of entry for all services. This helps control inbound and outbound traffic and TLS certificate termination. This will be installed on bare metal machine(s) via ansible to ensure max performance and ipv6 compatibility. Each machine that acts as a reverse proxy will add its public ipv4 and ipv6 address(es) to the public domains used for external and internal access (*.reeseapps.com).

### Service Mesh

All devices will be connected via wireguard and will talk over the wireguard connection. See the wireguard folder for more details. It's advisable to create DNS records internally pointing to the wireguard-assigned IP addresses.
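A quick way to confirm a peer's internal record resolves to its wireguard address and that the tunnel is actually carrying traffic (a sketch; the `-wg` hostname and address follow the conventions used elsewhere in this repo):

```bash
# Resolve the internal record for a wireguard peer
dig +short AAAA driveripper-wg.reeselink.com

# Confirm the tunnel is up and passing traffic to that peer
sudo wg show
ping -c 3 driveripper-wg.reeselink.com
```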
### Data Storage

All servers will use ISCSI.

## Adding a new host

1. Set static IP in Unifi
2. Add to .ssh/config
3. Add to ansible inventory (`ansible/`)
4. Establish DNS records (`dns/`)
   1. Both `-wg` records and `reeselink` records
5. Create reverse proxy(s) (`nginx/`)
   1. (If removing) Delete any unused certs with `certbot delete`
   2. Run the ansible certbot and nginx role
6. Create service mesh (`mesh/`)
   1. Make sure to edit both `peers` and `ip` in `vars.yaml`
   2. If you need to delete unused peers, add them to the `peers.yaml` delete job
7. Install services
8. Set up port forwarding in Unifi if applicable, then run the quick checks below
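After walking through these steps, a few quick checks catch most mistakes (a sketch; `newhost` is a hypothetical inventory entry, and the hostnames follow the naming used in this repo):

```bash
# DNS: both the public record and the -wg record should resolve
dig +short A newhost.reeselink.com
dig +short AAAA newhost-wg.reeselink.com

# The .ssh/config entry works
ssh newhost 'hostname'

# The host answers through the ansible inventory
ansible newhost -m ping -i ansible/inventory.yaml
```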
## Components

### CoreDNS

We'll use our own coredns server so we can add custom hosts. This prevents the server from collapsing if the internet drops out (something that apparently happens quite frequently).

One key entry in the coredns config is `driveripper.reeselink.com` pointing to the internal IP `172.20.0.1`. This ensures democratic-csi can access the truenas server without internet or DNS.

```bash
helm repo add coredns https://coredns.github.io/helm
helm repo update
helm upgrade --install \
  --namespace=coredns \
  --create-namespace \
  --values coredns/coredns-values.yaml \
  coredns \
  coredns/coredns
```

You can test your dns config with

```bash
kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools
```
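From inside the dnstools pod you can query the custom entry directly to confirm the override is being served (a usage sketch; assumes the coredns service is the cluster's configured resolver):

```bash
# Run these at the dnstools prompt the previous command drops you into
host driveripper.reeselink.com        # should return 172.20.0.1
nslookup kubernetes.default.svc.cluster.local
```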
### Metal LB

We'll be swapping K3S's default load balancer with Metal LB for more flexibility. ServiceLB was struggling to allocate IP addresses for load balanced services. MetalLB does make things a little more complicated: you'll need special annotations (see below), but it's otherwise a well-tested, stable load balancing service with features to grow into.

MetalLB is pretty cool. It works via l2 advertisement or BGP. We won't be using BGP, so let's focus on l2.

When we connect our nodes to a network we give them an IP address range: ex. `192.168.122.20/24`. This range represents all the available addresses the node could be assigned. Usually we assign a single "static" IP address for our node and direct traffic to it by port forwarding from our router. This is fine for single nodes, but what if we have a cluster of nodes and we don't want our service to disappear just because one node is down for maintenance?

This is where l2 advertising comes in. MetalLB will assign a static IP address from a given pool to any arbitrary node, then advertise that node's mac address as the location for the IP. When that node goes down MetalLB simply advertises a new mac address for the same IP address, effectively moving the IP to another node. This isn't really "load balancing" but "failover". Fortunately, that's exactly what we're looking for.

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
  --namespace metallb \
  --create-namespace \
  metallb/metallb
```

MetalLB doesn't know what IP addresses are available for it to allocate, so we'll have to provide it with a list. The `metallb-addresspool.yaml` has one IP address (we'll get to IP address sharing in a second) which is an unassigned IP address not allocated to any of our nodes. Note that if you have many public IPs which all point to the same router or virtual network you can list them. We're only going to use one because we want to port forward from our router.

```bash
# create the metallb allocation pool
kubectl apply -f metallb-addresspool.yaml
```
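For reference, `metallb-addresspool.yaml` is an `IPAddressPool` custom resource. A minimal sketch, assuming the pool is named `production` (to match the annotations below) and using a placeholder address in the example range above:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: production
  namespace: metallb
spec:
  addresses:
    - 192.168.122.240/32   # one unassigned IP, not allocated to any node (placeholder)
EOF
```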
Now we need to create the l2 advertisement. This is handled with a custom resource definition which specifies that all nodes listed are eligible to be assigned, and advertise, our "production" IP addresses.

```bash
kubectl apply -f metallb-l2advertisement.yaml
```
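A minimal sketch of what `metallb-l2advertisement.yaml` might contain, assuming it simply advertises the `production` pool from any node:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: production
  namespace: metallb
spec:
  ipAddressPools:
    - production
EOF
```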
We now have a problem. We only have a single production IP address and MetalLB really doesn't want to share it. In order to allow services to allocate the same IP address (on different ports) we'll need to annotate them as such. MetalLB will allow services to allocate the same IP if:

- They both have the same sharing key.
- They request the use of different ports (e.g. tcp/80 for one and tcp/443 for the other).
- They both use the Cluster external traffic policy, or they both point to the exact same set of pods (i.e. the pod selectors are identical).

See <https://metallb.org/usage/#ip-address-sharing> for more info.

You'll need to annotate your service as follows if you want an external IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
    metallb.universe.tf/address-pool: "production"
    metallb.universe.tf/allow-shared-ip: "production"
spec:
  externalTrafficPolicy: Cluster
  selector:
    app: {{ .Release.Name }}
  ports:
    - port: {{ .Values.ports.containerPort }}
      targetPort: {{ .Values.ports.targetPort }}
      name: {{ .Release.Name }}
  type: LoadBalancer
```
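Once a couple of services carry these annotations, you can confirm they were all handed the same shared external IP with a quick listing:

```bash
kubectl get svc --all-namespaces -o wide | grep LoadBalancer
```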
### Nginx Ingress

Now we need an ingress solution (preferably with certs for https). We'll be using nginx since it's a little bit more configurable than traefik (though don't sell traefik short, it's really good, just finicky when you have use cases they haven't explicitly coded for).

1. Install nginx

   ```bash
   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
   helm repo update
   helm upgrade --install \
     ingress-nginx \
     ingress-nginx/ingress-nginx \
     --values ingress-nginx-values.yaml \
     --namespace ingress-nginx \
     --create-namespace
   ```

2. Install cert-manager

   ```bash
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm upgrade --install \
     cert-manager jetstack/cert-manager \
     --namespace cert-manager \
     --create-namespace \
     --version v1.12.4 \
     --set installCRDs=true
   ```

3. Create the let's encrypt issuer

   ```bash
   kubectl apply -f letsencrypt-issuer.yaml
   ```

You can test if your ingress is working with `kubectl apply -f ingress-nginx-test.yaml`, then navigate to ingress-nginx-test.reeseapps.com.
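For an actual app these pieces come together in an Ingress that names the nginx class and asks cert-manager for a certificate. A sketch only: it assumes the issuer created by `letsencrypt-issuer.yaml` is a ClusterIssuer named `letsencrypt`, and `myapp` is a hypothetical service:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp                                   # hypothetical app
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt # assumed issuer name
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - myapp.reeseapps.com
      secretName: myapp-tls
  rules:
    - host: myapp.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: myapp
                port:
                  number: 80
EOF
```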
### Storage

<https://github.com/democratic-csi/democratic-csi/blob/master/examples/freenas-nfs.yaml>

Use nfsv4. It works without rpcbind, which makes it lovely.

We'll be installing democratic-csi for our volume manager. Specifically, we'll be installing the freenas-api-nfs driver. All configuration is stored in truenas-nfs.yaml.

The nfs driver will provision an nfs store owned by user 3000 (kube). You may have to create that user on Truenas. The nfs share created will be world-read/write, so any user can write to it. Users that write to the share will have their uid/gid mapped to Truenas, so if user 33 writes a file to the nfs share it will show up as owned by user 33 on Truenas.

The iscsi driver will require a portal ID. This is NOT what is reflected in the UI. The most reliable way (seriously) to get the real ID is to open the network monitor in the browser, reload truenas, find and click on the websocket connection, create the portal, and click on the server response. It'll look something like:

```json
{"msg": "added", "collection": "iscsi.portal.query", "id": 7, "fields": {"id": 7, "tag": 1, "comment": "democratic-csi", "listen": [{"ip": "172.20.0.1", "port": 3260}], "discovery_authmethod": "NONE", "discovery_authgroup": null}}
```

The initiator group IDs seem to line up.

It's good practice to have separate hostnames for your share export and your truenas server. This way you can have a direct link without worrying about changing the user-facing hostname. For example: your truenas server might be driveripper.reeselink.com and your kube server might be containers.reeselink.com. You should also have a democratic-csi-server.reeselink.com and a democratic-csi-client-1.reeselink.com, which might be on 172.20.0.1 and 172.20.0.2.

<https://github.com/democratic-csi/democratic-csi>

ISCSI requires a bit of server config before proceeding. Run the following on the kubernetes node.

```bash
# Install the following system packages
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath

# Enable multipathing
sudo mpathconf --enable --with_multipathd y

# Ensure that iscsid and multipathd are running
sudo systemctl enable iscsid multipathd
sudo systemctl start iscsid multipathd

# Start and enable iscsi
sudo systemctl enable iscsi
sudo systemctl start iscsi
```
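Before moving on it's worth confirming the initiator side is actually up (a quick check; the initiator name is whatever the package generated on this node):

```bash
# Services should report "active"
systemctl is-active iscsid multipathd

# The initiator name this node will present to Truenas
cat /etc/iscsi/initiatorname.iscsi

# Multipath is configured
sudo multipath -ll
```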
Now you can install the drivers. Note we won't be using the API drivers for Truenas Scale. These have stability issues that happen intermittently (especially when deleting volumes... as in it won't delete volumes). As of 6/13/23 I don't recommend it.

Note: you can switch between driver types after install, so there's no risk in using the stable driver first and then experimenting with the API driver.

Before we begin you'll need to create a new "democratic" user on Truenas. First you should create an SSH key for the user:

```bash
ssh-keygen -t rsa -N '' -f secrets/democratic_rsa.prod
chmod 600 secrets/democratic_rsa.prod
```

Now in the web console, use the following options:

| Field                                   | Value                                          |
|-----------------------------------------|------------------------------------------------|
| Full Name                               | democratic                                     |
| Username                                | democratic                                     |
| Email                                   | blank                                          |
| Disable Password                        | True                                           |
| Create New Primary Group                | True                                           |
| Auxiliary Groups                        | None                                           |
| Create Home Directory                   | True                                           |
| Authorized Keys                         | paste the generated ".pub" key here            |
| Shell                                   | bash                                           |
| Allowed sudo commands                   | /usr/sbin/zfs /usr/sbin/zpool /usr/sbin/chroot |
| Allowed sudo commands with no password  | /usr/sbin/zfs /usr/sbin/zpool /usr/sbin/chroot |
| Samba Authentication                    | False                                          |

Save the user and verify SSH works with

```bash
ssh -i secrets/democratic_rsa.prod democratic@driveripper.reeselink.com
# test forbidden sudo command, should require a password
sudo ls
# test allowed sudo command
sudo zfs list
```

Next you'll need an API key. Save it to a file called `secrets/truenas-api-key`:

```bash
echo 'api-key-here' > secrets/truenas-api-key
```

Now we can proceed with the install

```bash
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update

# enc0 storage (iscsi)
helm upgrade \
  --install \
  --values democratic-csi/truenas-iscsi-enc0.yaml \
  --namespace democratic-csi \
  --create-namespace \
  --set driver.config.httpConnection.apiKey=$(cat secrets/truenas-api-key) \
  zfs-iscsi-enc0 democratic-csi/democratic-csi

# enc1 storage (iscsi)
helm upgrade \
  --install \
  --values democratic-csi/truenas-iscsi-enc1.yaml \
  --namespace democratic-csi \
  --create-namespace \
  --set driver.config.httpConnection.apiKey=$(cat secrets/truenas-api-key) \
  zfs-iscsi-enc1 democratic-csi/democratic-csi

# enc1 storage (nfs)
helm upgrade \
  --install \
  --values democratic-csi/truenas-nfs-enc1.yaml \
  --namespace democratic-csi \
  --create-namespace \
  --set driver.config.httpConnection.apiKey=$(cat secrets/truenas-api-key) \
  zfs-nfs-enc1 democratic-csi/democratic-csi
```

You can test that things worked with:

```bash
kubectl apply -f tests/democratic-csi-pvc-test.yaml
kubectl delete -f tests/democratic-csi-pvc-test.yaml
```
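For reference, the PVC test is just a small claim against one of the storage classes created by the charts above. A sketch of the idea; the storage class name is an assumption here (the real names come from the values files):

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: democratic-csi-pvc-test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: zfs-iscsi-enc0   # assumed name; set by the chart values
  resources:
    requests:
      storage: 1Gi
EOF

# The claim should reach the Bound state
kubectl get pvc democratic-csi-pvc-test
```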
And run some performance tests. You can use network and disk monitoring tools to see performance during the tests.

```bash
# Big writes
count=0
start_time=$EPOCHREALTIME
while true; do
  dd if=/dev/zero of=test.dat bs=1M count=100 1> /dev/null 2> /dev/null
  current=$(echo "$EPOCHREALTIME - $start_time" | bc)
  current_gt_one=$(echo "$current > 10" | bc)
  if [ $current_gt_one -eq 0 ]; then
    count=$((count + 1))
    echo -e '\e[1A\e[K'$count
  else
    break
  fi
done

# Lots of little writes
count=0
start_time=$EPOCHREALTIME
while true; do
  dd if=/dev/zero of=test.dat bs=1K count=1 1> /dev/null 2> /dev/null
  current=$(echo "$EPOCHREALTIME - $start_time" | bc)
  current_gt_one=$(echo "$current > 1" | bc)
  if [ $current_gt_one -eq 0 ]; then
    count=$((count + 1))
    echo -e '\e[1A\e[K'$count
  else
    break
  fi
done
```

Because iscsi will mount block devices, troubleshooting mounting issues, data corruption, and exploring pvc contents must happen on the client device. Here are a few cheat-sheet commands to make things easier:

Note with iscsi login: set the node.session.auth.username NOT node.session.auth.username_in

```bash
# discover all targets on the server
iscsiadm --mode discovery \
  --type sendtargets \
  --portal democratic-csi-server.reeselink.com:3260

export ISCSI_TARGET=

# delete the discovered targets
iscsiadm --mode discovery \
  --portal democratic-csi-server.reeselink.com:3260 \
  --op delete

# view discovered targets
iscsiadm --mode node

# view current session
iscsiadm --mode session

# prevent automatic login
iscsiadm --mode node \
  --portal democratic-csi-server.reeselink.com:3260 \
  --op update \
  --name node.startup \
  --value manual

# connect a target
iscsiadm --mode node \
  --login \
  --portal democratic-csi-server.reeselink.com:3260 \
  --targetname $ISCSI_TARGET

# disconnect a target
# you might have to do this if pods can't mount their volumes.
# manually connecting a target tends to make it unavailable for the pods since there
# will be two targets with the same name.
iscsiadm --mode node \
  --logout \
  --portal democratic-csi-server.reeselink.com:3260 \
  --targetname $ISCSI_TARGET

# view all connected disks
ls /dev/zvol/

# mount a disk
mount -t xfs /dev/zvol/... /mnt/iscsi

# emergency - by-path isn't available
# (look for "Attached scsi disk")
iscsiadm --mode session -P 3 | grep Target -A 2 -B 2
```

## Apps
@@ -588,9 +134,11 @@ especially since Gitea tends to change how `values.yaml` is structured.
First we need to create the gitea admin secret

```bash
kubectl create namespace gitea
kubectl create secret generic gitea-admin-secret \
  -n gitea \
  --from-literal=username='gitea-admin' \
  --from-literal=password="$(pwgen -c -s 64 | head -n 1)" \
  --from-literal=email=''
```
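To sanity-check the generated credentials before installing the chart (the password comes from `pwgen`, so read it back from the secret rather than guessing):

```bash
kubectl -n gitea get secret gitea-admin-secret -o jsonpath='{.data.password}' | base64 -d; echo
```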
@@ -735,281 +283,3 @@ Grafana has a kubernetes yaml they prefer you use. See `kubectl/grafana.yaml`.
```bash
kubectl apply -f kubectl/grafana.yaml
```

## Upgrading

### Nodes

```bash
kubectl drain node1 --ignore-daemonsets --delete-emptydir-data
watch -n 3 kubectl get pod --all-namespaces -w
```
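Once the node has been patched and rebooted, remember to let pods schedule on it again:

```bash
kubectl uncordon node1
```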
### K3S

#### Automated Upgrades

<https://docs.k3s.io/upgrades/automated>

```bash
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f upgrade-plan.yaml
kubectl get pod -w -n system-upgrade
```
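`upgrade-plan.yaml` holds one or more `Plan` resources for the system-upgrade-controller. A minimal sketch for a single-server cluster, assuming you pin the version (here the same release used in the manual upgrade below) rather than follow a channel:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: Exists
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  version: v1.28.3+k3s1
EOF
```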
#### Manual Upgrades

<https://docs.k3s.io/upgrades/manual#manually-upgrade-k3s-using-the-binary>

```bash
sudo su -
wget https://github.com/k3s-io/k3s/releases/download/v1.28.3%2Bk3s1/k3s
systemctl stop k3s
chmod +x k3s
mv k3s /usr/local/bin/k3s
systemctl start k3s
```

## Create a Userspace

This creates a user, namespace, and permissions with a simple script.

### Quickstart

```bash
# Create certsigner pod for all other operations
./setup.sh <server_fqdn>

# Create a user, use "admin" to create an admin user
./upsertuser.sh <ssh_address> <server_fqdn (for kubectl)> <user>

# Remove a user, their namespace, and their access
./removeuserspace <server_fqdn> <user>
```

### Userspace

#### Namespace

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Name }}
```

#### Roles

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: namespace-manager
  namespace: {{ .Release.Name }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - batch
      - autoscaling
      - networking.k8s.io
      - traefik.containo.us
      - rbac.authorization.k8s.io
      - metrics.k8s.io
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
      - pods/log
      - pods/attach
      - daemonsets
      - statefulsets
      - replicationcontrollers
      - horizontalpodautoscalers
      - services
      - ingresses
      - persistentvolumeclaims
      - jobs
      - cronjobs
      - secrets
      - configmaps
      - serviceaccounts
      - rolebindings
      - ingressroutes
      - middlewares
      - endpoints
    verbs:
      - "*"
  - apiGroups:
      - ""
      - metrics.k8s.io
      - rbac.authorization.k8s.io
    resources:
      - resourcequotas
      - roles
    verbs:
      - list
```

#### Rolebinding

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: {{ .Release.Name }}
  name: namespace-manager
subjects:
  - kind: User
    name: {{ .Release.Name }}
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: namespace-manager
  apiGroup: ""
```

### Manual Steps

#### Create a kubernetes certsigner pod

This keeps the client-ca crt and key secret and allows the cert to be signed and stored on the pod.

#### Create the certsigner secret

```bash
kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
```

#### Set up the certsigner pod

```bash
scp certsigner.yaml <server>:~/certsigner.yaml
kubectl apply -f certsigner.yaml
```

#### Generate a cert

```bash
export USER=<user>
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl genrsa -out /$USER/$USER.key 2048
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl req -new -key /$USER/$USER.key -out /$USER/$USER.csr -subj "/CN=$USER/O=user"
```

#### Create a new Userspace

```bash
helm template $USER ./namespace | kubectl --context admin apply -f -
```

#### Sign the cert

```bash
export USER=<user>
kubectl --context admin cp $(pwd)/users/$USER/$USER.csr certsigner:/certs/$USER.csr
kubectl --context admin exec -it certsigner -- openssl x509 -in /certs/$USER.csr -req -CA /keys/client-ca.crt -CAkey /keys/client-ca.key -CAcreateserial -out /certs/$USER.crt -days 5000
kubectl --context admin cp certsigner:/certs/$USER.crt $(pwd)/users/$USER/$USER.crt
```

#### Add to the config

```bash
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
```

#### Delete

```bash
kubectl config delete-context $USER
helm template $USER ./namespace | kubectl --context admin delete -f -
```

### Signing a user cert - detailed notes

NOTE: ca.crt and ca.key are in /var/lib/rancher/k3s/server/tls/client-ca.*

```bash
# First we create the credentials
# /CN=<username> - the user
# /O=<group> - the group

# Navigate to the user directory
export USER=<username>
cd $USER

# Generate a private key
openssl genrsa -out $USER.key 2048
# Check the key
# openssl pkey -in ca.key -noout -text
# Generate and send me the CSR
# The "user" group is my default group
openssl req -new -key $USER.key -out $USER.csr -subj "/CN=$USER/O=user"

# Check the CSR
# openssl req -in $USER.csr -noout -text
# If satisfactory, sign the CSR
# Copy from /var/lib/rancher/k3s/server/tls/client-ca.crt and client-ca.key
openssl x509 -req -in $USER.csr -CA ../client-ca.crt -CAkey ../client-ca.key -CAcreateserial -out $USER.crt -days 5000
# Review the certificate
# openssl x509 -in $USER.crt -text -noout

# Send back the crt
# cp $USER.crt $USER.key ../server-ca.crt ~/.kube/
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER

# Now we create the namespace, rolebindings, and resource quotas
# kubectl apply -f k8s/

# Add the cluster to the kubeconfig
# CA file can be found at https://3.14.3.100:6443/cacerts
# - cluster:
#     certificate-authority: server-ca.crt
#     server: https://3.14.3.100:6443
#   name: mainframe

# Test if everything worked
kubectl --context=$USER-context get pods
```

## Help

### Troubleshooting

Deleting a stuck namespace

```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```

Fixing a bad volume

```bash
xfs_repair -L /dev/sdg
```

Mounting an ix-application volume from truenas

```bash
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b

# "unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```

Mounting a volume

```bash
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc

# unmount
umount /mnt/dcsi_pvc
```
@@ -7,5 +7,5 @@ pacman -S ansible ansible-core python-kubernetes
## Setup

```bash
ansible fedora -m ping -i ansible/inventory.yaml
```
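To see how the inventory groups below resolve before running anything against them:

```bash
ansible-inventory -i ansible/inventory.yaml --graph
```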
@@ -1,18 +1,21 @@
arch:
  hosts:
    gamebox:

fedora:
  hosts:
    nextcloud:
    kube:

ubuntu:
  hosts:
    unifi-external:

raspbian:
  hosts:
    pivpn:

unmanaged:
  hosts:
    driveripper:
    homeassistant:
@@ -1,21 +1,43 @@
# AWS Credentials

## Aws Policies

Example Policy:

secrets/aws/policies/route53_reeselink.json

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "route53:ListHostedZones",
                "route53:GetChange"
            ],
            "Resource": [
                "*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "route53:ChangeResourceRecordSets",
                "route53:ListResourceRecordSets"
            ],
            "Resource": [
                "arn:aws:route53:::hostedzone/<zone_id>"
            ]
        }
    ]
}
```

```bash
# Allow updating route53 records for reeselink.com
aws iam create-policy --policy-name update-reeselink --policy-document file://secrets/aws/policies/route53_reeselink.json

# Allow updating route53 records for reeseapps.com
aws iam create-policy --policy-name update-reeseapps --policy-document file://secrets/aws/policies/route53_reeseapps.json

# Send access keys to all servers
ansible-playbook \
  -i ansible/inventory.yaml aws/distribute_aws_creds.yaml \
  --extra-vars "access_key_id=$AWS_ACCESS_KEY_ID secret_access_key=$AWS_SECRET_ACCESS_KEY"

# List existing access keys
aws iam list-access-keys --user-name route53 --output json
```
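Creating a policy doesn't grant anything by itself; it still has to be attached to the IAM user the nodes authenticate as. A sketch, assuming the `route53` user referenced above and a placeholder account id:

```bash
aws iam attach-user-policy \
  --user-name route53 \
  --policy-arn arn:aws:iam::<account_id>:policy/update-reeselink
```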
@@ -1,2 +0,0 @@
[profile default]
region={{ region }}
@@ -1,3 +0,0 @@
[default]
aws_access_key_id={{ access_key_id }}
aws_secret_access_key={{ secret_access_key }}
@@ -1,27 +0,0 @@
- name: Update nginx stream configuration
  hosts: colors:kubernetes
  become: true
  become_user: root
  become_method: sudo
  vars_files:
    - vars.yaml
  tasks:
    - name: Create .aws dir
      ansible.builtin.file:
        path: /root/.aws
        state: directory
        mode: '0700'
    - name: Copy credentials
      template:
        src: creds_template
        dest: /root/.aws/credentials
        owner: root
        group: root
        mode: '0600'
    - name: Copy config
      template:
        src: config_template
        dest: /root/.aws/config
        owner: root
        group: root
        mode: '0600'
@@ -1,35 +0,0 @@
- name: Update nginx stream configuration
  hosts: colors:kubernetes
  become: true
  become_user: root
  become_method: sudo
  vars_files:
    - vars.yaml
  tasks:
    - name: Ensure curl, unzip installed
      ansible.builtin.dnf:
        name:
          - curl
          - unzip
        state: present
    - name: Download aws cli zip
      ansible.builtin.get_url:
        url: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip
        dest: /tmp/awscliv2.zip
        mode: '0600'
    - name: Remove /tmp/aws before unzipping
      file:
        path: /tmp/aws
        state: absent
    - name: Unzip aws cli
      ansible.builtin.unarchive:
        src: /tmp/awscliv2.zip
        dest: /tmp
        remote_src: yes
    - name: Run aws installer
      ansible.builtin.shell: /tmp/aws/install
      register: result
      ignore_errors: true
    - name: Run aws updater
      ansible.builtin.shell: /tmp/aws/install -u
      when: result is failed
@@ -1,3 +0,0 @@
region: us-east-2
access_key_id: ""
secret_access_key: ""
@@ -1,39 +0,0 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: certsigner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: certsigner
  template:
    metadata:
      labels:
        app: certsigner
    spec:
      containers:
        - name: certsigner
          image: python:latest
          command: ["cat"]
          tty: true
          resources:
            requests:
              memory: 1Mi
              cpu: 1m
            limits:
              memory: 100Mi
              cpu: 100m
          volumeMounts:
            - mountPath: /keys
              name: keys
            - mountPath: /certs
              name: certs
      volumes:
        - name: keys
          secret:
            secretName: certsigner
        - name: certs
          emptyDir: {}
      restartPolicy: Always
@@ -42,23 +42,29 @@ prometheus:
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "9153"
    selector: {}
  monitor:
    enabled: false
    additionalLabels: {}
    namespace: ""
    interval: ""
    selector: {}

service:
  clusterIP: fd02:c91e:56f5::10
  clusterIPs:
    - fd02:c91e:56f5::10
    - 10.43.0.10
  # loadBalancerIP: ""
  # externalIPs: []
  # externalTrafficPolicy: ""
  ipFamilyPolicy: "RequireDualStack"
  # The name of the Service
  # If not set, a name is generated using the fullname template
  name: ""
  annotations: {}
  # Pod selector
  selector: {}

serviceAccount:
  create: false
@@ -120,7 +126,7 @@ servers:
    - name: prometheus
      parameters: 0.0.0.0:9153
    - name: forward
      parameters: . 2606:4700:4700::1111
    - name: cache
      parameters: 30
    - name: loop
@@ -217,14 +223,15 @@ tolerations: []
podDisruptionBudget: {}

# configure custom zone files as per https://coredns.io/2017/05/08/custom-dns-entries-for-kubernetes/
zoneFiles: []
#  - filename: example.db
#    domain: example.com
#    contents: |
#      example.com.   IN SOA sns.dns.icann.com. noc.dns.icann.com. 2015082541 7200 3600 1209600 3600
#      example.com.   IN NS  b.iana-servers.net.
#      example.com.   IN NS  a.iana-servers.net.
#      example.com.   IN A   192.168.99.102
#      *.example.com. IN A   192.168.99.102

# optional array of sidecar containers
extraContainers: []
@@ -376,3 +383,5 @@ deployment:
  name: ""
  ## Annotations for the coredns deployment
  annotations: {}
  ## Pod selector
  selector: {}
@@ -1,6 +1,14 @@
# DDNS Service

Since we occasionally need an ipv4 address, we'll make one.

This creates and keeps updated `ipv4.reeselink.com`.

This requires the aws cli to be installed on each node with credentials that can modify records in route53.

<https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html>

```bash
ansible-playbook -i ansible/inventory.yaml ddns/install_ddns.yaml
```
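Once the playbook has run on a node, the record should track that node's current public address:

```bash
# Compare the published record against the address the node sees
dig +short A ipv4.reeselink.com
curl -4 -s ifconfig.me
```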
@@ -1,5 +1,5 @@
[Unit]
Description=Updates the {{ fqdn }} record with the current public IPV4 address

[Service]
ExecStart=/usr/local/scripts/ddns.sh
@@ -5,7 +5,8 @@ PUBLIC_IP=$(curl -4 ifconfig.me)
# Update *.{{ fqdn }} and {{ fqdn }}
cat /etc/ddns/record_template.json \
  | jq '.Changes[0].ResourceRecordSet.ResourceRecords[0].Value = "'$PUBLIC_IP'"' \
  > /etc/ddns/record.json
# aws cli to update a record
aws route53 change-resource-record-sets --hosted-zone-id {{ hosted_zone_id }} --change-batch file:///etc/ddns/record.json

PUBLIC_IPV6=$(dig -t aaaa +short myip.opendns.com @resolver1.opendns.com)
@@ -1,19 +1,6 @@
{
    "Comment": "Update Public IPV4 Address",
    "Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "*.{{ fqdn }}.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": ""
                    }
                ]
            }
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
@@ -1,2 +1,2 @@
hosted_zone_id: Z0092652G7L97DSINN18
fqdn: ipv4.reeselink.com
@@ -1,39 +1,34 @@
# Network Management

- [Network Management](#network-management)
  - [Reeseapps vs Reeselink](#reeseapps-vs-reeselink)
  - [DNS Caching](#dns-caching)
  - [Route53](#route53)
  - [Reeselink Addresses](#reeselink-addresses)
  - [Reeseapps Addresses](#reeseapps-addresses)
  - [Duconet WG Addresses](#duconet-wg-addresses)

## Reeseapps vs Reeselink

.reeseapps domains are for hosted services that do something. They are usually accessible via the web and are usually public. Web apps, Minecraft servers, other game servers, etc. are all reeseapps domains.

.reeselink domains are for linking machines together. They are for SSH, Cockpit, NFS, SMB, ISCSI, and other machine-to-machine connections. They can be public or private and are mostly for convenience.

## DNS Caching

Use unifi to cache important DNS records. The following are critical:

- `driveripper-wg.reeselink.com` `Host (AAAA)` `fd00:fd41:d0f1:1010::6`
- `democratic-csi-server.reeselink.com` `Host (AAAA)` `fd00:fd41:d0f1:1010::6`
- `driveripper.reeseapps.com` `Host (A)` `10.1.200.253`
- `driveripper.reeseapps.com` `Host (A)` `10.1.203.197`
- `driveripper.reeseapps.com` `Host (AAAA)` `2600:1700:1e6c:a81f:153e:9c35:8ff3:fa3`
- `driveripper.reeseapps.com` `Host (AAAA)` `2600:1700:1e6c:a81f:793d:7abf:e94d:9bc4`

## Route53

```bash
aws route53 list-hosted-zones

# reeselink
aws route53 change-resource-record-sets --hosted-zone-id Z0092652G7L97DSINN18 --change-batch file://

# reeseapps
aws route53 change-resource-record-sets --hosted-zone-id Z012820733346FJ0U4FUF --change-batch file://
```

## Reeselink Addresses

These are convenience dns records so you don't have to remember every ip address. IPV6 and IPV4.

```bash
aws route53 change-resource-record-sets --hosted-zone-id Z0092652G7L97DSINN18 --change-batch file://dns/reeselink.json
```
dns/ipv4.txt
@@ -1,22 +0,0 @@
gamebox.reeselink.com
10.1.235.45
driveripper.reeselink.com
10.1.2.10
yellow.reeselink.com
10.1.203.197
node1.reeselink.com
10.1.2.13
node2.reeselink.com
10.1.2.14
node3.reeselink.com
10.1.2.15
homeassistant.reeselink.com
10.1.27.89
nextcloud-aio.reeselink.com
10.1.175.237
unifi-external.reeselink.com
10.1.241.139
e3s1plus.reeselink.com
10.1.224.78
cr10se.reeselink.com
10.2.165.70
dns/ipv6.txt
@@ -1,34 +0,0 @@
gamebox.reeselink.com
2600:1700:1e6c:a81f:5d69:2d:101a:7aef
driveripper.reeselink.com
2600:1700:1e6c:a81f:94bb:b8ff:fe9f:1c63
yellow.reeselink.com
2600:1700:1e6c:a81f:793d:7abf:e94d:9bc4
node1.reeselink.com
2600:1700:1e6c:a81f:2a0:98ff:fe6c:eca7
node2.reeselink.com
2600:1700:1e6c:a81f:2a0:98ff:fe47:6498
node3.reeselink.com
2600:1700:1e6c:a81f:2a0:98ff:fe0f:aba3
homeassistant.reeselink.com
2600:1700:1e6c:a81f:19:a563:8600:2db6
nextcloud-aio.reeselink.com
2600:1700:1e6c:a81f:5054:ff:fe03:880
unifi-external.reeselink.com
2600:1700:1e6c:a81f:5054:ff:fea0:200c
e3s1plus.reeselink.com
2600:1700:1e6c:a81f:19a4:37de:9672:1f76
yellow-wg.reeselink.com
fd00:fd41:d0f1:1010::1
node1-wg.reeselink.com
fd00:fd41:d0f1:1010::3
node2-wg.reeselink.com
fd00:fd41:d0f1:1010::4
node3-wg.reeselink.com
fd00:fd41:d0f1:1010::5
driveripper-wg.reeselink.com
fd00:fd41:d0f1:1010::6
unifi-external-wg.reeselink.com
fd00:fd41:d0f1:1010::7
nextcloud-aio-wg.reeselink.com
fd00:fd41:d0f1:1010::8
@@ -4,12 +4,25 @@
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "nextcloud.reeseapps.com",
                "Type": "AAAA",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "2600:1700:1e6c:a81f:2a0:98ff:fe14:1bbd"
                    }
                ]
            }
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "homeassistant.reeseapps.com",
                "Type": "AAAA",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "2600:1700:1e6c:a81f:42:acff:fe1e:2101"
                    }
                ]
            }
@@ -4,12 +4,12 @@
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "kube.reeselink.com",
                "Type": "AAAA",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "2600:1700:1e6c:a81f:2a0:98ff:fe39:9b5"
                    }
                ]
            }
@@ -17,142 +17,12 @@
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "nextcloud.reeselink.com",
                "Type": "AAAA",
                "TTL": 300,
                "ResourceRecords": [
                    {
                        "Value": "2600:1700:1e6c:a81f:2a0:98ff:fe14:1bbd"
                    }
                ]
            }
@@ -165,202 +35,7 @@
          "TTL": 300,
          "ResourceRecords": [
            {
-              "Value": "2600:1700:1e6c:a81f:19:a563:8600:2db6"
+              "Value": "2600:1700:1e6c:a81f:42:acff:fe1e:2101"
-            }
-          ]
-        }
-      },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "homeassistant.reeselink.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "10.1.27.89" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "nextcloud-aio.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "2600:1700:1e6c:a81f:5054:ff:fe03:880" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "nextcloud-aio.reeselink.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "10.1.175.237" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "unifi-external.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "2600:1700:1e6c:a81f:5054:ff:fea0:200c" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "unifi-external.reeselink.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "10.1.241.139" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "e3s1plus.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "2600:1700:1e6c:a81f:19a4:37de:9672:1f76" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "e3s1plus.reeselink.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "10.1.224.78" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "cr10se.reeselink.com", "Type": "A", "TTL": 300, "ResourceRecords": [ { "Value": "10.2.165.70" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "yellow-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::1" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "node1-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::3" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "node2-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::4" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "node3-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::5" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "driveripper-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::6" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "unifi-external-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::7" } ] } },
-      { "Action": "UPSERT", "ResourceRecordSet": { "Name": "nextcloud-aio-wg.reeselink.com", "Type": "AAAA", "TTL": 300, "ResourceRecords": [ { "Value": "fd00:fd41:d0f1:1010::8" } ] } },
      }
    ]
  }
82 external-dns/deploy.yaml Normal file

@@ -0,0 +1,82 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: external-dns
  labels:
    app.kubernetes.io/name: external-dns
rules:
  - apiGroups: [""]
    resources: ["services","endpoints","pods","nodes"]
    verbs: ["get","watch","list"]
  - apiGroups: ["extensions","networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get","watch","list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: external-dns-viewer
  labels:
    app.kubernetes.io/name: external-dns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: external-dns
subjects:
  - kind: ServiceAccount
    name: external-dns
    namespace: kube-system # change to desired namespace: externaldns, kube-addons
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: external-dns
  namespace: kube-system
  labels:
    app.kubernetes.io/name: external-dns
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app.kubernetes.io/name: external-dns
  template:
    metadata:
      labels:
        app.kubernetes.io/name: external-dns
    spec:
      serviceAccountName: external-dns
      containers:
        - name: external-dns
          image: registry.k8s.io/external-dns/external-dns:v0.14.2
          # image: nginx
          args:
            - --source=service
            - --source=ingress
            - --domain-filter=reeseapps.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
            - --provider=aws
            # - --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
            - --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
            - --registry=txt
            - --txt-owner-id=external-dns
          env:
            - name: AWS_DEFAULT_REGION
              value: us-east-1 # change to region where EKS is installed
            - name: AWS_SHARED_CREDENTIALS_FILE
              value: /.aws/externaldns-credentials
          volumeMounts:
            - name: aws-credentials
              mountPath: /.aws
              readOnly: true
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
      volumes:
        - name: aws-credentials
          secret:
            secretName: external-dns
8 external-dns/sa.yaml Normal file

@@ -0,0 +1,8 @@
# comment out sa if it was previously created
apiVersion: v1
kind: ServiceAccount
metadata:
  name: external-dns
  namespace: kube-system
  labels:
    app.kubernetes.io/name: external-dns
155 fedora/README.md Normal file

@@ -0,0 +1,155 @@
# Fedora Server

- [Fedora Server](#fedora-server)
  - [Installation](#installation)
  - [Setup SSH](#setup-ssh)
  - [Fail2Ban](#fail2ban)
  - [Automatic Updates](#automatic-updates)
  - [Disable Swap](#disable-swap)
  - [Extras](#extras)

<https://docs.fedoraproject.org/en-US/fedora-server/installation/postinstallation-tasks/#_manage_system_updates>

Note these instructions differentiate between an `operator` and a `server`. The operator can be
any machine that configures the server. A pipeline, laptop, dedicated server, etc. are all options.
The server can be its own operator, though that's not recommended since servers should be ephemeral
and the operator will store information about each server.

## Installation

1. Make sure to use the `custom` disk partitioner and select `btrfs`.
2. Create an administrator. We'll give ssh root access later, but this gives you a cockpit user.
3. Ensure the IPV6 connection is set to "eui64".
4. Set the hostname.

## Setup SSH

On the operator:

```bash
export SSH_HOST=kube
ssh-keygen -t rsa -b 4096 -C ducoterra@"$SSH_HOST".reeselink.com -f ~/.ssh/id_"$SSH_HOST"_rsa

# Note: If you get "too many authentication failures" it's likely because you have too many private
# keys in your ~/.ssh directory. Use `-o PubkeyAuthentication` to fix it.
ssh-copy-id -o PubkeyAuthentication=no -i ~/.ssh/id_"$SSH_HOST"_rsa.pub ducoterra@"$SSH_HOST".reeselink.com

cat <<EOF >> ~/.ssh/config

Host $SSH_HOST
    Hostname "$SSH_HOST".reeselink.com
    User root
    ProxyCommand none
    ForwardAgent no
    ForwardX11 no
    Port 22
    KeepAlive yes
    IdentityFile ~/.ssh/id_"$SSH_HOST"_rsa
EOF
```

On the server:

```bash
# Copy authorized_keys to root
sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys

# Change your password
passwd

sudo su -
echo "PasswordAuthentication no" > /etc/ssh/sshd_config.d/01-prohibit-password.conf
echo '%wheel ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/01-nopasswd-wheel
```

On the operator:

```bash
# Test if you can SSH with a password
ssh -o PubkeyAuthentication=no ducoterra@"$SSH_HOST".reeselink.com

# Test that you can log into the server with ssh config
ssh $SSH_HOST
```

## Fail2Ban

On the server:

```bash
dnf install -y fail2ban

# Setup initial rules
cat <<EOF > /etc/fail2ban/jail.local
# Jail configuration additions for local installation

# Adjust the default configuration's default values
[DEFAULT]
# Optionally enter a trusted IP never to ban
ignoreip = 2600:1700:1e6c:a81f::0/64
bantime = 6600
backend = auto

# The main configuration file defines all services but
# deactivates them by default. We have to activate those needed
[sshd]
enabled = true
EOF

systemctl enable fail2ban --now
tail -f /var/log/fail2ban.log
```

## Automatic Updates

On the server:

```bash
dnf install dnf-automatic -y

systemctl enable --now dnf-automatic-install.timer
```
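The `dnf-automatic-install.timer` variant applies updates on its own; if you prefer the plain `dnf-automatic.timer`, updates are only downloaded unless the config says otherwise. A minimal sketch, assuming the stock `/etc/dnf/automatic.conf` shipped by the package:

```bash
# Apply (not just download) updates and log results to the journal.
# These keys exist in the default /etc/dnf/automatic.conf; adjust to taste.
sudo sed -i 's/^apply_updates = no/apply_updates = yes/' /etc/dnf/automatic.conf
sudo sed -i 's/^emit_via = .*/emit_via = stdio/' /etc/dnf/automatic.conf

# Confirm the timer is actually scheduled
systemctl list-timers 'dnf-automatic*'
```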

## Disable Swap

```bash
swapoff -a
zramctl --reset /dev/zram0
dnf -y remove zram-generator-defaults
```

## Extras

On the server:

```bash
# Set vim as the default editor
dnf install -y vim-default-editor --allowerasing

# Install glances for system monitoring
dnf install -y glances

# Install zsh with autocomplete and suggestions
dnf install zsh zsh-autosuggestions zsh-syntax-highlighting

cat <<EOF > ~/.zshrc
# Basic settings
autoload bashcompinit && bashcompinit
autoload -U compinit; compinit
zstyle ':completion:*' menu select

# Prompt settings
autoload -Uz promptinit
promptinit
prompt redhat
PROMPT_EOL_MARK=

# Syntax Highlighting
source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
source /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh

### Custom Commands and Aliases ###
EOF

chsh -s $(which zsh) && chsh -s $(which zsh) ducoterra
```
@@ -8,6 +8,12 @@ ingress:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
+   apiVersion: networking.k8s.io/v1
+   nginx.ingress.kubernetes.io/stream-snippet: |
+     server {
+       listen 22;
+       proxy_pass [::]:22;
+     }
  hosts:
    - host: gitea.reeseapps.com
      paths:
@@ -20,15 +26,11 @@ ingress:

persistence:
  enabled: true
-  create: false
-  storageClass: zfs-iscsi-enc0
+  create: true
  claimName: data-gitea-0
  annotations:
    "helm.sh/resource-policy": keep

-global:
-  storageClass: zfs-iscsi-enc1

postgresql:
  enabled: true
  image:
@@ -36,7 +38,6 @@ postgresql:
  primary:
    persistence:
      enabled: true
-      storageClass: zfs-iscsi-enc1
      annotations:
        "helm.sh/resource-policy": keep

@@ -49,15 +50,21 @@ gitea:
  config:
    service:
      DISABLE_REGISTRATION: true
+    server:
+      SSH_PORT: 22
+      SSH_DOMAIN: git.reeseapps.com

service:
  ssh:
-    port: 2222
+    port: 22
    type: LoadBalancer
-    externalTrafficPolicy: Cluster
+    externalTrafficPolicy: Local
+    ipFamilyPolicy: SingleStack
+    ipFamilies: ["IPv6"]
    annotations:
-      metallb.universe.tf/address-pool: "production"
-      metallb.universe.tf/allow-shared-ip: "production"
+      metallb.universe.tf/address-pool: "external"
+      external-dns.alpha.kubernetes.io/hostname: git.reeseapps.com

redis-cluster:
  enabled: false
@@ -3,10 +3,11 @@ kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
-    metallb.universe.tf/address-pool: "production"
-    metallb.universe.tf/allow-shared-ip: "production"
+    metallb.universe.tf/address-pool: "internal"
+    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
spec:
  type: LoadBalancer
+  ipFamilies: ["IPv6"]
  selector:
    app.kubernetes.io/name: iperf
  ports:
@@ -5,7 +5,6 @@ metadata:
  annotations:
    "helm.sh/resource-policy": keep
spec:
-  storageClassName: zfs-nfs-enc1
  accessModes:
    - ReadWriteOnce
  resources:
@@ -3,27 +3,10 @@ kind: Service
metadata:
  name: {{ .Release.Name }}
  annotations:
-    metallb.universe.tf/address-pool: "production"
-    metallb.universe.tf/allow-shared-ip: "production"
+    metallb.universe.tf/address-pool: "external"
+    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
-spec:
-  externalTrafficPolicy: Cluster
-  selector:
-    app: {{ .Release.Name }}
-  ports:
-    - port: {{ .Values.port }}
-      targetPort: 25565
-      name: {{ .Release.Name }}
-  type: LoadBalancer
-
----
-
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ .Release.Name }}v6
-  annotations:
-    metallb.universe.tf/address-pool: "productionv6"
spec:
+  ipFamilies: ["IPv6"]
  externalTrafficPolicy: Cluster
  selector:
    app: {{ .Release.Name }}
@@ -1,45 +1,65 @@
-image: ducoterra/minecraft:latest
+image: ducoterra/minecraft:1.0.4
get_server:
  image: ducoterra/get-minecraft:2.0.2
-  server_version: "1.20.2"
+  server_version: "1.21"
port: 25565
max_cpu: 4
max_ram: 8
server_props: |
-  max-tick-time=60000
-  generator-settings=
-  force-gamemode=false
-  allow-nether=true
-  gamemode=0
-  broadcast-console-to-ops=true
-  enable-query=false
-  player-idle-timeout=0
-  difficulty=3
-  spawn-monsters=true
-  op-permission-level=4
-  pvp=false
-  snooper-enabled=true
-  level-type=amplified
-  hardcore=false
+  enable-jmx-monitoring=false
+  rcon.port=25575
+  level-seed=
+  gamemode=survival
  enable-command-block=false
-  max-players=20
+  enable-query=false
+  generator-settings={}
+  enforce-secure-profile=true
+  level-name=world
+  motd=A Minecraft Server
+  query.port=25565
+  pvp=true
+  generate-structures=true
+  max-chained-neighbor-updates=1000000
+  difficulty=easy
  network-compression-threshold=256
+  max-tick-time=600000
+  require-resource-pack=false
+  use-native-transport=true
+  max-players=20
+  online-mode=true
+  enable-status=true
+  allow-flight=false
+  initial-disabled-packs=
+  broadcast-rcon-to-ops=true
+  view-distance=10
+  server-ip=
+  resource-pack-prompt=
+  allow-nether=true
+  server-port=25565
+  enable-rcon=false
+  sync-chunk-writes=true
+  op-permission-level=4
+  prevent-proxy-connections=false
+  hide-online-players=false
+  resource-pack=
+  entity-broadcast-range-percentage=100
+  simulation-distance=10
+  rcon.password=
+  player-idle-timeout=0
+  force-gamemode=false
+  rate-limit=0
+  hardcore=false
+  white-list=true
+  broadcast-console-to-ops=true
+  spawn-npcs=true
+  spawn-animals=true
+  log-ips=true
+  function-permission-level=2
+  initial-enabled-packs=vanilla
+  level-type=minecraft\:normal
+  text-filtering-config=
+  spawn-monsters=true
+  enforce-whitelist=true
+  spawn-protection=16
  resource-pack-sha1=
  max-world-size=29999984
-  server-port=25565
-  server-ip=
-  spawn-npcs=true
-  allow-flight=true
-  level-name=world
-  view-distance=32
-  resource-pack=
-  spawn-animals=true
-  white-list=true
-  generate-structures=true
-  online-mode=true
-  max-build-height=512
-  level-seed=
-  prevent-proxy-connections=false
-  use-native-transport=true
-  motd=Courniiiiiiieeeeeeeeee
-  enable-rcon=false
@@ -4,10 +4,10 @@ metadata:
  name: {{ .Release.Name }}
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
-    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
+  ingressClassName: nginx
  rules:
    - host: {{ .Values.snapdrop.domain }}
      http:
@@ -5,7 +5,6 @@ metadata:
  annotations:
    "helm.sh/resource-policy": keep
spec:
-  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
@@ -2,12 +2,11 @@ controller:
  service:
    externalTrafficPolicy: Local
    annotations:
-      metallb.universe.tf/address-pool: "nginx"
-      metallb.universe.tf/allow-shared-ip: "nginx"
+      metallb.universe.tf/address-pool: "external"
+      metallb.universe.tf/allow-shared-ip: nginx
+    ipFamilyPolicy: SingleStack
+    ipFamilies:
+      - IPv6
  config:
-    enable-real-ip: "true"
-    use-forwarded-headers: "true"
-    compute-full-forwarded-for: "true"
-    proxy-real-ip-cidr: "0.0.0.0/0"
-    use-proxy-protocol: "true"
    log-format-upstream: '| Proxy Proto Addr: $proxy_protocol_addr | Remote Addr: $remote_addr:$server_port | Host: $host | Referer: $http_referer | $request | $time_local | $status |'
+  allowSnippetAnnotations: true
@@ -1,328 +0,0 @@
# Fedora Server

Fedora server is an awesome container hosting OS. It has a lot built in, and setup is pretty
quick.

- [Fedora Server](#fedora-server)
  - [Initialization](#initialization)
  - [Disable swap](#disable-swap)
  - [Network](#network)
  - [Enable ISCSI](#enable-iscsi)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S](#install-k3s)
    - [Database Backups](#database-backups)
  - [Expanding Root Partition](#expanding-root-partition)
  - [Arping IP Address](#arping-ip-address)
  - [Optional Steps](#optional-steps)
    - [Certbot for Cockpit](#certbot-for-cockpit)

## Initialization

1. `dnf install vim pwgen wireguard-tools`
2. `hostnamectl hostname node1`
3. Set a static IP through the web interface
4. Allow wheel group members to sudo without password

## Disable swap

```bash
swapoff -a
dnf remove zram-generator-defaults
```

mask <systemd-zram-setup@zram0.service>

## Network

1. Set MTU to 9000

If your network supports it, use 9000 as your mtu to allow more data per packet between
servers. Note! For bridge interfaces you must set both the physical interface and bridge
interface to 9000 - setting one but not the other can cause connectivity problems.

## Enable ISCSI

```bash
# Install the following system packages
dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath

# Enable multipathing
mpathconf --enable --with_multipathd y

# Ensure that iscsid and multipathd are running
systemctl enable --now iscsid multipathd

# Test that discovery works
iscsiadm -m discovery -t st -p democratic-csi-server.reeselink.com
# Remove them - democratic-csi will populate this
rm -rf /var/lib/iscsi/nodes/

# Start and enable iscsi
systemctl enable --now iscsi
```

## Disable Firewalld

<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>

Disable firewalld. You could add rules for each service, but every time you open a port
from a container you'd need to add a firewalld rule.

You can disable firewalld from the web interface.

## Set SELinux to Permissive

K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that, however. We'll set it to permissive and you can reenable it once you've added all
the rules you need to keep your services running.

Set SELinux to permissive by editing `/etc/selinux/config`

    SELINUX=permissive

## Install K3S

<https://docs.k3s.io/installation/requirements>

We're going to be tweaking some installation parameters so if you already have k3s
installed you can either uninstall it or skip these steps.

This installation disables Traefik, local-storage, and Klipper. We'll replace them with
our own components.

1. Generate a secure token for each node to use when connecting

       umask 077
       echo -n $(pwgen 16 4) | sed 's/ /-/g' > token.txt

2. Create the cluster

       export SECRET=$(cat token.txt)

       curl -sfL https://get.k3s.io | K3S_TOKEN=$SECRET sh -s - \
         "--cluster-init" \
         "--flannel-backend=wireguard-native" \
         "--disable" \
         "traefik" \
         "--disable" \
         "local-storage" \
         "--disable" \
         "servicelb" \
         "--disable" \
         "coredns" \
         "--cluster-dns" \
         "10.43.0.10" \
         "--tls-san" \
         "kube.reeselink.com" \
         "--tls-san" \
         "kube.reeseapps.com" \

3. Join each server node

       export SECRET=$(cat token.txt)

       curl -sfL https://get.k3s.io | K3S_TOKEN=$SECRET sh -s - server \
         --server https://node1.reeselink.com:6443 \
         --flannel-backend=wireguard-native \
         "--disable" \
         "traefik" \
         "--disable" \
         "local-storage" \
         "--disable" \
         "coredns" \
         "--disable" \
         "servicelb" \
         "--cluster-dns" \
         "10.43.0.10" \
         "--tls-san" \
         "kube.reeselink.com" \

Now you can change the ownership of (and copy) the k3s.yaml file:

    chown ducoterra /etc/rancher/k3s/k3s.yaml

    scp /etc/rancher/k3s/k3s.yaml ~/.kube/config

Edit ~/.kube/config and change 127.0.0.1 to containers.reeselink.com

### Database Backups

<https://docs.k3s.io/cli/etcd-snapshot>

Note, you must backup `/var/lib/rancher/k3s/server/token`
and use the contents as the token when restoring the backup as data is encrypted with that token.

## Expanding Root Partition

```bash
dnf install cloud-utils-growpart
growpart /dev/sda 3
lvextend -l +100%FREE fedora
xfs_growfs /dev/mapper/fedora-root
```

## Arping IP Address

```bash
echo 1 > /proc/sys/net/ipv4/ip_nonlocal_bind
arping -I bridge0 -U -s 10.1.2.102 10.1.0.1
```

## Optional Steps

### Certbot for Cockpit

During this process you'll pick one node to act as your manager for your other nodes.
You'll only need to cert a single node and then it will connect via ssh over your local
network to the other nodes.

Create an AWS user which will have route53 access. This is required for certbot's route53
validation.

```bash
aws iam create-user --user-name replicator
```

You'll also need a policy which allows the user to modify the selected hosted zone:

(list with `aws route53 list-hosted-zones`)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:ListHostedZones",
        "route53:GetChange"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect" : "Allow",
      "Action" : [
        "route53:ChangeResourceRecordSets"
      ],
      "Resource" : [
        "arn:aws:route53:::hostedzone/Z012820733346FJ0U4FUF",
        "arn:aws:route53:::hostedzone/Z0092652G7L97DSINN18",
        "arn:aws:route53:::hostedzone/Z04612891U5Q2JRHUZ11T"
      ]
    }
  ]
}
```

Attach the policy to the user:

```bash
aws iam attach-user-policy \
  --user-name replicator \
  --policy-arn arn:aws:iam::892236928704:policy/certbot-route53-reeseapps
```

Generate credentials:

```bash
aws iam create-access-key --user-name replicator
```

On the host machine:

```bash
mkdir ~/.aws
vim ~/.aws/config
```

```conf
[profile default]
region=us-east-2
```

```bash
vim ~/.aws/credentials
```

```conf
[default]
aws_access_key_id=<key>
aws_secret_access_key=<key>
```

Install the aws cli v2 on the manager node:

```bash
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
```

Test your credentials with `aws route53 list-hosted-zones`. You should see a list of your
hosted zones.

Now install certbot and acquire a cert using those credentials:

```bash
sudo dnf install certbot python3-certbot-dns-route53
sudo certbot certonly --dns-route53 -d containers.reeselink.com

sudo cp /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
sudo cp /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
```

Test the renewal process with:

```bash
sudo certbot renew --cert-name containers.reeselink.com --dry-run
```

Create a renewal script in /usr/lib/scripts/certbot-renew.sh

/usr/lib/scripts/certbot-renew.sh (chmod +x)

```bash
#!/bin/bash

/usr/bin/certbot renew --cert-name containers.reeselink.com
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeselink.com/fullchain.pem /etc/cockpit/ws-certs.d/50-letsencrypt.cert
/usr/bin/cp -f /etc/letsencrypt/live/containers.reeselink.com/privkey.pem /etc/cockpit/ws-certs.d/50-letsencrypt.key
```

Now create a systemd oneshot service to run the script

/etc/systemd/system/certbot-renew.service

```conf
[Unit]
Description=Certbot Renewal

[Service]
Type=oneshot
ExecStart=/usr/lib/scripts/certbot-renew.sh
```

/etc/systemd/system/certbot-renew.timer

```conf
[Unit]
Description=Timer for Certbot Renewal

[Timer]
OnBootSec=300
OnUnitActiveSec=1w

[Install]
WantedBy=multi-user.target
```

Enable the service

```bash
systemctl enable --now certbot-renew.timer
```

Cockpit now has a valid TLS certificate that auto-renews!
555 k3s/README.md Normal file

@@ -0,0 +1,555 @@
# K3S

- [K3S](#k3s)
  - [Guide](#guide)
  - [Disable Firewalld](#disable-firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
  - [Kube Credentials](#kube-credentials)
  - [Storage](#storage)
  - [Coredns](#coredns)
  - [Metal LB](#metal-lb)
  - [External DNS](#external-dns)
    - [Credentials](#credentials)
    - [Annotation](#annotation)
  - [Nginx Ingress](#nginx-ingress)
  - [Cert Manager](#cert-manager)
  - [Test Minecraft Server](#test-minecraft-server)
  - [Automatic Updates](#automatic-updates)
  - [Manual Updates](#manual-updates)
  - [Create a Userspace](#create-a-userspace)
    - [Quickstart](#quickstart)
    - [Userspace](#userspace)
      - [Namespace](#namespace)
      - [Roles](#roles)
      - [Rolebinding](#rolebinding)
    - [Manual Steps](#manual-steps)
      - [Create a kubernetes certsigner pod](#create-a-kubernetes-certsigner-pod)
      - [Create the certsigner secret](#create-the-certsigner-secret)
      - [Set up the certsigner pod](#set-up-the-certsigner-pod)
      - [Generate a cert](#generate-a-cert)
      - [Create a new Userspace](#create-a-new-userspace)
      - [Sign the cert](#sign-the-cert)
      - [Add to the config](#add-to-the-config)
      - [Delete](#delete)
    - [Signing a user cert - detailed notes](#signing-a-user-cert---detailed-notes)
  - [Help](#help)
    - [Troubleshooting](#troubleshooting)
      - [Deleting a stuck namespace](#deleting-a-stuck-namespace)
      - [Fixing a bad volume](#fixing-a-bad-volume)
      - [Mounting an ix-application volume from truenas](#mounting-an-ix-application-volume-from-truenas)
      - [Mounting a volume](#mounting-a-volume)
  - [Database Backups](#database-backups)
  - [Uninstall](#uninstall)

## Guide

1. Configure Host
2. Install CoreDNS for inter-container discovery
3. Install Metal LB for load balancer IP address assignment
4. Install External DNS for load balancer IP and ingress DNS records
5. Install Nginx Ingress for http services
6. Install Cert Manager for automatic Let's Encrypt certificates for Ingress nginx
7. Install longhorn storage for automatic PVC creation and management
8. Set up automatic database backups

## Disable Firewalld

<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>

Disable firewalld. You could add rules for each service, but every time you open a port
from a container you'd need to add a firewalld rule.

You can disable firewalld from the web interface.

## Set SELinux to Permissive

K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that, however. We'll set it to permissive and you can reenable it once you've added all
the rules you need to keep your services running.

Set SELinux to permissive by editing `/etc/selinux/config`

    SELINUX=permissive

## Install K3S (Single Node)

```bash
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
    "--flannel-ipv6-masq" \
    "--disable" \
    "traefik" \
    "--disable" \
    "servicelb" \
    "--disable" \
    "coredns" \
    "--tls-san" \
    "kube.reeselink.com" \
    "--cluster-cidr" \
    "10.42.0.0/16,fd02:c91e:56f4::/56" \
    "--service-cidr" \
    "10.43.0.0/16,fd02:c91e:56f5::/112" \
    "--cluster-dns" \
    "fd02:c91e:56f5::10"
```
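To sanity-check the dual-stack setup once the install finishes, a quick look at the node and a throwaway pod is usually enough; this is just a verification sketch, nothing here is required by the install itself:

```bash
# Confirm the node registered and has both an IPv4 and an IPv6 InternalIP
kubectl get nodes -o wide

# Launch a throwaway pod and make sure it gets an address from the
# fd02:c91e:56f4::/56 pod CIDR configured above
kubectl run tmp --image=busybox --restart=Never --rm -it -- ip -6 addr
```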

## Kube Credentials

On the operator:

```bash
# Copy the kube config down
scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config

# Edit the server to match the remote address.
```
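One way to do that edit, as a sketch - it assumes `kube.reeselink.com` is the TLS SAN passed to the installer above:

```bash
# Point the copied kubeconfig at the server's real address instead of 127.0.0.1
sed -i 's/127.0.0.1/kube.reeselink.com/' ~/.kube/admin-kube-config

# Use it explicitly and confirm access
KUBECONFIG=~/.kube/admin-kube-config kubectl get nodes
```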

## Storage

1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit fstab to mount your drive to `/var/lib/rancher/k3s/storage` (see the example entry below)
3. `systemctl daemon-reload`
4. `mount -a`
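A sketch of what that fstab entry might look like - the device label and filesystem here are assumptions, not something the repo prescribes:

```bash
# Append an entry for the data disk that backs k3s volumes (adjust device/fs to your disk)
echo '/dev/disk/by-label/k3s-data  /var/lib/rancher/k3s/storage  xfs  defaults,nofail  0 0' | sudo tee -a /etc/fstab

# Then reload systemd's view of fstab and mount it
sudo systemctl daemon-reload && sudo mount -a
```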

## Coredns

1. Edit `coredns/values.yaml` to ensure the forward nameserver is correct.

```bash
# Install CoreDNS
helm upgrade --install \
    --namespace=kube-system \
    --values coredns/values.yaml \
    coredns coredns/coredns

# Test DNS works
kubectl run -it --rm \
    --restart=Never \
    --image=infoblox/dnstools:latest \
    dnstools
```
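For reference, the forward nameserver from step 1 lives under the chart's `servers[].plugins` list. A minimal sketch of the relevant part of `coredns/values.yaml` (the upstream resolver address is an assumption, not the repo's actual value):

```yaml
# coredns/values.yaml (excerpt) - only the forward plugin matters for step 1
servers:
  - zones:
      - zone: .
    port: 53
    plugins:
      - name: kubernetes
        parameters: cluster.local in-addr.arpa ip6.arpa
        configBlock: |-
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
      - name: forward
        # Upstream resolver for everything that is not a cluster name (assumed value)
        parameters: . 1.1.1.1
      - name: cache
        parameters: 30
```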

## Metal LB

We'll be swapping K3S's default load balancer with Metal LB for more flexibility. ServiceLB was
struggling to allocate IP addresses for load balanced services. MetalLB does make things a little
more complicated - you'll need special annotations (see below) - but it's otherwise a well-tested,
stable load balancing service with features to grow into.

Metallb is pretty cool. It works via l2 advertisement or BGP. We won't be using BGP, so let's
focus on l2.

When we connect our nodes to a network we give them an IP address range: ex. `192.168.122.20/24`.
This range represents all the available addresses the node could be assigned. Usually we assign
a single "static" IP address to our node and direct traffic to it by port forwarding from our
router. This is fine for single nodes - but what if we have a cluster of nodes and we don't want
our service to disappear just because one node is down for maintenance?

This is where l2 advertising comes in. Metallb will assign a static IP address from a given
pool to any arbitrary node - then advertise that node's mac address as the location for the
IP. When that node goes down metallb simply advertises a new mac address for the same IP
address, effectively moving the IP to another node. This isn't really "load balancing" but
"failover". Fortunately, that's exactly what we're looking for.

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update
helm upgrade --install metallb \
    --namespace kube-system \
    metallb/metallb
```

MetalLB doesn't know what IP addresses are available for it to allocate so we'll have
to provide it with a list. The `metallb-addresspool.yaml` has one IP address (we'll get to
IP address sharing in a second) which is an unassigned IP address not allocated to any of our
nodes. Note if you have many public IPs which all point to the same router or virtual network
you can list them. We're only going to use one because we want to port forward from our router.

```bash
# create the metallb allocation pool
kubectl apply -f metallb/addresspool.yaml
```
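The repo's `metallb/addresspool.yaml` isn't reproduced here, but a minimal sketch of such a file, using the `external`/`internal` pool names referenced by the annotations below, looks like this (the addresses are placeholders; the L2Advertisement is what actually makes MetalLB answer ARP/NDP for the pool):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: external
  namespace: kube-system
spec:
  addresses:
    # Placeholder ranges - substitute the unassigned addresses reserved for MetalLB
    - 10.1.2.200/32
    - 2600:1700:1e6c:a81f::200/128
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2
  namespace: kube-system
spec:
  ipAddressPools:
    - external
```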

You'll need to annotate your service as follows if you want an external IP:

```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "external"
    # or
    metallb.universe.tf/address-pool: "internal"
spec:
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv6
```

## External DNS

<https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md>

### Credentials

1. Generate credentials for the cluster

```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps

SECRET_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')

cat <<-EOF > secrets/externaldns-credentials

[default]
aws_access_key_id = $(echo $ACCESS_KEY_ID)
aws_secret_access_key = $(echo $SECRET_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')
EOF

kubectl create secret generic external-dns \
    --namespace kube-system --from-file secrets/externaldns-credentials

kubectl apply -f external-dns/sa.yaml

kubectl apply -f external-dns/deploy.yaml
```

### Annotation

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
```
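Putting the two pieces together, a LoadBalancer Service that gets an address from MetalLB and a matching Route53 record from ExternalDNS would look roughly like this sketch (the name, ports and hostname are illustrative, not from the repo):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo                                    # illustrative name
  annotations:
    metallb.universe.tf/address-pool: "external"
    external-dns.alpha.kubernetes.io/hostname: demo.reeseapps.com
spec:
  type: LoadBalancer
  ipFamilyPolicy: SingleStack
  ipFamilies: ["IPv6"]
  selector:
    app: demo
  ports:
    - port: 80
      targetPort: 8080
```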

## Nginx Ingress

Now we need an ingress solution (preferably with certs for https). We'll be using nginx since
it's a little bit more configurable than traefik (though don't sell traefik short, it's really
good - just finicky when you have use cases they haven't explicitly coded for).

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
    ingress-nginx \
    ingress-nginx/ingress-nginx \
    --values ingress-nginx/values.yaml \
    --namespace kube-system
```

## Cert Manager

Install cert-manager

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
    cert-manager jetstack/cert-manager \
    --namespace kube-system \
    --set crds.enabled=true
```

Create the let's encrypt issuer (Route53 DNS)

```bash
kubectl apply -f certmanager/letsencrypt-issuer.yaml
```
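The issuer itself lives in `certmanager/letsencrypt-issuer.yaml`. For orientation, a Route53 DNS-01 ClusterIssuer generally looks something like the sketch below; the email, region, secret name and key names are assumptions, not a copy of the repo's file:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    email: admin@reeseapps.com                    # assumed contact address
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - dns01:
          route53:
            region: us-east-1
            # Credentials come from a secret here; instance/IRSA roles also work
            accessKeyIDSecretRef:
              name: route53-credentials
              key: access-key-id
            secretAccessKeySecretRef:
              name: route53-credentials
              key: secret-access-key
```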

You can test if your ingress is working with:

```bash
# Navigate to demo.reeseapps.com
kubectl apply -f k3s/tests/ingress-nginx-test.yaml

# Cleanup
kubectl delete -f k3s/tests/ingress-nginx-test.yaml
```

## Test Minecraft Server

```bash
helm upgrade --install minecraft ./helm/minecraft -n minecraft --create-namespace
helm upgrade --install minecraft1 ./helm/minecraft -n minecraft --create-namespace
```

## Automatic Updates

<https://docs.k3s.io/upgrades/automated>

```bash
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f k3s/upgrade-plan.yaml
```

## Manual Updates

<https://docs.k3s.io/upgrades/manual#manually-upgrade-k3s-using-the-binary>

```bash
sudo su -
wget https://github.com/k3s-io/k3s/releases/download/v1.28.3%2Bk3s1/k3s
systemctl stop k3s
chmod +x k3s
mv k3s /usr/local/bin/k3s
systemctl start k3s
```

## Create a Userspace

This creates a user, namespace, and permissions with a simple script.

### Quickstart

```bash
# Create certsigner pod for all other operations
./setup.sh <server_fqdn>

# Create a user, use "admin" to create an admin user
./upsertuser.sh <ssh_address> <server_fqdn (for kubectl)> <user>

# Remove a user, their namespace, and their access
./removeuserspace <server_fqdn> <user>
```

### Userspace

#### Namespace

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: {{ .Release.Name }}
```

#### Roles

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: namespace-manager
  namespace: {{ .Release.Name }}
rules:
  - apiGroups:
      - ""
      - extensions
      - apps
      - batch
      - autoscaling
      - networking.k8s.io
      - traefik.containo.us
      - rbac.authorization.k8s.io
      - metrics.k8s.io
    resources:
      - deployments
      - replicasets
      - pods
      - pods/exec
      - pods/log
      - pods/attach
      - daemonsets
      - statefulsets
      - replicationcontrollers
      - horizontalpodautoscalers
      - services
      - ingresses
      - persistentvolumeclaims
      - jobs
      - cronjobs
      - secrets
      - configmaps
      - serviceaccounts
      - rolebindings
      - ingressroutes
      - middlewares
      - endpoints
    verbs:
      - "*"
  - apiGroups:
      - ""
      - metrics.k8s.io
      - rbac.authorization.k8s.io
    resources:
      - resourcequotas
      - roles
    verbs:
      - list
```

#### Rolebinding

```yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  namespace: {{ .Release.Name }}
  name: namespace-manager
subjects:
  - kind: User
    name: {{ .Release.Name }}
    apiGroup: ""
roleRef:
  kind: ClusterRole
  name: namespace-manager
  apiGroup: ""
```

### Manual Steps

#### Create a kubernetes certsigner pod

This keeps the client-ca crt and key secret and allows the cert to be signed and stored on the pod.

#### Create the certsigner secret

```bash
kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
```

#### Set up the certsigner pod

```bash
scp certsigner.yaml <server>:~/certsigner.yaml
kubectl apply -f certsigner.yaml
```

#### Generate a cert

```bash
export USER=<user>
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl genrsa -out /$USER/$USER.key 2048
docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl req -new -key /$USER/$USER.key -out /$USER/$USER.csr -subj "/CN=$USER/O=user"
```

#### Create a new Userspace

```bash
helm template $USER ./namespace | kubectl --context admin apply -f -
```

#### Sign the cert

```bash
export USER=<user>
kubectl --context admin cp $(pwd)/users/$USER/$USER.csr certsigner:/certs/$USER.csr
kubectl --context admin exec -it certsigner -- openssl x509 -in /certs/$USER.csr -req -CA /keys/client-ca.crt -CAkey /keys/client-ca.key -CAcreateserial -out /certs/$USER.crt -days 5000
kubectl --context admin cp certsigner:/certs/$USER.crt $(pwd)/users/$USER/$USER.crt
```

#### Add to the config

```bash
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
```

#### Delete

```bash
kubectl config delete-context $USER
helm template $USER ./namespace | kubectl --context admin delete -f -
```

### Signing a user cert - detailed notes

NOTE: ca.crt and ca.key are in /var/lib/rancher/k3s/server/tls/client-ca.*

```bash
# First we create the credentials
# /CN=<username> - the user
# /O=<group> - the group

# Navigate to the user directory
export USER=<username>
cd $USER

# Generate a private key
openssl genrsa -out $USER.key 2048
# Check the key
# openssl pkey -in ca.key -noout -text
# Generate and send me the CSR
# The "user" group is my default group
openssl req -new -key $USER.key -out $USER.csr -subj "/CN=$USER/O=user"

# Check the CSR
# openssl req -in $USER.csr -noout -text
# If satisfactory, sign the CSR
# Copy from /var/lib/rancher/k3s/server/tls/client-ca.crt and client-ca.key
openssl x509 -req -in $USER.csr -CA ../client-ca.crt -CAkey ../client-ca.key -CAcreateserial -out $USER.crt -days 5000
# Review the certificate
# openssl x509 -in $USER.crt -text -noout

# Send back the crt
# cp $USER.crt $USER.key ../server-ca.crt ~/.kube/
kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER

# Now we create the namespace, rolebindings, and resource quotas
# kubectl apply -f k8s/

# Add the cluster to the kubeconfig
# CA file can be found at https://3.14.3.100:6443/cacerts
# - cluster:
#     certificate-authority: server-ca.crt
#     server: https://3.14.3.100:6443
#   name: mainframe

# Test if everything worked
kubectl --context=$USER-context get pods
```

## Help

### Troubleshooting

#### Deleting a stuck namespace

```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```

#### Fixing a bad volume

```bash
xfs_repair -L /dev/sdg
```

#### Mounting an ix-application volume from truenas

```bash
# set the mountpoint
zfs set mountpoint=/ix_pvc enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b

# "unset" the mountpoint (back to legacy)
zfs set mountpoint=legacy enc1/ix-applications/releases/gitea/volumes/pvc-40e27277-71e3-4469-88a3-a39f53435a8b
```

#### Mounting a volume

```bash
# mount
mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /mnt/dcsi_pvc

# unmount
umount /mnt/dcsi_pvc
```

## Database Backups

<https://docs.k3s.io/cli/etcd-snapshot>

Note, you must back up `/var/lib/rancher/k3s/server/token`
and use the contents as the token when restoring the backup, as data is encrypted with that token.
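A sketch of the snapshot and restore flow from those docs (snapshot names and paths are illustrative). Note that `k3s etcd-snapshot` only applies when k3s runs with embedded etcd; a plain single-node SQLite install is backed up by copying `/var/lib/rancher/k3s/server/db` instead.

```bash
# Take an on-demand snapshot (k3s also snapshots etcd on a schedule by default)
k3s etcd-snapshot save --name pre-upgrade

# Keep the token alongside the snapshot - it is needed to decrypt the restored data
cp /var/lib/rancher/k3s/server/token /root/k3s-token-backup

# Restore: stop k3s, then reset the cluster from the chosen snapshot file
systemctl stop k3s
k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/pre-upgrade
```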

## Uninstall

```bash
/usr/local/bin/k3s-uninstall.sh
```
@@ -1,9 +0,0 @@
# Kubernetes Node Ansible

Updates /etc/hosts on each kubernetes node with the correct IP for democratic-csi-server.reeselink.com

## Update Hosts

```bash
ansible-playbook -i ansible/inventory.yaml k3s/hosts/update_hosts.yaml
```
@@ -1,4 +0,0 @@
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
# 172.20.0.1 democratic-csi-server.reeselink.com
fd00:fd41:d0f1:1010::6 democratic-csi-server.reeselink.com
@@ -1,20 +0,0 @@
- name: Update /etc/hosts
  hosts: kubernetes
  become: true
  become_user: root
  become_method: sudo
  tasks:
    - name: Copy /etc/hosts
      ansible.builtin.copy:
        src: ./hosts
        dest: /etc/hosts
        owner: root
        group: root
        mode: '0644'
    # - name: Add IP address of all hosts to /etc/hosts
    #   lineinfile:
    #     dest: /etc/hosts
    #     regexp: '.*{{ item.value.address }}$'
    #     line: "{{ item.value.address }} {{ item.value.hostname }}"
    #     state: present
    #   loop: "{{ ip | dict2items }}"
@@ -44,12 +44,13 @@ metadata:
  name: ingress-nginx-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
-    kubernetes.io/ingress.class: nginx
+    external-dns.alpha.kubernetes.io/ttl: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
+  ingressClassName: nginx
  rules:
-    - host: ingress-nginx-demo.reeseapps.com
+    - host: demo.reeseapps.com
      http:
        paths:
          - path: /
@@ -61,5 +62,5 @@ spec:
              number: 80
  tls:
    - hosts:
-        - ingress-nginx-demo.reeseapps.com
+        - demo.reeseapps.com
      secretName: ingress-nginx-demo-tls-cert
@@ -16,27 +16,4 @@ spec:
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
    channel: https://update.k3s.io/v1-release/channels/latest
----
-# Agent plan
-apiVersion: upgrade.cattle.io/v1
-kind: Plan
-metadata:
-  name: agent-plan
-  namespace: system-upgrade
-spec:
-  concurrency: 1
-  cordon: true
-  nodeSelector:
-    matchExpressions:
-      - key: node-role.kubernetes.io/control-plane
-        operator: DoesNotExist
-  prepare:
-    args:
-      - prepare
-      - server-plan
-    image: rancher/k3s-upgrade
-  serviceAccountName: system-upgrade
-  upgrade:
-    image: rancher/k3s-upgrade
-    channel: https://update.k3s.io/v1-release/channels/latest
42 kubectl/upgrade-plan.yaml Normal file

@@ -0,0 +1,42 @@
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
    channel: https://update.k3s.io/v1-release/channels/latest
---
# Agent plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  prepare:
    args:
      - prepare
      - server-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
    channel: https://update.k3s.io/v1-release/channels/latest
@@ -1,4 +1,7 @@
-# Wireguard
+# Service Mesh
 
+This will be handled by wireguard. The goal is to establish encrypted communication between
+hosts for iscsi/nfs/http services.
+
 ## Install Wireguard
 
@@ -17,7 +20,7 @@ ansible-playbook -i ansible/inventory.yaml mesh/peers.yaml
 Collect DNS records from vars.yaml
 
 ```bash
-cat wireguard/vars.yaml | \
+cat mesh/vars.yaml | \
 yq -r '.ip | map([.hostname + "-wg.reeselink.com", .address]).[].[]' > dns/duconet-wg.txt
 ```
 
@@ -2,39 +2,40 @@
   hosts:
     - colors
     - kubernetes
-    - truenas
-    - nextcloud-aio
-    - unifi-external
+    - managed
   become: true
   become_user: root
   become_method: sudo
   vars_files:
     - vars.yaml
   tasks:
-    - name: Check if duconet-wg exists
-      shell: ip link show duconet-wg
+    - name: Check if {{ wireguard.interface }} exists
+      shell: ip link show {{ wireguard.interface }}
       register: link_check
       ignore_errors: yes
-    - name: Add duconet-wg link
-      shell: ip link add dev duconet-wg type wireguard
-      when: link_check.rc != 0
-    - name: Add duconet-wg addresses
-      shell: "ip address add dev duconet-wg {{ ip[inventory_hostname].address }}/64"
+    - name: Add {{ wireguard.interface }} link
+      shell: ip link add dev {{ wireguard.interface }} type wireguard
       when: link_check.rc != 0
+    - name: Add {{ wireguard.interface }} ipv6 addresses
+      shell: "ip address add dev {{ wireguard.interface }} {{ ip[inventory_hostname].address_ipv6 }}/64"
+      ignore_errors: yes
+    - name: Add {{ wireguard.interface }} ipv4 addresses
+      shell: "ip address add dev {{ wireguard.interface }} {{ ip[inventory_hostname].address_ipv4 }}/24"
+      ignore_errors: yes
     - name: wg set port/key
       shell: >
-        wg set duconet-wg
+        wg set {{ wireguard.interface }}
         listen-port {{ wireguard.listen_port }}
         private-key /etc/wireguard/privatekey
     - name: Set link up
-      shell: ip link set up dev duconet-wg
-    - name: Touch duconet-wg.conf
+      shell: ip link set up dev {{ wireguard.interface }}
+    - name: Touch {{ wireguard.interface }}.conf
       ansible.builtin.file:
-        path: /etc/wireguard/duconet-wg.conf
+        path: /etc/wireguard/{{ wireguard.interface }}.conf
         state: touch
     - name: save wg config
-      shell: wg-quick save duconet-wg
-    - name: Enable wg-quick@duconet-wg
+      shell: wg-quick save {{ wireguard.interface }}
+    - name: Enable wg-quick@{{ wireguard.interface }}
       ansible.builtin.systemd_service:
-        name: wg-quick@duconet-wg
+        name: wg-quick@{{ wireguard.interface }}
         enabled: true
@@ -2,9 +2,7 @@
   hosts:
     - colors
     - kubernetes
-    - truenas
-    - nextcloud-aio
-    - unifi-external
+    - managed
   become: true
   become_user: root
   become_method: sudo
@@ -2,9 +2,7 @@
   hosts:
     - colors
     - kubernetes
-    - truenas
-    - nextcloud-aio
-    - unifi-external
+    - managed
   become: true
   become_user: root
   become_method: sudo
@@ -12,17 +10,25 @@
     - vars.yaml
   tasks:
     - name: delete unused peers
-      shell: wg set duconet-wg peer {{ item }} remove
+      shell: wg set {{ wireguard.interface }} peer {{ item }} remove
       loop:
         - "CQxNsdPgfzjvOszjn/UZHFdAY3k+D9J+vI8qKUjCYV0="
     - name: wg set peers
       shell: >
-        wg set duconet-wg
+        wg set {{ wireguard.interface }}
         peer {{ item.public_key }}
-        allowed-ips '{{ ip[item.name].address }}'
+        allowed-ips '{{ ip[item.name].address_ipv6 }},{{ ip[item.name].address_ipv4 }}'
+        persistent-keepalive 5
         {% if item.endpoint %}
         endpoint '{{ item.endpoint }}'
         {% endif %}
       loop: "{{ peers }}"
+    - name: wg delete peers
+      shell: >
+        wg set {{ wireguard.interface }}
+        peer {{ item }} remove
+      loop:
+        - 9/dBUlO9TGf0H9M3xwPiuIuz6Q/u7fSJVZaUxqAiqi8=
+      ignore_errors: yes
     - name: save wg config
-      shell: wg-quick save duconet-wg
+      shell: wg-quick save {{ wireguard.interface }}
@@ -1,6 +1,5 @@
 wireguard:
   listen_port: 51821
-  allowed_ips: fd00:fd41:d0f1:1010::0/64
   interface: duconet-wg
 peers:
   - name: yellow
@@ -24,25 +23,39 @@ peers:
   - name: nextcloud-aio
     public_key: G4L1WGm9nIwaw2p6oZqT4W7+ekoziCePrjI8AFwXHTw=
     endpoint: nextcloud-aio.reeselink.com:51821
+  - name: pivpn
+    public_key: mhrhD+orgevCKJyf28KMvzHGy+0LAmNomAN1XcwjrUI=
+    endpoint: pivpn.reeselink.com:51821
 ip:
   yellow:
-    address: fd00:fd41:d0f1:1010::1
+    address_ipv6: fd00:fd41:d0f1:1010::1
+    address_ipv4: 10.180.238.1
     hostname: yellow
   node1:
-    address: fd00:fd41:d0f1:1010::3
+    address_ipv6: fd00:fd41:d0f1:1010::3
+    address_ipv4: 10.180.238.3
     hostname: node1
   node2:
-    address: fd00:fd41:d0f1:1010::4
+    address_ipv6: fd00:fd41:d0f1:1010::4
+    address_ipv4: 10.180.238.4
     hostname: node2
   node3:
-    address: fd00:fd41:d0f1:1010::5
+    address_ipv6: fd00:fd41:d0f1:1010::5
+    address_ipv4: 10.180.238.5
     hostname: node3
   driveripper:
-    address: fd00:fd41:d0f1:1010::6
+    address_ipv6: fd00:fd41:d0f1:1010::6
+    address_ipv4: 10.180.238.6
     hostname: driveripper
   unifi-external:
-    address: fd00:fd41:d0f1:1010::7
+    address_ipv6: fd00:fd41:d0f1:1010::7
+    address_ipv4: 10.180.238.7
     hostname: unifi-external
   nextcloud-aio:
-    address: fd00:fd41:d0f1:1010::8
+    address_ipv6: fd00:fd41:d0f1:1010::8
+    address_ipv4: 10.180.238.8
     hostname: nextcloud-aio
+  pivpn:
+    address_ipv6: fd00:fd41:d0f1:1010::9
+    address_ipv4: 10.180.238.9
+    hostname: pivpn
metallb/addresspool.yaml (new file, 25 lines)
@@ -0,0 +1,25 @@
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: external
+  namespace: kube-system
+spec:
+  addresses:
+    - 2600:1700:1e6c:a81f:bee:bee:bee::/112
+    - 10.1.240.0-10.1.244.254
+---
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: internal
+  namespace: kube-system
+spec:
+  addresses:
+    - 2600:1700:1e6c:a81f:cafe:cafe:cafe::/112
+    - 10.1.245.0-10.1.250.254
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: external
+  namespace: kube-system
@@ -1,30 +0,0 @@
-apiVersion: metallb.io/v1beta1
-kind: IPAddressPool
-metadata:
-  name: production
-  namespace: metallb
-spec:
-  addresses:
-    - 10.1.2.100/32
-
----
-
-apiVersion: metallb.io/v1beta1
-kind: IPAddressPool
-metadata:
-  name: nginx
-  namespace: metallb
-spec:
-  addresses:
-    - 10.1.2.101/32
-
----
-
-apiVersion: metallb.io/v1beta1
-kind: IPAddressPool
-metadata:
-  name: productionv6
-  namespace: metallb
-spec:
-  addresses:
-    - 2600:1700:1e6c:a81f:aaaa::1/64
@@ -1,30 +0,0 @@
-apiVersion: metallb.io/v1beta1
-kind: L2Advertisement
-metadata:
-  name: production
-  namespace: metallb
-spec:
-  ipAddressPools:
-    - production
-
----
-
-apiVersion: metallb.io/v1beta1
-kind: L2Advertisement
-metadata:
-  name: nginx
-  namespace: metallb
-spec:
-  ipAddressPools:
-    - nginx
-
----
-
-apiVersion: metallb.io/v1beta1
-kind: L2Advertisement
-metadata:
-  name: productionv6
-  namespace: metallb
-spec:
-  ipAddressPools:
-    - productionv6
network/README.md (new file, 19 lines)
@@ -0,0 +1,19 @@
+# Systemd Networkd
+
+## IPV6 EUI64 Address Generation
+
+This will ensure a static IPv6 address that is based on your MAC address.
+
+You can tell if your IPv6 address is EUI64 if it has ff:fe in between the 6th and 7th number.
+
+### NetworkManager
+
+(Fedora Server, Raspberry Pi, Debian)
+
+```bash
+nmcli connection show --active
+nmcli -f ipv6.addr-gen-mode connection show <connection>
+nmcli con mod <connection> ipv6.addr-gen-mode eui64
+systemctl restart NetworkManager
+nmcli -f ipv6.addr-gen-mode connection show <connection>
+```
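The commands above cover NetworkManager hosts. For machines managed by systemd-networkd instead, a minimal sketch of an equivalent drop-in is below — the interface name `end0` is only an example, and `Token=eui64` under `[IPv6AcceptRA]` assumes a reasonably recent systemd (older releases use `IPv6Token=` under `[Network]` for the same effect):

```conf
# /etc/systemd/network/10-end0.network (example name and interface)
[Match]
Name=end0

[Network]
DHCP=yes
IPv6AcceptRA=yes
# EUI64 link-local address derived from the MAC
IPv6LinkLocalAddressGenerationMode=eui64

[IPv6AcceptRA]
# EUI64 interface identifier for the SLAAC (global) address as well
Token=eui64
```

Apply with `networkctl reload` (or restart `systemd-networkd`) and confirm with `networkctl status end0`.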
@@ -1,71 +1,40 @@
 # Nextcloud AIO
 
 - [Nextcloud AIO](#nextcloud-aio)
-  - [Prereq](#prereq)
-    - [Iscsi](#iscsi)
   - [Setup](#setup)
-    - [IPV6 (Optional)](#ipv6-optional)
+  - [Install Docker](#install-docker)
+    - [IPV6](#ipv6)
     - [Install](#install)
+  - [Backups](#backups)
   - [Trusted Proxy](#trusted-proxy)
+  - [Default phone region](#default-phone-region)
+  - [Adding existing files](#adding-existing-files)
+  - [Theming](#theming)
+  - [Changing the domain](#changing-the-domain)
   - [Uninstall](#uninstall)
   - [Edit QCOW](#edit-qcow)
 
-## Prereq
+<https://github.com/nextcloud/all-in-one>
 
-1. Have a reverse proxy pointing at your server
-2. Have a valid certificate
-
-### Iscsi
-
-We can use iscsi to give nextcloud a large block store.
-
-On Nextcloud:
-
-1. Create an iscsi initiator in Truenas named `iqn.2024-02.com.reeselink:nextcloud-aio`.
-2. Create a new authorized access with username `iqn.2024-02.com.reeselink:nextcloud-aio` and password.
-3. Create a new target called `iqn.2024-02.com.reeselink:nextcloud-aio-data`
-   1. Create an alias called `Nextcloud AIO Data`
-   2. Select the exposed portal
-   3. Select the `iqn.2024-02.com.reeselink:nextcloud-aio` initiator group
-   4. Select CHAP Auth method
-   5. Select auth group containing `iqn.2024-02.com.reeselink:nextcloud-aio`
-4. Create a new extent called `nextcloud-aio-data`, leave all settings alone
-5. Create a new associated target with `iqn.2024-02.com.reeselink:nextcloud-aio-data` target and
-   `nextcloud-aio-data` extent
-
-On Nextcloud AIO:
-
-Edit `/etc/iscsi/initiatorname.iscsi` and set your initiatorname.
-
-Edit `/etc/iscsi/iscsid.conf`
-
-```conf
-node.session.auth.username = iqn.2024-02.com.reeselink:nextcloud-aio
-node.session.auth.password = <password>
-```
-
-```bash
-# Discover targets
-iscsiadm -m discovery -t st -p driveripper.reeselink.com
-# Login to the nextcloud-data target
-iscsiadm -m node -T iqn.2023-01.driveripper.reeselink.com:nextcloud-aio-data -l
-# Automatically login on startup
-iscsiadm -m node -T iqn.2023-01.driveripper.reeselink.com:nextcloud-aio-data -o update -n node.startup -v automatic
-```
-
 ## Setup
 
-### IPV6 (Optional)
+## Install Docker
 
-Add to /etc/docker/daemon.json
+<https://docs.docker.com/engine/install/fedora/#set-up-the-repository>
 
-```json
+### IPV6
+
+```bash
+cat <<EOF > /etc/docker/daemon.json
 {
   "ipv6": true,
   "fixed-cidr-v6": "fd12:3456:789a:1::/64",
   "experimental": true,
   "ip6tables": true
 }
+EOF
+systemctl restart docker
+systemctl enable --now docker
 ```
 
 ```bash
@@ -76,6 +45,23 @@ docker network create --subnet="fd12:3456:789a:2::/64" --driver bridge --ipv6 ne
 ### Install
 
 ```bash
+# Default
+# Note: this puts all your nextcloud data in /nextcloud
+docker run \
+  --init \
+  --sig-proxy=false \
+  --name nextcloud-aio-mastercontainer \
+  --restart always \
+  --publish 80:80 \
+  --publish 8080:8080 \
+  --publish 8443:8443 \
+  --env NEXTCLOUD_DATADIR="/nextcloud" \
+  --env NEXTCLOUD_MEMORY_LIMIT=8192M \
+  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
+  --volume /var/run/docker.sock:/var/run/docker.sock:ro \
+  nextcloud/all-in-one:latest
+
+# Reverse Proxy
 docker run \
   --init \
   --sig-proxy=false \
@@ -83,24 +69,64 @@ docker run \
   --restart always \
   --publish 8080:8080 \
   --env APACHE_PORT=11000 \
-  --env APACHE_IP_BINDING=0.0.0.0 \
+  --env APACHE_IP_BINDING="::" \
   --env NEXTCLOUD_DATADIR="/mnt/ncdata" \
+  --env NEXTCLOUD_MEMORY_LIMIT=8192M \
   --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
   --volume /var/run/docker.sock:/var/run/docker.sock:ro \
   nextcloud/all-in-one:latest
 ```
 
+## Backups
+
+IMPORTANT: you will need both KEY AND PASSPHRASE to access this repo!
+If you used a repokey mode, the key is stored in the repo, but you should back it up separately.
+Use "borg key export" to export the key, optionally in printable format.
+Write down the passphrase. Store both at safe place(s).
+
+```bash
+docker exec nextcloud-aio-borgbackup borg key export /mnt/borgbackup/borg/
+```
+
 ## Trusted Proxy
 
+If running with a reverse proxy.
+
 ```bash
 docker exec --user www-data -it nextcloud-aio-nextcloud php occ config:system:set trusted_proxies 2 --value="10.1.0.0/16"
+docker exec --user www-data -it nextcloud-aio-nextcloud php occ config:system:set trusted_proxies 3 --value="fd00:fd41:d0f1:1010::/64"
 ```
 
+## Default phone region
+
+```bash
+docker exec --user www-data -it nextcloud-aio-nextcloud php occ config:system:set default_phone_region --value="US"
+```
+
+## Adding existing files
+
+```bash
+docker exec --user www-data -it nextcloud-aio-nextcloud php occ files:scan --path=ducoterra/files
+```
+
+## Theming
+
+Red: `#B30000`
+
+## Changing the domain
+
+```bash
+docker run -it --rm --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config:rw alpine sh -c "apk add --no-cache nano && nano /mnt/docker-aio-config/data/configuration.json"
 ```
 
 ## Uninstall
 
 ```bash
 docker stop $(docker ps -a -q)
-docker system prune
+docker container prune
+
+# DANGER ZONE
+# This deletes all your data
 docker volume prune -a -f
 ```
 
@@ -42,6 +42,10 @@ stream {
         {{ item.external.domain }}{{ expose_tld }} unix:/var/lib/nginx/tmp/nginx_https.sock;
         {% endfor %}
 
+        {% for item in forward %}
+        {{ item.domain }}{{ expose_tld }} {{ item.ip }};
+        {% endfor %}
+
         # By default forward to our internal nginx server (probably kubernetes)
         default {{ defaults.forward_ip }};
     }
@@ -1,7 +1,5 @@
 upstream kube_backend {
     server 10.1.2.13:6443 max_fails=2 fail_timeout=30s;
-    server 10.1.2.14:6443 max_fails=2 fail_timeout=30s;
-    server 10.1.2.15:6443 max_fails=2 fail_timeout=30s;
 }
 
 server {
@@ -9,13 +9,14 @@ defaults:
   internal_https_port: 443
   internal_ipv4_regex:
     - "10.1.*"
-    - "192.168.4.*"
-    - "192.168.5.*"
-    - "192.168.6.*"
   internal_ipv6_regex:
     - "2600:1700:1e6c:a81f.*"
   expose_tld: .reeseapps.com
 
+forward:
+  - domain: nextcloud
+    ip: fd00:fd41:d0f1:1010::8
+
 http:
   - external:
       domain: homeassistant
@@ -33,7 +34,7 @@ http:
       extra_http_ports: []
       extra_https_ports: []
     internal:
-      ip: "10.1.2.10"
+      ip: "[fd00:fd41:d0f1:1010::6]"
       port: 8443
       protocol: https
 
@@ -53,7 +54,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.203.197"
+      ip: "[fd00:fd41:d0f1:1010::1]"
      port: 9090
      protocol: https
 
@@ -63,7 +64,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.2.13"
+      ip: "[fd00:fd41:d0f1:1010::3]"
      port: 9090
      protocol: https
 
@@ -73,7 +74,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.2.14"
+      ip: "[fd00:fd41:d0f1:1010::4]"
      port: 9090
      protocol: https
 
@@ -83,7 +84,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.2.15"
+      ip: "[fd00:fd41:d0f1:1010::5]"
      port: 9090
      protocol: https
 
@@ -93,7 +94,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.241.139"
+      ip: "[fd00:fd41:d0f1:1010::7]"
      port: 9090
      protocol: https
 
@@ -103,7 +104,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.175.237"
+      ip: "[fd00:fd41:d0f1:1010::8]"
      port: 9090
      protocol: https
 
@@ -128,7 +129,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: "10.1.203.197"
+      ip: "10.180.238.1"
      port: 8081
      protocol: http
 
@@ -148,7 +149,7 @@ http:
      extra_http_ports: []
      extra_https_ports: []
    internal:
-      ip: 10.1.175.237
+      ip: "[fd00:fd41:d0f1:1010::8]"
      port: 11000
      protocol: http
 
@@ -159,6 +160,16 @@ http:
        - 8080
      extra_https_ports: []
    internal:
-      ip: 10.1.241.139
+      ip: "[fd00:fd41:d0f1:1010::7]"
      port: 8443
      protocol: https
 
+  - external:
+      domain: pivpn
+      restricted: true
+      extra_http_ports: []
+      extra_https_ports: []
+    internal:
+      ip: "[fd00:fd41:d0f1:1010::9]"
+      port: 9090
+      protocol: https
@@ -6,7 +6,6 @@
 - [iperf3](#iperf3)
 - [pihole](#pihole)
 - [Cloudflared](#cloudflared)
-- [WG Easy (Deprecated - use Unifi)](#wg-easy-deprecated---use-unifi)
 - [Update yellow quadlets](#update-yellow-quadlets)
 
 ## Notes
@@ -85,44 +84,6 @@ podman run \
   compose /compose/cloudflared-compose.yaml
 ```
 
-### WG Easy (Deprecated - use Unifi)
-
-PASSWORD and PASSWORD_HASH env vars didn't work.
-
-<https://github.com/wg-easy/wg-easy>
-
-Note, to create PASSWORD_HASH run:
-
-```bash
-python -c 'import bcrypt; print(bcrypt.hashpw(b"testpass", bcrypt.gensalt()).decode())'
-```
-
-```bash
-podman run \
-  -v ./podman/quadlets:/quadlets \
-  quay.io/k9withabone/podlet \
-  -f /quadlets \
-  -i \
-  --overwrite \
-  --wants network-online.target \
-  --after network-online.target \
-  --name=wg-easy \
-  podman run \
-  -e LANG=en \
-  -e WG_HOST=wg.reeseapps.com \
-  -e PORT=51821 \
-  -e WG_PORT=51820 \
-  -v wg-easy:/etc/wireguard \
-  -p 51820:51820/udp \
-  -p 51822:51821/tcp \
-  --secret wg_easy_password,type=env,target=PASSWORD_HASH \
-  --cap-add=NET_ADMIN \
-  --cap-add=SYS_MODULE \
-  --cap-add=NET_RAW \
-  --restart unless-stopped \
-  ghcr.io/wg-easy/wg-easy:nightly
-```
-
 ## Update yellow quadlets
 
 ```bash
@@ -7,6 +7,8 @@ services:
     image: docker.io/cloudflare/cloudflared:2024.5.0
     command: proxy-dns --address 0.0.0.0 --port 5053 --upstream https://1.1.1.1/dns-query --upstream https://1.0.0.1/dns-query
     ports:
-      - "0.0.0.0:5053:5053/tcp"
-      - "0.0.0.0:5053:5053/udp"
+      - "5053:5053/tcp"
+      - "5053:5053/udp"
     restart: unless-stopped
+    networks:
+      - podman1
@@ -5,6 +5,8 @@ services:
     container_name: iperf3
     image: docker.io/networkstatic/iperf3:latest
     ports:
-      - "0.0.0.0:5202:5201/tcp"
+      - "5202:5201/tcp"
     command: -s
     restart: unless-stopped
+    networks:
+      - podman1
@@ -6,9 +6,9 @@ services:
     container_name: pihole
     image: docker.io/pihole/pihole:2024.05.0
     ports:
-      - "0.0.0.0:53:53/tcp"
-      - "0.0.0.0:53:53/udp"
-      - "0.0.0.0:8081:80/tcp"
+      - "53:53/tcp"
+      - "53:53/udp"
+      - "8081:80/tcp"
     environment:
       TZ: "America/Chicago"
       # WEBPASSWORD: "SET A PASSWORD HERE"
@@ -17,6 +17,8 @@ services:
       - pihole:/etc/pihole
       - dnsmasq:/etc/dnsmasq.d
     restart: unless-stopped
+    networks:
+      - podman1
 
 volumes:
   pihole:
@@ -5,6 +5,7 @@ Wants=network-online.target
 ContainerName=cloudflared
 Exec=proxy-dns --address 0.0.0.0 --port 5053 --upstream https://1.1.1.1/dns-query --upstream https://1.0.0.1/dns-query
 Image=docker.io/cloudflare/cloudflared:2024.5.0
+Network=podman1.network
 PublishPort=0.0.0.0:5053:5053/tcp
 PublishPort=0.0.0.0:5053:5053/udp
 
@@ -2,6 +2,7 @@
 ContainerName=iperf3
 Exec=-s
 Image=docker.io/networkstatic/iperf3:latest
+Network=podman1.network
 PublishPort=0.0.0.0:5202:5201/tcp
 
 [Service]
@@ -5,6 +5,7 @@ Wants=network-online.target
 ContainerName=pihole
 Environment=TZ=America/Chicago
 Image=docker.io/pihole/pihole:2024.05.0
+Network=podman1.network
 PublishPort=0.0.0.0:53:53/tcp
 PublishPort=0.0.0.0:53:53/udp
 PublishPort=0.0.0.0:8081:80/tcp
podman/quadlets/podman1.network (new file, 3 lines)
@@ -0,0 +1,3 @@
+# podman1.network
+[Network]
+IPv6=true
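Once this quadlet is in place and systemd has re-read it (the playbook change below handles the daemon-reload and restart), the generated unit can be checked; a quick sketch — note that podman normally names a quadlet-defined network with a `systemd-` prefix unless `NetworkName=` is set:

```bash
systemctl status podman1-network.service
podman network ls
```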
@@ -15,6 +15,7 @@
         - ./quadlets/iperf3.container
         - ./quadlets/pihole.container
         - ./quadlets/cloudflared.container
+        - ./quadlets/podman1.network
     - name: Daemon-reload to trigger re-read of quadlets
       ansible.builtin.systemd_service:
         daemon_reload: true
@@ -22,4 +23,4 @@
       ansible.builtin.systemd_service:
         state: restarted
         name: "{{ item }}"
-      loop: ["pihole", "iperf3", "cloudflared"]
+      loop: ["podman1-network", "pihole", "iperf3", "cloudflared"]
@@ -1,31 +0,0 @@
-# Shell
-
-## ZSH
-
-```bash
-# Install git before running
-sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
-```
-
-Available prompt colors are red, blue, green, cyan, yellow, magenta, black, & white.
-
-~/.zshrc
-
-```bash
-cat << EOF > ~/.zshrc
-export ZSH="\$HOME/.oh-my-zsh"
-plugins=(git)
-source \$ZSH/oh-my-zsh.sh
-
-autoload bashcompinit && bashcompinit
-autoload -U compinit; compinit
-
-autoload -Uz promptinit
-promptinit
-prompt fade
-EOF
-```
-
-```bash
-chsh -s $(which zsh) && chsh -s $(which zsh) ducoterra
-```
truenas/README.md (new file, 369 lines)
@@ -0,0 +1,369 @@
+# Truenas
+
+- [Truenas](#truenas)
+  - [Bios settings](#bios-settings)
+  - [Archiving](#archiving)
+  - [Deleting snapshots](#deleting-snapshots)
+  - [But First, ZFS on RPi](#but-first-zfs-on-rpi)
+    - [Pi Setup](#pi-setup)
+  - [Datasets, Snapshots, and Encryption](#datasets-snapshots-and-encryption)
+    - [Migrating encrypted pools](#migrating-encrypted-pools)
+    - [Migrating Properties](#migrating-properties)
+    - [Backup Task Settings](#backup-task-settings)
+    - [Create and Destroy zfs Datasets](#create-and-destroy-zfs-datasets)
+    - [Create and send snapshots](#create-and-send-snapshots)
+    - [Cleaning up old snapshots](#cleaning-up-old-snapshots)
+  - [VMs](#vms)
+    - [Converting zvol to qcow2](#converting-zvol-to-qcow2)
+  - [Tunables](#tunables)
+    - [Core](#core)
+    - [Scale](#scale)
+      - [ARC Limit](#arc-limit)
+  - [Certs](#certs)
+  - [Testing](#testing)
+    - [iperf](#iperf)
+    - [disk](#disk)
+  - [disk health](#disk-health)
+  - [Dead Disks](#dead-disks)
+  - [Corrupted data](#corrupted-data)
+
+## Bios settings
+
+These are my recommended settings that seem stable and allow GPU passthrough
+
+1. Memory 3200mhz, fabric 1600mhz
+2. AC Power - On
+3. SVM - On
+4. IOMMU - On (Do not touch rebar or other pci encoding stuff)
+5. Fans 100%
+6. Initial video output: pci 3
+7. PCIE slot 1 bifurcation: 4x4x4x4
+8. Disable CSM
+9. Fast Boot Enabled
+
+## Archiving
+
+1. Create a recursive snapshot called "archive_pool_year_month_day"
+2. Create a replication task called "archive_pool_year_month_day"
+
+   - select all datasets you want to backup
+   - fill in enc0/archives/archive-year-month-day_hour-minute
+   - full filesystem replication
+   - select "Matching naming schema"
+   - Use `archive-%Y-%m-%d_%H-%M`
+   - Deselect run automatically
+   - Save and run
+
+## Deleting snapshots
+
+Sometimes you need to delete many snapshots from a certain dataset. The UI is terrible for this, so
+we need to use `zfs destroy`. xargs is the best way to do this since it allows parallel processing.
+
+```bash
+# zfs list snapshots with:
+# -o name: only print the name
+# -S creation: sort by creation time
+# -H: don't display headers
+# -r: recurse through every child dataset
+zfs list -t snapshot enc0/archives -o name -S creation -H -r
+
+# pipe it through xargs with:
+# -n 1: take only 1 argument from the pipe per command
+# -P 8: eight parallel processes
+# Also pass to zfs destroy:
+# -v: verbose
+# -n: dryrun
+zfs list -t snapshot enc0/archives -o name -S creation -H -r | xargs -n 1 -P 8 zfs destroy -v -n
+
+# if that looks good you can remove the "-n"
+zfs list -t snapshot enc0/archives -o name -S creation -H -r | xargs -n 1 -P 8 zfs destroy -v
+```
+
+## But First, ZFS on RPi
+
+A really good backup server is an RPi running openzfs. See [the openzfs docs](https://openzfs.github.io/openzfs-docs/Getting%20Started/Ubuntu/Ubuntu%2020.04%20Root%20on%20ZFS%20for%20Raspberry%20Pi.html#step-2-setup-zfs) for more info.
+
+### Pi Setup
+
+Add the vault ssh CA key to your pi.
+
+```bash
+curl -o /etc/ssh/trusted-user-ca-keys.pem https://vault.ducoterra.net/v1/ssh-client-signer/public_key
+
+echo "TrustedUserCAKeys /etc/ssh/trusted-user-ca-keys.pem" >> /etc/ssh/sshd_config
+
+service ssh restart
+```
+
+Create a pi user.
+
+```bash
+adduser pi
+usermod -a -G sudo pi
+```
+
+SSH to the pi as the "pi" user. Delete the ubuntu user.
+
+```bash
+killall -u ubuntu
+userdel -r ubuntu
+```
+
+Disable SSH password authentication
+
+```bash
+sed -i 's/PasswordAuthentication yes/PasswordAuthentication no/g' /etc/ssh/sshd_config
+service ssh restart
+```
+
+Change the hostname.
+
+```bash
+echo pi-nas > /etc/hostname
+```
+
+Upgrade and restart the pi.
+
+```bash
+apt update && apt upgrade -y && apt autoremove -y
+reboot
+```
+
+Install ZFS.
+
+```bash
+apt install -y pv zfs-initramfs
+```
+
+Find the disks you want to use to create your pool
+
+```bash
+fdisk -l
+```
+
+Create a pool.
+
+```bash
+mkdir -p /mnt/backup
+zpool create \
+    -o ashift=12 \
+    -O acltype=posixacl -O canmount=off -O compression=lz4 \
+    -O dnodesize=auto -O normalization=formD -O relatime=on \
+    -O xattr=sa -O mountpoint=/mnt/backup \
+    backup ${DISK}
+```
+
+## Datasets, Snapshots, and Encryption
+
+### Migrating encrypted pools
+
+Since you can't use `-R` to send encrypted datasets recursively you'll need to use more creative tactics. Here's my recommendation:
+
+1. Save the datasets from a pool to a text file:
+
+   ```bash
+   zfs list -r -o name <pool> > pool_datasets.txt
+   ```
+
+2. Next, remove the prefix of the source pool from the list of datasets. Also remove the source pool itself as well as any duplicate pools in the receiving dataset.
+3. Now, run a command like the following:
+
+   ```bash
+   for i in $(cat nvme_pools.txt); do zfs send -v nvme/$i@manual-2021-10-03_22-34 | zfs recv -x encryption enc0/$i; done
+   ```
+
|
||||||
|
|
||||||
|
If you need to migrate your dataset comments you can use the following bash to automate the task.
|
||||||
|
|
||||||
|
```bash
|
||||||
|
for i in $(zfs list -H -d 1 -o name backup/nvme/k3os-private); do read -r name desc < <(zfs list -H -o name,org.freenas:description $i) && pvc=$(echo "$name" | awk -F "/" '{print $NF}') && zfs set org.freenas:description=$desc enc1/k3os-private/$pvc; done
|
||||||
|
```
|
||||||
|
|
||||||
|
### Backup Task Settings
|
||||||
|
|
||||||
|
| Key | Value |
|
||||||
|
| ------------------------------------ | --------------------- |
|
||||||
|
| Destination Dataset Read-only Policy | SET |
|
||||||
|
| Recursive | true |
|
||||||
|
| Snapshot Retention Policy | Same as Source |
|
||||||
|
| Include Dataset Properties | true |
|
||||||
|
| Periodic Snapshot Tasks | <daily-snapshot-task> |
|
||||||
|
|
||||||
|
### Create and Destroy zfs Datasets
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# Create a pool
|
||||||
|
zpool create rpool /dev/disk/by-id/disk-id
|
||||||
|
|
||||||
|
# Add a cache disk
|
||||||
|
zpool add backup cache /dev/sda
|
||||||
|
|
||||||
|
# Enable encryption
|
||||||
|
zpool set feature@encryption=enabled rpool
|
||||||
|
|
||||||
|
# Create a dataset
|
||||||
|
zfs create rpool/d1
|
||||||
|
|
||||||
|
# Create an encrypted dataset
|
||||||
|
zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase rpool/d1
|
||||||
|
|
||||||
|
# Delete a dataset
|
||||||
|
zfs destroy rpool/d1
|
||||||
|
```
|
||||||
|
|
||||||
|
### Create and send snapshots
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# snapshot pool and all children
|
||||||
|
zfs snapshot -r dataset@now
|
||||||
|
|
||||||
|
# send all child snapshots
|
||||||
|
zfs send -R dataset@snapshot | zfs recv dataset
|
||||||
|
|
||||||
|
# use the -w raw flag to send encrypted snapshots
|
||||||
|
zfs send -R -w dataset@snapshot | zfs recv dataset
|
||||||
|
```
|
||||||
|
|
||||||
|
### Cleaning up old snapshots
|
||||||
|
|
||||||
|
```bash
|
||||||
|
wget https://raw.githubusercontent.com/bahamas10/zfs-prune-snapshots/master/zfs-prune-snapshots
|
||||||
|
```
|
||||||
|
|
||||||
|
## VMs
|
||||||
|
|
||||||
|
1. Force UEFI installation
|
||||||
|
2. `cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/BOOT/bootx64.efi`
|
||||||
|
|
||||||
|
### Converting zvol to qcow2
|
||||||
|
|
||||||
|
```bash
|
||||||
|
dd if=/dev/zvol/enc1/vms/unifi-e373f of=unifi.raw
|
||||||
|
qemu-img convert -f raw -O qcow2 unifi.raw unifi.qcow2
|
||||||
|
```
|
||||||
|
|
||||||
|
## Tunables
|
||||||
|
|
||||||
|
### Core
|
||||||
|
|
||||||
|
```bash
|
||||||
|
sysctl kern.ipc.somaxconn=2048
|
||||||
|
sysctl kern.ipc.maxsockbuf=16777216
|
||||||
|
sysctl net.inet.tcp.recvspace=4194304
|
||||||
|
sysctl net.inet.tcp.sendspace=2097152
|
||||||
|
sysctl net.inet.tcp.sendbuf_max=16777216
|
||||||
|
sysctl net.inet.tcp.recvbuf_max=16777216
|
||||||
|
sysctl net.inet.tcp.sendbuf_auto=1
|
||||||
|
sysctl net.inet.tcp.recvbuf_auto=1
|
||||||
|
sysctl net.inet.tcp.sendbuf_inc=16384
|
||||||
|
sysctl net.inet.tcp.recvbuf_inc=524288
|
||||||
|
sysctl vfs.zfs.arc_max=34359738368 # set arc size to 32 GiB to prevent eating VMs
|
||||||
|
loader vm.kmem_size=34359738368 # set kmem_size to 32 GiB to force arc_max to apply
|
||||||
|
loader vm.kmem_size_max=34359738368 # set kmem_size_max to 32 GiB to sync with kmem_size
|
||||||
|
```
|
||||||
|
|
||||||
|
Nic options: "mtu 9000 rxcsum txcsum tso4 lro"
|
||||||
|
|
||||||
|
### Scale
|
||||||
|
|
||||||
|
#### ARC Limit
|
||||||
|
|
||||||
|
Create an Init/Shutdown Script of type `Command` with the following:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
echo 34359738368 >> /sys/module/zfs/parameters/zfs_arc_max
|
||||||
|
```
|
||||||
|
|
||||||
|
Set `When` to `Pre Init`.
|
||||||
|
|
||||||
|
## Certs
|
||||||
|
|
||||||
|
<https://raymondc.net/2018/02/28/using-freenas-as-your-ca.html>
|
||||||
|
|
||||||
|
1. Create a new Root certificate (CAs -> ADD -> Internal CA)
|
||||||
|
- Name: Something_Root
|
||||||
|
- Key Length: 4096
|
||||||
|
- Digest: SHA512
|
||||||
|
- Lifetime: 825 (Apple's new requirement)
|
||||||
|
- Extend Key Usage: Server Auth
|
||||||
|
- Common Name: Something Root CA
|
||||||
|
- Subject Alternate Names:
|
||||||
|
2. Create a new intermediate certificate (CAs -> Add -> Intermediate CA)
|
||||||
|
- Name: Something_Intermediate_CA
|
||||||
|
- Key Length: 4096
|
||||||
|
- Digest: SHA512
|
||||||
|
- Lifetime: 825 (Apple's new requirement)
|
||||||
|
- Extend Key Usage: Server Auth
|
||||||
|
3. Create a new Certificate (Certificates -> Add -> Internal Certificate)
|
||||||
|
- Name: Something_Certificate
|
||||||
|
- Key Length: 4096
|
||||||
|
- Digest: SHA512
|
||||||
|
- Lifetime: 825 (Apple's new requirement)
|
||||||
|
- Extend Key Usage: Server Auth
|
||||||
|
|
||||||
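Once the root, intermediate, and server certificates have been exported from the UI, the chain can be sanity-checked with openssl; the file names below are only examples matching the names used above:

```bash
openssl verify -CAfile Something_Root.crt -untrusted Something_Intermediate_CA.crt Something_Certificate.crt
openssl x509 -in Something_Certificate.crt -noout -subject -dates
```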
|
## Testing
|
||||||
|
|
||||||
|
### iperf
|
||||||
|
|
||||||
|
```bash
|
||||||
|
iperf3 -c mainframe -P 4
|
||||||
|
iperf3 -c mainframe -P 4 -R
|
||||||
|
|
||||||
|
iperf3 -c pc -P 4
|
||||||
|
iperf3 -c pc -P 4 -R
|
||||||
|
```
|
||||||
|
|
||||||
|
### disk
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# write 16GB to disk
|
||||||
|
dd if=/dev/zero of=/tmp/test bs=1024k count=16000
|
||||||
|
# divide result by 1000^3 to get GB/s
|
||||||
|
|
||||||
|
# read 16GB from disk
|
||||||
|
dd if=/tmp/test of=/dev/null bs=1024k
|
||||||
|
# divide result by 1000^3 to get GB/s
|
||||||
|
```
|
||||||
|
|
||||||
|
## disk health
|
||||||
|
|
||||||
|
<https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/internal-drives/wd-black-ssd/product-brief-wd-black-sn750-nvme-ssd.pdf>
|
||||||
|
|
||||||
|
```bash
|
||||||
|
# HDD
|
||||||
|
smartctl -a /dev/ada1 | grep "SMART Attributes" -A 18
|
||||||
|
|
||||||
|
# NVME
|
||||||
|
smartctl -a /dev/nvme1 | grep "SMART/Health Information" -A 17
|
||||||
|
```
|
||||||
|
|
||||||
|
## Dead Disks
|
||||||
|
|
||||||
|
```bash
|
||||||
|
=== START OF INFORMATION SECTION ===
|
||||||
|
Model Family: Western Digital Black
|
||||||
|
Device Model: WDC WD2003FZEX-00Z4SA0
|
||||||
|
Serial Number: WD-WMC5C0D6PZYZ
|
||||||
|
LU WWN Device Id: 5 0014ee 65a5a19fc
|
||||||
|
Firmware Version: 01.01A01
|
||||||
|
User Capacity: 2,000,398,934,016 bytes [2.00 TB]
|
||||||
|
Sector Sizes: 512 bytes logical, 4096 bytes physical
|
||||||
|
Rotation Rate: 7200 rpm
|
||||||
|
Device is: In smartctl database [for details use: -P show]
|
||||||
|
ATA Version is: ACS-2 (minor revision not indicated)
|
||||||
|
SATA Version is: SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
|
||||||
|
Local Time is: Sat Feb 13 18:31:57 2021 EST
|
||||||
|
SMART support is: Available - device has SMART capability.
|
||||||
|
SMART support is: Enabled
|
||||||
|
```
|
||||||
|
|
||||||
|
## Corrupted data
|
||||||
|
|
||||||
|
One or more devices has experienced an error resulting in data corruption. Applications may be affected.
|
||||||
|
|
||||||
|
To get a list of affected files run:
|
||||||
|
|
||||||
|
```bash
|
||||||
|
zpool status -v
|
||||||
|
```
|
||||||
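After the affected files have been restored from a snapshot or deleted, the error counters can be cleared and the pool re-verified; a short sketch (substitute the affected pool for `enc0`):

```bash
zpool clear enc0
zpool scrub enc0
zpool status enc0
```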
vpn/README.md (new file, 25 lines)
@@ -0,0 +1,25 @@
+# VPN
+
+## PiVPN
+
+<https://www.pivpn.io/>
+
+1. You'll need to configure systemd-networkd to not use ipv6 privacy extensions
+
+   /etc/systemd/network/05-end0.conf
+
+   ```conf
+   [Match]
+   Name=end0
+
+   [Network]
+   DHCP=yes
+   IPv6PrivacyExtensions=false
+   IPv6AcceptRA=true
+   ```
+
+2. Install pivpn
+
+   ```bash
+   curl -L https://install.pivpn.io | bash
+   ```