# Containers!

A project to store container-based hosting stuff.

## Platform

Before you begin, be sure to take a look at the [Fedora Server Config](FedoraServer.md) readme,
which explains how to set up a basic Fedora server hosting platform with certbot.
## K3S

### Install K3S

We're going to be tweaking some installation parameters, so if you already have k3s
installed, you can either uninstall it or skip these steps.
This installation disables traefik and local-storage (we don't really need either):

```bash
curl -sfL https://get.k3s.io | sh -s - \
  "--disable" \
  "traefik" \
  "--disable" \
  "local-storage"
```
Now you can change the ownership of (and copy) the k3s.yaml file:

```bash
chown ducoterra /etc/rancher/k3s/k3s.yaml

scp /etc/rancher/k3s/k3s.yaml ~/.kube/config
```

Edit `~/.kube/config` and change `127.0.0.1` to `containers.reeselink.com`.
### Database Backups

We're using SQLite (because it's all we really need). The db is stored at
`/var/lib/rancher/k3s/server/db/`. You can back up the database by copying that folder,
and restore it by copying it back. Note: you must also copy `/var/lib/rancher/k3s/server/token`
and use its contents as the token when restoring the backup, since the data is encrypted with that token.
### Nginx Ingress

Now we need an ingress solution (preferably with certs for https). We'll be using nginx, since
it's a little more configurable than traefik (though don't sell traefik short: it's really
good, just finicky when you have use cases they haven't explicitly coded for).
1. Install nginx

   ```bash
   helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
   helm repo update
   helm install \
     ingress-nginx \
     ingress-nginx/ingress-nginx \
     --namespace ingress-nginx \
     --create-namespace
   ```
2. Install cert-manager

   ```bash
   helm repo add jetstack https://charts.jetstack.io
   helm repo update
   helm install \
     cert-manager jetstack/cert-manager \
     --namespace cert-manager \
     --create-namespace \
     --version v1.11.0 \
     --set installCRDs=true
   ```
3. Create the Let's Encrypt issuer

   ```bash
   kubectl apply -f letsencrypt-issuer.yaml
   ```
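The `letsencrypt-issuer.yaml` manifest isn't shown in this readme; a minimal sketch of what it might contain, assuming HTTP-01 challenges via the nginx ingress class (the issuer name and email below are placeholders):

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```

Ingress resources then opt in to certificates via the `cert-manager.io/cluster-issuer` annotation, using whatever name the issuer was given.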
You can test whether your ingress is working with `kubectl apply -f ingress-nginx-test.yaml`,
then navigate to ingress-nginx-test.reeseapps.com.
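The contents of `ingress-nginx-test.yaml` aren't shown here; a hypothetical sketch of what such a TLS-enabled test ingress might look like — the issuer name, secret name, and backend service are placeholders:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-test
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - ingress-nginx-test.reeseapps.com
      secretName: ingress-nginx-test-tls
  rules:
    - host: ingress-nginx-test.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: some-test-service
                port:
                  number: 80
```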
### Storage

We'll be installing democratic-csi as our volume manager. Specifically, we'll be installing the
freenas-api-nfs driver. All configuration is stored in truenas-nfs.yaml.

The nfs driver will provision an nfs store owned by user 3000 (kube). You may have to create
that user on TrueNAS. The nfs share created will be world-read/write, so any user can write to
it. Users that write to the share have their uid/gid carried through to TrueNAS, so if user 33 writes
a file to the nfs share, it will show up as owned by user 33 on TrueNAS.
The iscsi driver will require a portal ID. This is NOT what is reflected in the UI. The most
reliable way (seriously) to get the real ID is to open the network monitor in the browser, reload
TrueNAS, find the websocket connection and click on it, create the portal, and click on the
server response. It'll look something like:

```json
{"msg": "added", "collection": "iscsi.portal.query", "id": 7, "fields": {"id": 7, "tag": 1, "comment": "democratic-csi", "listen": [{"ip": "172.20.0.1", "port": 3260}], "discovery_authmethod": "NONE", "discovery_authgroup": null}}
```
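To avoid eyeballing the payload, the real portal ID can be extracted with jq; a small sketch using a trimmed copy of the example response above:

```shell
# The UI shows a different number; the websocket payload's fields.id is the
# portal ID democratic-csi actually needs.
RESPONSE='{"msg": "added", "collection": "iscsi.portal.query", "id": 7, "fields": {"id": 7, "tag": 1, "comment": "democratic-csi"}}'
PORTAL_ID=$(printf '%s' "$RESPONSE" | jq '.fields.id')
echo "$PORTAL_ID"
```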
The initiator group IDs, on the other hand, seem to line up with the UI.
It's good practice to have separate hostnames for your share export and your TrueNAS server. This
way you can have a direct link without worrying about changing the user-facing hostname.
For example: your TrueNAS server might be driveripper.reeselink.com and your kube server might be
containers.reeselink.com. You should also have a democratic-csi-server.reeselink.com and a
democratic-csi-client-1.reeselink.com, which might be on 172.20.0.1 and 172.20.0.2.

https://github.com/democratic-csi/democratic-csi
ISCSI requires a bit of server config before proceeding:

```bash
# Install the following system packages
sudo dnf install -y lsscsi iscsi-initiator-utils sg3_utils device-mapper-multipath

# Enable multipathing
sudo mpathconf --enable --with_multipathd y

# Ensure that iscsid and multipathd are running
sudo systemctl enable iscsid multipathd
sudo systemctl start iscsid multipathd

# Start and enable iscsi
sudo systemctl enable iscsi
sudo systemctl start iscsi
```
And now you can install the drivers:

```bash
helm repo add democratic-csi https://democratic-csi.github.io/charts/
helm repo update

# enc0 bulk storage (nfs)
helm upgrade \
  --install \
  --values truenas-nfs-enc0.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-nfs-enc0 democratic-csi/democratic-csi

# enc1 fast storage (nfs)
helm upgrade \
  --install \
  --values truenas-nfs-enc1.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-nfs-enc1 democratic-csi/democratic-csi

# enc0 bulk storage (iscsi)
helm upgrade \
  --install \
  --values truenas-iscsi-enc0.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-iscsi-enc0 democratic-csi/democratic-csi

# enc1 fast storage (iscsi)
helm upgrade \
  --install \
  --values truenas-iscsi-enc1.yaml \
  --namespace democratic-csi \
  --create-namespace \
  zfs-iscsi-enc1 democratic-csi/democratic-csi
```
You can test that things worked with:

```bash
kubectl apply -f democratic-csi-pvc-test.yaml
```
### Nextcloud

The first chart we'll deploy is nextcloud.

```bash
helm upgrade --install \
  nextcloud \
  ./helm/nextcloud \
  --namespace nextcloud \
  --create-namespace
```

Need to copy lots of files? Copy them to the user data dir and then run

```bash
./occ files:scan --all
```
### Gitea

Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/).

```bash
helm repo add gitea-charts https://dl.gitea.io/charts/
helm repo update
helm upgrade --install \
  gitea \
  gitea-charts/gitea \
  --values gitea-values.yaml \
  --namespace gitea \
  --create-namespace
```
### Troubleshooting

Deleting a stuck namespace:

```bash
NAMESPACE=nginx
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize
```