# The great migration from TrueNAS to Fedora and all its collateral
- [K3S](#k3s)
  - [Guide](#guide)
  - [Firewalld](#firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
    - [Dual Stack IPv6 Support](#dual-stack-ipv6-support)
    - [Single Stack IPv4](#single-stack-ipv4)
  - [Kube Credentials](#kube-credentials)
  - [Storage](#storage)
  - [Coredns](#coredns)
  - [Metal LB](#metal-lb)
    - [VLAN Setup](#vlan-setup)
    - [Installation](#installation)
7. Install longhorn storage for automatic PVC creation and management
8. Set up automatic database backups
## Firewalld

<https://docs.k3s.io/advanced#red-hat-enterprise-linux--centos--fedora>

```bash
firewall-cmd --permanent --zone=public --add-port=6443/tcp # apiserver
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16 # pods
firewall-cmd --permanent --zone=trusted --add-source=fd02:c91e:56f4::/56 # pods
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16 # services
firewall-cmd --permanent --zone=trusted --add-source=fd02:c91e:56f5::/112 # services
firewall-cmd --reload
```
## Set SELinux to Permissive

K3S is more than capable of running with SELinux set to enforcing. We won't be doing
that, however. We'll set it to permissive; you can re-enable enforcing once you've
added all the rules you need to keep your services running.

Set SELinux to permissive by editing `/etc/selinux/config`:

```bash
SELINUX=permissive
```

Make sure to add `--selinux` to your install script.
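If you'd rather script the edit than open an editor, a `sed` one-liner does the same job. The sketch below runs against a temp copy so it can be tried anywhere; on the real host the target is `/etc/selinux/config`.

```shell
# Demo on a temp copy of the config; on a real host, target /etc/selinux/config
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"

# Flip only the SELINUX= line, leaving other settings untouched
sed -i 's/^SELINUX=.*/SELINUX=permissive/' "$cfg"

grep '^SELINUX=' "$cfg"
```

After a reboot (or `setenforce 0` for the running system), `getenforce` should report Permissive.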
## Install K3S (Single Node)

### Dual Stack IPv6 Support

```bash
curl -sfL https://get.k3s.io | sh -s - \
  "--disable" \
  "traefik" \
  "--disable" \
  "servicelb" \
  "--tls-san" \
  "k3s.reeselink.com" \
  "--flannel-ipv6-masq" \
  --kubelet-arg="node-ip=::" \
  "--cluster-cidr" \
  "10.42.0.0/16,fd02:c91e:56f4::/56" \
  "--service-cidr" \
  "10.43.0.0/16,fd02:c91e:56f5::/112" \
  "--cluster-dns" \
  "fd02:c91e:56f5::10" \
  --selinux
```
### Single Stack IPv4

```bash
curl -sfL https://get.k3s.io | sh -s - \
  "--disable" \
  "traefik" \
  "--disable" \
  "servicelb" \
  "--tls-san" \
  "k3s.reeselink.com" \
  --selinux
```
## Kube Credentials

On the operator:

```bash
export KUBE_SERVER_ADDRESS="https://k3s.reeselink.com:6443"

# Copy the kube config down, pointing its server at the remote address
ssh k3s cat /etc/rancher/k3s/k3s.yaml | \
  yq -y ".clusters[0].cluster.server = \"${KUBE_SERVER_ADDRESS}\"" > \
  ~/.kube/admin-kube-config
```
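If `yq` isn't on the operator machine, the same rewrite can be done with `sed`. This sketch uses an inline stand-in for `/etc/rancher/k3s/k3s.yaml` (k3s writes the loopback address there by default), so it runs anywhere.

```shell
KUBE_SERVER_ADDRESS="https://k3s.reeselink.com:6443"

# Stand-in for the kubeconfig pulled from the server
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the server field at the public name instead of loopback
sed -i "s|server: .*|server: ${KUBE_SERVER_ADDRESS}|" "$cfg"
grep 'server:' "$cfg"
```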
## Storage

1. `mkdir /var/lib/rancher/k3s/storage`
2. Edit fstab to mount your drive to `/var/lib/rancher/k3s/storage`
3. `systemctl daemon-reload`
4. `mount -a`

<https://github.com/rancher/local-path-provisioner/tree/master/deploy/chart/local-path-provisioner>

```bash
# Download the updated template from github
kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.28" > kubernetes/incubating/local-path-provisioner/local-path-storage.yaml

# Apply customizations (ssd/hdd storage, read write many support)
kubectl kustomize kubernetes/incubating/local-path-provisioner/local-path-provisioner | kubectl apply -f -

# Create test pod
kubectl apply -f infrastructure/graduated/k3s/tests/local-storage-test.yaml
```
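The fstab entry from step 2 might look like the line below. The UUID is a placeholder for your actual data drive (find yours with `lsblk -f`), and `nofail` keeps a missing disk from blocking boot.

```
UUID=0a1b2c3d-ffff-ffff-ffff-000000000000  /var/lib/rancher/k3s/storage  ext4  defaults,nofail  0  2
```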
## Coredns

1. Edit `kubernetes/graduated/coredns/values.yaml` to ensure the forward nameserver is correct.

```bash
# Install CoreDNS
helm upgrade --install \
  --namespace=kube-system \
  --values kubernetes/graduated/coredns/values.yaml \
  coredns coredns/coredns

# Test DNS works
kubectl run -it --rm \
  --restart=Never \
  --image=infoblox/dnstools:latest \
  dnstools
```
## Metal LB

### VLAN Setup

I would remove firewalld to get this working. VLAN IPv6 traffic doesn't work for some
reason and there aren't good docs yet. Your router firewall will suffice; just be sure
to configure those rules correctly.

Before working with Metallb you'll need at least one available VLAN. On Unifi equipment
this is accomplished by creating a new network. Don't assign it to anything.
With cockpit:

5. Click into the new network
6. Turn off IPv4 and IPv6 DNS (it will overload the resolv.conf hosts limit)
7. Turn on the network interface
8. Attempt to ping the acquired address(es)
### Installation
```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update

# Install metallb
helm upgrade --install metallb \
  --namespace kube-system \
  metallb/metallb
```
MetalLB doesn't know what IP addresses are available for it to allocate so
we'll have to provide it with a list. The
[metallb-addresspool.yaml](/kubernetes/graduated/metallb/addresspool.yaml) has
the configuration for our available pools. Note these should match the VLAN you
created above.

```bash
# create the metallb allocation pool
kubectl apply -f kubernetes/graduated/metallb/addresspool.yaml
```
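For reference, a MetalLB pool definition looks roughly like this. The pool name `unifi-pool` matches the service annotation used in this guide, but the addresses are placeholders; use unassigned IPs on the VLAN you created above.

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: unifi-pool
  namespace: kube-system
spec:
  addresses:
    # placeholder addresses - substitute unassigned IPs from your VLAN
    - 192.168.40.10/32
    - fd00:40::10/128
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: unifi-pool
  namespace: kube-system
spec:
  ipAddressPools:
    - unifi-pool
```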
You'll need to annotate your service as follows if you want an external IP:
```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "unifi-pool"
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
```
Then test with

```bash
kubectl apply -f systemd/graduated/k3s/tests/metallb-test.yaml
```
## External DNS

```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeselink

GENERATED_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $GENERATED_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
SECRET_ACCESS_KEY=$(echo $GENERATED_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')

cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = $ACCESS_KEY_ID
aws_secret_access_key = $SECRET_ACCESS_KEY
EOF

kubectl create secret generic external-dns \
  --namespace kube-system --from-file secrets/externaldns-credentials

helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm upgrade --install external-dns external-dns/external-dns \
  --values kubernetes/graduated/external-dns/values.yaml \
  --namespace kube-system
```
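The jq extraction can be sanity-checked without touching AWS by feeding it a canned `create-access-key` response; the key values below are obviously fake.

```shell
# Canned response shaped like `aws iam create-access-key` output, fake values
GENERATED_ACCESS_KEY='{"AccessKey":{"UserName":"externaldns","AccessKeyId":"AKIAFAKEKEY","SecretAccessKey":"fake-secret","Status":"Active"}}'

ACCESS_KEY_ID=$(echo "$GENERATED_ACCESS_KEY" | jq -r '.AccessKey.AccessKeyId')
SECRET_ACCESS_KEY=$(echo "$GENERATED_ACCESS_KEY" | jq -r '.AccessKey.SecretAccessKey')

echo "$ACCESS_KEY_ID"      # AKIAFAKEKEY
echo "$SECRET_ACCESS_KEY"  # fake-secret
```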
### Annotation
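external-dns picks up the desired DNS name from a well-known annotation on the Service. The hostname below is an example, not from this repo; use a record in one of the zones the IAM policies above cover.

```yaml
metadata:
  annotations:
    # example hostname - use a record in a zone externaldns can update
    external-dns.alpha.kubernetes.io/hostname: demo.reeselink.com
```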
The local storage test manifest (`infrastructure/graduated/k3s/tests/local-storage-test.yaml`) referenced in the Storage section:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ssd-test
  namespace: default
spec:
  storageClassName: ssd
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hdd-test
  namespace: default
spec:
  storageClassName: hdd
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 8Gi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-storage-test
  namespace: default
spec:
  selector:
    matchLabels:
      app: local-storage-test
  template:
    metadata:
      labels:
        app: local-storage-test
    spec:
      containers:
        - image: debian
          command:
            - bash
            - -c
            - 'sleep infinity'
          name: local-storage-test
          volumeMounts:
            - mountPath: /ssd
              name: ssd
            - mountPath: /hdd
              name: hdd
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "1Mi"
              cpu: "1m"
      restartPolicy: Always
      volumes:
        - name: hdd
          persistentVolumeClaim:
            claimName: hdd-test
        - name: ssd
          persistentVolumeClaim:
            claimName: ssd-test
```
The MetalLB test manifest (`systemd/graduated/k3s/tests/metallb-test.yaml`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-1
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "production"
    metallb.universe.tf/address-pool: "unifi-pool"
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
  type: LoadBalancer
  ports:
    - name: http
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-1

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo-2
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo-2
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo-2
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
          resources:
            requests:
              memory: "100Mi"
              cpu: "1m"
            limits:
              memory: "256Mi"
              cpu: "1"

---

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-2
  namespace: default
  annotations:
    metallb.universe.tf/allow-shared-ip: "production"
    metallb.universe.tf/address-pool: production
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 8002
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-2
```