move selinux, firewalld, k3s
**`active/software_k3s/k3s.md`** (new file):
# K3S

- [K3S](#k3s)
  - [Guide](#guide)
  - [Firewalld](#firewalld)
  - [Set SELinux to Permissive](#set-selinux-to-permissive)
  - [Install K3S (Single Node)](#install-k3s-single-node)
    - [Dual Stack IPv6 Support](#dual-stack-ipv6-support)
    - [Single Stack IPv4](#single-stack-ipv4)
  - [Kube Credentials](#kube-credentials)
  - [Metal LB](#metal-lb)
    - [VLAN Setup](#vlan-setup)
    - [Installation](#installation)
  - [External DNS](#external-dns)
    - [Credentials](#credentials)
    - [Annotation](#annotation)
  - [Nginx Ingress](#nginx-ingress)
  - [Cert Manager](#cert-manager)
  - [Test Minecraft Server](#test-minecraft-server)
  - [Automatic Updates](#automatic-updates)
  - [Database Backups](#database-backups)
  - [Uninstall](#uninstall)

## Guide

1. Configure the host
2. Install CoreDNS for inter-container discovery
3. Install MetalLB for load balancer IP address assignment
4. Install External DNS for load balancer IP and ingress DNS records
5. Install Nginx Ingress for HTTP services
6. Install Cert Manager for automatic Let's Encrypt certificates on Ingress Nginx
7. Install Longhorn storage for automatic PVC creation and management
8. Set up automatic database backups

## Firewalld

```bash
firewall-cmd --permanent --zone=public --add-port=6443/tcp                 # apiserver
firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16          # pods
firewall-cmd --permanent --zone=trusted --add-source=fd02:c91e:56f4::/56   # pods
firewall-cmd --permanent --zone=trusted --add-source=10.43.0.0/16          # services
firewall-cmd --permanent --zone=trusted --add-source=fd02:c91e:56f5::/112  # services

firewall-cmd --reload
```

## Set SELinux to Permissive

Make sure to add `--selinux` to your install script.

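If you do want the host itself in permissive mode (rather than relying only on the k3s SELinux policy), a minimal sketch for a RHEL-family host:

```bash
# Take effect immediately for the running system
sudo setenforce 0

# Persist across reboots
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Confirm the current mode
getenforce
```
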
## Install K3S (Single Node)

### Dual Stack IPv6 Support

```bash
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --disable servicelb \
  --tls-san k3s.reeselink.com \
  --flannel-ipv6-masq \
  --kubelet-arg="node-ip=::" \
  --cluster-cidr 10.42.0.0/16,fd02:c91e:56f4::/56 \
  --service-cidr 10.43.0.0/16,fd02:c91e:56f5::/112 \
  --cluster-dns fd02:c91e:56f5::10 \
  --selinux
```

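A quick dual-stack sanity check once the node is up (run wherever your kubeconfig works; each node should list both an IPv4 and an IPv6 pod CIDR):

```bash
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDRs}{"\n"}{end}'
```
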
### Single Stack IPv4

```bash
curl -sfL https://get.k3s.io | sh -s - \
  --disable traefik \
  --disable servicelb \
  --tls-san k3s.reeselink.com \
  --selinux
```

## Kube Credentials

On the operator machine:

```bash
export KUBE_SERVER_ADDRESS="https://k3s.reeselink.com:6443"

# Copy the kube config down
ssh k3s cat /etc/rancher/k3s/k3s.yaml | \
  yq -y ".clusters[0].cluster.server = \"${KUBE_SERVER_ADDRESS}\"" > \
  ~/.kube/admin-kube-config
```

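Then verify the copied config can reach the API server:

```bash
KUBECONFIG=~/.kube/admin-kube-config kubectl get nodes
```
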
## Metal LB

### VLAN Setup

I would remove firewalld to get this working. VLAN IPv6 traffic doesn't work for some reason and there aren't good docs yet. Your router firewall will suffice; just be sure to configure those rules correctly.

Before working with MetalLB you'll need at least one available VLAN. On UniFi equipment this is accomplished by creating a new network. Don't assign it to anything.

On the Linux machine you can use nmcli or cockpit to configure a new VLAN network interface. With cockpit (an nmcli sketch follows the list):

1. Add a new VLAN network
2. The parent should be the physical adapter connected to your switch
3. Set the VLAN ID to the VLAN number of your created UniFi network
4. Click create
5. Click into the new network
6. Turn off IPv4 and IPv6 DNS (it will overload the resolv.conf hosts limit)
7. Turn on the network interface
8. Attempt to ping the acquired address(es)

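The same setup with nmcli, as a minimal sketch (assuming the parent adapter is `eno1` and the VLAN ID is 42; substitute your own):

```bash
# Create the VLAN interface on top of the physical adapter
sudo nmcli connection add type vlan con-name vlan42 ifname vlan42 dev eno1 id 42

# Ignore DHCP-provided DNS so resolv.conf isn't overloaded (step 6 above)
sudo nmcli connection modify vlan42 ipv4.ignore-auto-dns yes ipv6.ignore-auto-dns yes

# Bring the interface up and check the acquired addresses
sudo nmcli connection up vlan42
ip addr show vlan42
```
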
### Installation

We'll be swapping K3S's default load balancer with MetalLB for more flexibility. ServiceLB was struggling to allocate IP addresses for load balanced services. MetalLB does make things a little more complicated (you'll need special annotations, see below), but it's otherwise a well-tested, stable load balancing service with features to grow into.

MetalLB is pretty cool. It works via L2 advertisement or BGP. We won't be using BGP, so let's focus on L2.

When we connect our nodes to a network we give them an IP address range, e.g. `192.168.122.20/24`. This range represents all the available addresses the node could be assigned. Usually we assign a single "static" IP address to our node and direct traffic to it by port forwarding from our router. This is fine for single nodes, but what if we have a cluster of nodes and we don't want our service to disappear just because one node is down for maintenance?

This is where L2 advertising comes in. MetalLB will assign a static IP address from a given pool to any arbitrary node, then advertise that node's MAC address as the location for the IP. When that node goes down, MetalLB simply advertises a new MAC address for the same IP address, effectively moving the IP to another node. This isn't really "load balancing" but "failover". Fortunately, that's exactly what we're looking for.

```bash
helm repo add metallb https://metallb.github.io/metallb
helm repo update

# Install metallb
helm upgrade --install metallb \
  --namespace kube-system \
  metallb/metallb
```

MetalLB doesn't know what IP addresses are available for it to allocate, so we'll have to provide it with a list. The [metallb-addresspool.yaml](/active/kubernetes_metallb/addresspool.yaml) has the configuration for our available pools. Note these should match the VLAN you created above.

```bash
# create the metallb allocation pool
kubectl apply -f active/kubernetes_metallb/addresspool.yaml
```

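That file isn't reproduced here, but a MetalLB L2 pool definition generally takes this shape (a sketch only; the pool name `unifi-pool` matches the annotation below, and the address range is a placeholder for your VLAN's subnet):

```yaml
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: unifi-pool
  namespace: kube-system
spec:
  addresses:
    - 192.168.42.10-192.168.42.250  # placeholder: use your VLAN's range
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: unifi-pool
  namespace: kube-system
spec:
  ipAddressPools:
    - unifi-pool
```
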
You'll need to annotate your service as follows if you want an external IP:

```yaml
metadata:
  annotations:
    metallb.universe.tf/address-pool: "unifi-pool"
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
```

Then test with:

```bash
kubectl apply -f active/software_k3s/tests/metallb-test.yaml
```

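Once the test service is up, confirm MetalLB assigned an address from the pool. To see the L2 mechanics, `arping` from another machine on the VLAN shows which node's MAC currently answers for the IP (the interface and address below are placeholders):

```bash
# EXTERNAL-IP should come from the unifi-pool range
kubectl get service ingress-nginx-demo-1 -n default

# From another host on the same L2 segment: see which MAC answers for the IP
sudo arping -I eno1 192.168.42.10
```
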
## External DNS

<https://github.com/kubernetes-sigs/external-dns/blob/master/docs/tutorials/aws.md>

### Credentials

1. Generate credentials for the cluster

```bash
aws iam create-user --user-name "externaldns"
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeseapps
aws iam attach-user-policy --user-name "externaldns" --policy-arn arn:aws:iam::892236928704:policy/update-reeselink

GENERATED_ACCESS_KEY=$(aws iam create-access-key --user-name "externaldns")
ACCESS_KEY_ID=$(echo $GENERATED_ACCESS_KEY | jq -r '.AccessKey.AccessKeyId')
SECRET_ACCESS_KEY=$(echo $GENERATED_ACCESS_KEY | jq -r '.AccessKey.SecretAccessKey')

cat <<-EOF > secrets/externaldns-credentials
[default]
aws_access_key_id = $ACCESS_KEY_ID
aws_secret_access_key = $SECRET_ACCESS_KEY
EOF

kubectl create secret generic external-dns \
  --namespace kube-system --from-file secrets/externaldns-credentials

helm repo add external-dns https://kubernetes-sigs.github.io/external-dns/
helm upgrade --install external-dns external-dns/external-dns \
  --values active/kubernetes_external-dns/values.yaml \
  --namespace kube-system
```

### Annotation

```yaml
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: example.com
```

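If records aren't showing up in Route53, the controller's logs usually say why (the deployment name below assumes the Helm release name `external-dns` used above):

```bash
kubectl logs -n kube-system deployment/external-dns --tail=50
```
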
## Nginx Ingress

Now we need an ingress solution (preferably with certs for HTTPS). We'll be using nginx since it's a little more configurable than traefik (though don't sell traefik short, it's really good, just finicky when you have use cases they haven't explicitly coded for).

```bash
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install \
  ingress-nginx \
  ingress-nginx/ingress-nginx \
  --values active/kubernetes_ingress-nginx/values.yaml \
  --namespace kube-system
```

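The chart exposes the controller through a `LoadBalancer` service, so MetalLB should hand it an external IP. The service name below is the ingress-nginx chart's default; adjust if your values file overrides it:

```bash
kubectl get service ingress-nginx-controller -n kube-system
```
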
## Cert Manager

Install cert-manager:

```bash
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm upgrade --install \
  cert-manager jetstack/cert-manager \
  --namespace kube-system \
  --set crds.enabled=true
```

Create the Let's Encrypt issuer (Route53 DNS). Note that cert-manager resolves a ClusterIssuer's secret references in its cluster resource namespace, which defaults to the namespace cert-manager is installed in (kube-system here), so the secret is created there. Use `echo -n` so the encoded values don't pick up a trailing newline.

```bash
export LE_ACCESS_KEY_ID=
export LE_SECRET_KEY=

cat <<EOF > secrets/cert-manager-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: prod-route53-credentials-cert-manager
  namespace: kube-system
data:
  access-key-id: $(echo -n "$LE_ACCESS_KEY_ID" | base64)
  secret-access-key: $(echo -n "$LE_SECRET_KEY" | base64)
EOF

kubectl apply -f secrets/cert-manager-secret.yaml
```

```bash
cat <<EOF > secrets/route53-cluster-issuer.yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: nginx@ducoterra.net
    privateKeySecretRef:
      name: letsencrypt
    solvers:
      - selector:
          dnsZones:
            - "reeseapps.com"
        dns01:
          route53:
            region: us-east-1
            hostedZoneID: Z012820733346FJ0U4FUF
            accessKeyID: ${LE_ACCESS_KEY_ID}
            secretAccessKeySecretRef:
              name: prod-route53-credentials-cert-manager
              key: secret-access-key
EOF

kubectl apply -f secrets/route53-cluster-issuer.yaml
```

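Check that the issuer registered with Let's Encrypt, and watch certificate progress once an Ingress requests one:

```bash
kubectl describe clusterissuer letsencrypt

# After an Ingress requests a cert
kubectl get certificate -A
kubectl get challenge -A
```
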
You can test if your ingress is working with:

```bash
# Navigate to demo.reeseapps.com
kubectl apply -f active/software_k3s/tests/ingress-nginx-test.yaml

# Cleanup
kubectl delete -f active/software_k3s/tests/ingress-nginx-test.yaml
```

## Test Minecraft Server

```bash
helm upgrade --install minecraft active/kubernetes_minecraft -n minecraft --create-namespace
```

## Automatic Updates

<https://docs.k3s.io/upgrades/automated>

```bash
kubectl create namespace system-upgrade
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
kubectl apply -f active/software_k3s/upgrade-plan.yaml

# Check plan
kubectl get plan -n system-upgrade
```

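Upgrades run as jobs in the `system-upgrade` namespace; you can watch them and the node version bump as they land:

```bash
kubectl get jobs -n system-upgrade
kubectl get nodes -w
```
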
## Database Backups

<https://docs.k3s.io/cli/etcd-snapshot>

Note: you must back up `/var/lib/rancher/k3s/server/token` and supply its contents as the token when restoring a backup, since the data is encrypted with that token.

Backups are saved to `/var/lib/rancher/k3s/server/db/snapshots/` by default.

```bash
k3s etcd-snapshot save
k3s etcd-snapshot list

k3s server \
  --cluster-reset \
  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-kube-1720459685
```

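On-demand saves are fine, but k3s can also snapshot on a schedule. A sketch using the server config file (the key names mirror the `--etcd-snapshot-schedule-cron` and `--etcd-snapshot-retention` server flags; adjust to taste):

```yaml
# /etc/rancher/k3s/config.yaml
etcd-snapshot-schedule-cron: "0 */12 * * *"  # every 12 hours
etcd-snapshot-retention: 5                   # keep the five most recent
```
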
## Uninstall

```bash
/usr/local/bin/k3s-uninstall.sh
```

**`active/software_k3s/tests/democratic-csi-pvc-test.yaml`** (new file):
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-iscsi-enc0
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc0
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-iscsi-enc1
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-iscsi-enc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zfs-nfs-enc1
  annotations:
    "helm.sh/resource-policy": keep
spec:
  storageClassName: zfs-nfs-enc1
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: democratic-csi-test
spec:
  selector:
    matchLabels:
      app: democratic-csi-test
  template:
    metadata:
      labels:
        app: democratic-csi-test
    spec:
      containers:
        - image: debian
          command:
            - bash
            - -c
            - 'sleep infinity'
          name: democratic-csi-test
          volumeMounts:
            - mountPath: /zfs_iscsi_enc0
              name: zfs-iscsi-enc0
            - mountPath: /zfs_iscsi_enc1
              name: zfs-iscsi-enc1
            - mountPath: /zfs_nfs_enc1
              name: zfs-nfs-enc1
          resources:
            limits:
              memory: "4Gi"
              cpu: "2"
            requests:
              memory: "1Mi"
              cpu: "1m"
      restartPolicy: Always
      volumes:
        - name: zfs-iscsi-enc0
          persistentVolumeClaim:
            claimName: zfs-iscsi-enc0
        - name: zfs-iscsi-enc1
          persistentVolumeClaim:
            claimName: zfs-iscsi-enc1
        - name: zfs-nfs-enc1
          persistentVolumeClaim:
            claimName: zfs-nfs-enc1
```

**`active/software_k3s/tests/ffmpeg.yaml`** (new file):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ffmpeg
spec:
  selector:
    matchLabels:
      app: ffmpeg
  template:
    metadata:
      labels:
        app: ffmpeg
    spec:
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: ffmpeg
      containers:
        - name: ffmpeg
          image: linuxserver/ffmpeg:latest
          volumeMounts:
            - mountPath: /config
              name: data
          command:
            - /bin/bash
            - -c
            - 'sleep infinity'
          resources:
            limits:
              memory: "2Gi"
              cpu: "8"

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ffmpeg
spec:
  storageClassName: zfs-iscsi-enc0-ext4
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Gi
```

**`active/software_k3s/tests/ingress-nginx-test.yaml`** (new file):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http

---

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: ingress-nginx-demo
  ports:
    - name: ingress-nginx-demo
      protocol: TCP
      port: 80
      targetPort: http

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt
    external-dns.alpha.kubernetes.io/ttl: "60"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.org/client-max-body-size: "0"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.reeseapps.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ingress-nginx-demo
                port:
                  number: 80
  tls:
    - hosts:
        - demo.reeseapps.com
      secretName: ingress-nginx-demo-tls-cert
```

**`active/software_k3s/tests/metallb-test.yaml`** (new file):
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-nginx-demo-1
  namespace: default
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx-demo-1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx-demo-1
    spec:
      containers:
        - name: httpd
          image: httpd
          ports:
            - containerPort: 80
              name: http
          resources:
            requests:
              memory: "100Mi"
              cpu: "1m"
            limits:
              memory: "256Mi"
              cpu: "1"

---

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-demo-1
  namespace: default
  annotations:
    metallb.universe.tf/address-pool: "unifi-pool"
spec:
  ipFamilyPolicy: PreferDualStack
  ipFamilies:
    - IPv6
    - IPv4
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 8001
      targetPort: 80
  selector:
    app.kubernetes.io/name: ingress-nginx-demo-1
```

**`active/software_k3s/tests/statefulset-example.yaml`** (new file):
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - port: 80
      name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: "nginx"
  replicas: 3 # by default is 1
  minReadySeconds: 10 # by default is 0
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: nginx
          image: registry.k8s.io/nginx-slim:0.8
          ports:
            - containerPort: 80
              name: web
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: "local-path"
        resources:
          requests:
            storage: 1Gi
```

**`active/software_k3s/upgrade-plan.yaml`** (new file):
```yaml
# Server plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: In
        values:
          - "true"
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
---
# Agent plan
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: agent-plan
  namespace: system-upgrade
spec:
  concurrency: 1
  cordon: true
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  prepare:
    args:
      - prepare
      - server-plan
    image: rancher/k3s-upgrade
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
```