k0s updates

2025-10-30 22:51:17 -04:00
parent 57ae6b7e72
commit 6df02e8dff
13 changed files with 185 additions and 42 deletions

View File

@@ -1,3 +1,13 @@
 # Kubernetes
-See [k3s](/active/systemd_k3s/k3s.md)
+## CLI Tools
+kubectl: <https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/>
+helm: <https://helm.sh/docs/intro/install/>
+## Install a Kubernetes Server
+For k3s, see [k3s](/active/systemd_k3s/k3s.md)
+For k0s, see [k0s](/active/systemd_k0s/k0s.md)

View File

@@ -0,0 +1,7 @@
service:
  http:
    type: LoadBalancer
    externalTrafficPolicy: Local
  ssh:
    type: LoadBalancer
    externalTrafficPolicy: Local

View File

@@ -21,14 +21,10 @@ ingress:
 persistence:
   enabled: true
   create: true
-  storageClass: zfs-iscsi-enc0
   claimName: data-gitea-staging-0
   annotations:
     "helm.sh/resource-policy": keep
-global:
-  storageClass: zfs-iscsi-enc1
 postgresql:
   enabled: true
   image:
@@ -36,7 +32,6 @@ postgresql:
   primary:
     persistence:
       enabled: true
-      storageClass: zfs-iscsi-enc1
       annotations:
         "helm.sh/resource-policy": keep

View File

@@ -1,6 +1,7 @@
 # Gitea
 - [Gitea](#gitea)
+  - [Demo](#demo)
   - [Staging](#staging)
   - [Install](#install)
   - [Backup and Restore](#backup-and-restore)
@@ -14,6 +15,17 @@ they decide to change things. This is the first chart (besides ingress-nginx) wh
 we need to pay attention to the MetalLB annotation. This has been set in the values.yaml
 file.
+## Demo
+```bash
+helm upgrade --install \
+  gitea \
+  gitea-charts/gitea \
+  --values active/kubernetes_gitea/gitea-demo-values.yaml \
+  --namespace gitea \
+  --create-namespace
+```
 ## Staging
 There is a `gitea-staging.yaml` file with staging values. This should be installed in

View File

@@ -10,8 +10,14 @@
 ```bash
 # Download the updated template from github
-kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.31" > active/kubernetes_local-path-provisioner/local-path-storage.yaml
+kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.32" > active/kubernetes_local-path-provisioner/local-path-storage.yaml
 # Apply customizations (ssd/hdd storage, read write many support)
 kubectl kustomize active/kubernetes_local-path-provisioner | kubectl apply -f -
 ```
+Mark the class as default
+```bash
+kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
+```
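
Once `local-path` is the default class, claims that omit `storageClassName` bind to it automatically. A minimal sketch (the claim name and size here are placeholders, not from this repo):

```yaml
# Hypothetical PVC: with storageClassName omitted, the cluster's
# default StorageClass provisions the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```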

View File

@@ -176,7 +176,7 @@ spec:
               fieldPath: metadata.namespace
         - name: CONFIG_MOUNT_PATH
           value: /etc/config/
-        image: rancher/local-path-provisioner:v0.0.31
+        image: rancher/local-path-provisioner:v0.0.32
         imagePullPolicy: IfNotPresent
         name: local-path-provisioner
         volumeMounts:

View File

@@ -5,6 +5,15 @@ below installs nimcraft. For each installation you'll want to create your own va
 with a new port. The server-downloader is called "minecraft_get_server" and is available on
 [Github](https://github.com/ducoterra/minecraft_get_server).
+After installing, you can run admin commands (like whitelisting players) by
+attaching to the container:
+```bash
+kubectl attach -it <pod>
+> /whitelist add ducoterra
+```
 ## Testing
 ```bash

View File

@@ -56,10 +56,10 @@ spec:
         value: "1"
       resources:
         requests:
-          memory: {{ div .Values.max_ram 2 }}Gi
+          memory: "{{ div .Values.max_ram 2 }}Gi"
           cpu: 1m
         limits:
-          memory: {{ add 1 .Values.max_ram }}Gi
+          memory: "{{ add 1 .Values.max_ram }}Gi"
           cpu: {{ .Values.max_cpu | quote }}
       volumes:
         - name: data

View File

@@ -5,7 +5,6 @@ metadata:
   annotations:
     "helm.sh/resource-policy": keep
 spec:
-  storageClassName: ssd
   accessModes:
     - ReadWriteOnce
   resources:

View File

@@ -2,11 +2,7 @@ apiVersion: v1
 kind: Service
 metadata:
   name: {{ .Release.Name }}
-  annotations:
-    metallb.universe.tf/address-pool: "external"
-    external-dns.alpha.kubernetes.io/hostname: {{ .Release.Name }}.reeseapps.com
 spec:
-  ipFamilies: ["IPv6"]
   externalTrafficPolicy: Cluster
   selector:
     app: {{ .Release.Name }}

active/software_k0s/k0s.md Normal file
View File

@@ -0,0 +1,124 @@
# K0s
## Install Single Node Cluster
<https://docs.k0sproject.io/v0.11.0/k0s-single-node/>
```bash
# Allow all traffic in
firewall-cmd --set-default-zone=trusted
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Setup the config
k0s config create > k0s.yaml
# Install single node cluster controller/node
k0s install controller -c k0s.yaml --enable-worker
# Start and enable the service
systemctl enable --now k0scontroller
# Enable bash completion
echo 'source <(k0s completion bash)' >>~/.bashrc
source ~/.bashrc
# Make an admin user
mkdir ~/.kube
k0s kubeconfig create --groups "system:masters" admin > ~/.kube/config
# Remove the taint that prevents scheduling on the controller
kubectl edit node
```
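The generated `k0s.yaml` is the node's ClusterConfig. An abridged sketch of what `k0s config create` emits (defaults vary by k0s version, so treat these values as illustrative only):

```yaml
# Illustrative excerpt; run `k0s config create` for your version's defaults.
apiVersion: k0s.k0sproject.io/v1beta1
kind: ClusterConfig
metadata:
  name: k0s
spec:
  api:
    port: 6443
  network:
    provider: kuberouter
    podCIDR: 10.244.0.0/16
    serviceCIDR: 10.96.0.0/12
```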
## Install Multi Node Cluster
<https://docs.k0sproject.io/v0.11.0/k0s-multi-node/>
Install the controller on the controller machine
```bash
# Allow all traffic in
firewall-cmd --set-default-zone=trusted
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Save default config
k0s config create > k0s.yaml
# Install the controller
k0s install controller
# Enable the controller
systemctl enable --now k0scontroller
# Enable bash completion
echo 'source <(k0s completion bash)' >>~/.bashrc
source ~/.bashrc
# Make an admin user (scp ~/.kube/config to your operator machine)
# kubectl config set-context --current --namespace kube-system
mkdir ~/.kube
k0s kubeconfig create --groups "system:masters" admin > ~/.kube/config
# Generate a worker join token
k0s token create --role=worker > worker0-token
```
Now on the worker machine, install the worker
```bash
# Allow all traffic in
firewall-cmd --set-default-zone=trusted
# On the operator, copy the token file from the controller to the worker
scp vm-k0s-controller:worker0-token vm-k0s-worker:token-file
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Join the worker
k0s install worker --token-file token-file
# Start the service
systemctl enable --now k0sworker
# Enable bash completion
echo 'source <(k0s completion bash)' >>~/.bashrc
source ~/.bashrc
```
## Uninstall
```bash
systemctl stop k0scontroller
k0s reset
reboot
```
## Install Metallb
<https://docs.k0sproject.io/v1.26.0+k0s.0/examples/metallb-loadbalancer/>
1. Create a VLAN with a dedicated subnet for Metallb. Disable DHCP.
2. Attach this new VLAN to your worker nodes.
3. Assign the worker nodes an address within the created network.
4. Install Metallb. Check `active/software_k0s/metallb-configmap.yaml` before proceeding.
```bash
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
kubectl apply -f active/software_k0s/metallb-configmap.yaml
```
### Uninstall Metallb
```bash
kubectl delete -f active/software_k0s/metallb-configmap.yaml
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/metallb.yaml
kubectl delete -f https://raw.githubusercontent.com/metallb/metallb/v0.10.2/manifests/namespace.yaml
```
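To confirm MetalLB is handing out addresses, any `LoadBalancer` Service should receive an external IP from the pool. A hypothetical test service (the name and selector are placeholders):

```yaml
# Hypothetical service: MetalLB should assign it an address from
# the 192.168.123.100-192.168.123.254 pool in the configmap.
apiVersion: v1
kind: Service
metadata:
  name: lb-test
spec:
  type: LoadBalancer
  selector:
    app: lb-test
  ports:
    - port: 80
      targetPort: 8080
```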

View File

@@ -0,0 +1,12 @@
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.123.100-192.168.123.254

View File

@@ -1,27 +0,0 @@
# K0s
## Install
```bash
# Install k0s cli
curl -sSLf https://get.k0s.sh | sudo sh
# Setup the config
k0s config create > k0s.yaml
# Install controller/node
k0s install controller -c k0s.yaml --enable-worker
systemctl enable --now k0scontroller
# Make an admin user
mkdir ~/.kube
k0s kubeconfig create --groups "system:masters" admin > ~/.kube/config
```
## Uninstall
```bash
systemctl stop k0scontroller
k0s reset
reboot
```