kube local storage migration
.gitignore (vendored, 1 change)
@@ -1,2 +1,3 @@
 secrets/
 venv/
+tmp/
README.md (134 changes)
@@ -7,15 +7,12 @@ A project to store homelab stuff.
 - [Homelab](#homelab)
 - [Table of Contents](#table-of-contents)
 - [Apps](#apps)
-- [Dashboard](#dashboard)
-- [Nextcloud](#nextcloud)
-- [Test Deploy](#test-deploy)
 - [Gitea](#gitea)
 - [Staging](#staging)
 - [Install](#install)
 - [Minecraft](#minecraft)
-- [Nimcraft](#nimcraft)
 - [Testing](#testing)
+- [Nimcraft](#nimcraft)
 - [Courtnie](#courtnie)
 - [Snapdrop](#snapdrop)
 - [Jellyfin](#jellyfin)
@@ -25,95 +22,6 @@ A project to store homelab stuff.
 
 ## Apps
 
-### Dashboard
-
-The Kubernetes dashboard isn't all that useful, but it can sometimes give you a good
-visual breakdown when things are going wrong. It's sometimes faster than running
-`kubectl get` commands over and over.
-
-Create the dashboard and an admin user with:
-
-```bash
-helm upgrade \
-  --install \
-  --namespace kubernetes-dashboard \
-  --create-namespace \
-  dashboard-user ./helm/dashboard-user
-
-kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
-```
-
-Then log in with the following:
-
-```bash
-kubectl -n kubernetes-dashboard create token admin-user
-kubectl proxy
-```
-
-### Nextcloud
-
-The first chart we'll deploy is Nextcloud. This is a custom chart because Nextcloud
-doesn't support helm installation natively (yet). There is a native Docker image and
-really detailed installation instructions, so we can pretty easily piece together what's
-required.
-
-This image runs the Nextcloud cron job automatically and creates random secrets for all
-infrastructure; very helpful for a secure deployment, not very helpful for migrating
-clusters. You'll want to export the secrets and save them in a secure location.
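That export step can be sketched like this. The secret names here are assumptions based on the chart's `{{ .Release.Name }}-*` naming; adjust them to your release. The `base64` line only demonstrates decoding a dumped value:

```shell
# Hypothetical secret names -- assumed from the chart's naming scheme.
# Dump each generated secret so it can be re-applied on the new cluster:
#   kubectl -n nextcloud get secret nextcloud-postgres -o yaml > secrets/nextcloud-postgres.yaml
# Values in the dump are base64-encoded; decode one like so:
echo "c3VwZXJzZWNyZXQ=" | base64 -d
```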
-
-```bash
-helm upgrade --install \
-  nextcloud \
-  ./helm/nextcloud \
-  --namespace nextcloud \
-  --create-namespace
-```
-
-Need to add lots of files? Copy them to the user data dir and then run:
-
-```bash
-./occ files:scan --all
-```
-
-Set up SES with the following link:
-
-<https://docs.aws.amazon.com/general/latest/gr/ses.html>
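SES ultimately hands you SMTP credentials, and the Nextcloud image can pick those up from its documented `SMTP_*`/`MAIL_*` environment variables. A sketch with placeholder values (the endpoint region and all credentials are assumptions; see the AWS link above):

```yaml
# Sketch only -- SES SMTP settings via the Nextcloud image's env vars.
SMTP_HOST: email-smtp.us-east-1.amazonaws.com
SMTP_PORT: "587"
SMTP_SECURE: tls
SMTP_NAME: YOUR_SES_SMTP_USERNAME
MAIL_FROM_ADDRESS: nextcloud
MAIL_DOMAIN: reeseapps.com
```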
-
-To upgrade you'll need to:
-
-1. Apply the new image in values.yaml
-2. Exec into the container and run the following:
-
-```bash
-su -s /bin/bash www-data
-./occ upgrade
-./occ maintenance:mode --off
-```
-
-See <https://docs.nextcloud.com/server/latest/admin_manual/maintenance/upgrade.html#maintenance-mode>
-for more information.
-
-#### Test Deploy
-
-You can create a test deployment with the following:
-
-```bash
-helm upgrade --install nextcloud ./helm/nextcloud \
-  --namespace nextcloud-test \
-  --create-namespace \
-  --set nextcloud.domain=nextcloud-test.reeseapps.com \
-  --set nextcloud.html.storageClassName=zfs-nfs-enc1 \
-  --set nextcloud.html.storage=8Gi \
-  --set nextcloud.data.storageClassName=zfs-nfs-enc1 \
-  --set nextcloud.data.storage=8Gi \
-  --set postgres.storageClassName=zfs-nfs-enc1 \
-  --set postgres.storage=8Gi \
-  --set redis.storageClassName=zfs-nfs-enc1 \
-  --set redis.storage=8Gi \
-  --set show_passwords=true \
-  --dry-run
-```
 
 ### Gitea
 
 Gitea provides a helm chart [here](https://gitea.com/gitea/helm-chart/). We're not
@@ -189,25 +97,24 @@ below installs nimcraft. For each installation you'll want to create your own va
 with a new port. The server-downloader is called "minecraft_get_server" and is available on
 [Github](https://github.com/ducoterra/minecraft_get_server).
 
-#### Nimcraft
-
-```bash
-helm upgrade --install \
-  nimcraft \
-  ./helm/minecraft \
-  --namespace nimcraft \
-  --create-namespace
-```
-
 #### Testing
 
 ```bash
 helm upgrade --install \
   testcraft \
-  ./helm/minecraft \
-  --namespace testcraft \
-  --create-namespace \
-  --set port=25566
+  ./minecraft \
+  --namespace minecraft \
+  --create-namespace
 ```
 
+#### Nimcraft
+
+```bash
+helm upgrade --install \
+  nimcraft \
+  ./minecraft \
+  --namespace minecraft \
+  --create-namespace
+```
 
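A per-installation values file, as described above, might look like this (hypothetical filename and port value; `port` is the key the text says varies):

```yaml
# values-testcraft.yaml (hypothetical) -- override the port per installation
port: 25567
```

Pass it to the `helm upgrade` command with `-f values-testcraft.yaml`.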
 #### Courtnie
@@ -215,10 +122,9 @@ helm upgrade --install \
 ```bash
 helm upgrade --install \
   courtniecraft \
-  ./helm/minecraft \
-  --namespace courtniecraft \
-  --create-namespace \
-  --set port=25568
+  ./minecraft \
+  --namespace minecraft \
+  --create-namespace
 ```
 
 ### Snapdrop
@@ -228,7 +134,7 @@ Snapdrop is a file sharing app that allows airdrop-like functionality over the w
 ```bash
 helm upgrade --install \
   snapdrop \
-  ./helm/snapdrop \
+  ./snapdrop \
   --namespace snapdrop \
   --create-namespace
 ```
@@ -240,7 +146,7 @@ This assumes you have a media NFS share.
 ```bash
 helm upgrade --install \
   jellyfin \
-  ./helm/jellyfin \
+  ./jellyfin \
   --namespace jellyfin \
   --create-namespace
 ```
@@ -252,7 +158,7 @@ This creates a basic iperf3 server.
 ```bash
 helm upgrade --install \
   iperf3 \
-  ./helm/iperf3 \
+  ./iperf3 \
   --namespace iperf3 \
   --create-namespace
 ```
@@ -1,6 +1,6 @@
 # AWS Credentials
 
-## Aws Policies
+## Aws Certbot Route53 Policies
 
 Example Policy:
@@ -2,10 +2,8 @@
 
 - [Network Management](#network-management)
 - [Reeseapps vs Reeselink](#reeseapps-vs-reeselink)
-- [DNS Caching](#dns-caching)
 - [Reeselink Addresses](#reeselink-addresses)
 - [Reeseapps Addresses](#reeseapps-addresses)
-- [Duconet WG Addresses](#duconet-wg-addresses)
 
 ## Reeseapps vs Reeselink
 
@@ -17,69 +15,14 @@ domains.
 and other machine to machine connections. They can be public or private and are mostly for
 convenience.
 
-## DNS Caching
-
-Use unifi to cache important DNS records. The following are critical:
-
-`driveripper-wg.reeselink.com` `Host (AAAA)` `fd00:fd41:d0f1:1010::6`
-`democratic-csi-server.reeselink.com` `Host (AAAA)` `fd00:fd41:d0f1:1010::6`
-`driveripper.reeseapps.com` `Host (AAAA)` `2600:1700:1e6c:a81f:153e:9c35:8ff3:fa3`
-`driveripper.reeseapps.com` `Host (AAAA)` `2600:1700:1e6c:a81f:793d:7abf:e94d:9bc4`
-
 ## Reeselink Addresses
 
 ```bash
 aws route53 change-resource-record-sets --hosted-zone-id Z0092652G7L97DSINN18 --change-batch file://dns/reeselink.json
 ```
 
-You can extract these addresses into a text file with:
-
-```bash
-# IPV6
-cat dns/reeselink.json | \
-  jq -c -r '[ .Changes.[] |
-    select( .ResourceRecordSet.Type | . == "AAAA") ]
-    | .[]
-    | .ResourceRecordSet
-    | .Name,.ResourceRecords.[].Value' > dns/ipv6.txt
-
-# IPV4
-cat dns/reeselink.json | \
-  jq -c -r '[ .Changes.[] |
-    select( .ResourceRecordSet.Type | . == "A") ]
-    | .[]
-    | .ResourceRecordSet
-    | .Name,.ResourceRecords.[].Value' > dns/ipv4.txt
-```
 
 ## Reeseapps Addresses
 
 ```bash
 aws route53 change-resource-record-sets --hosted-zone-id Z012820733346FJ0U4FUF --change-batch file://dns/reeseapps.json
 ```
 
-## Duconet WG Addresses
-
-After generating new addresses from wireguard's vars.yaml, use find-and-replace with the
-following regex:
-
-```regex
-(.*.reeselink.com)\n(.*)$
-```
-
-and the following replacement:
-
-```regex
-{
-    "Action": "UPSERT",
-    "ResourceRecordSet": {
-        "Name": "$1",
-        "Type": "AAAA",
-        "TTL": 300,
-        "ResourceRecords": [
-            {
-                "Value": "$2"
-            }
-        ]
-    }
-},
-```
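Rather than the editor dance, the same change objects can be generated with `jq` (already used above). A sketch with a made-up hostname/address pair, not a real record:

```shell
# Build one UPSERT change object from a hostname and address (sample values)
jq -n --arg name "vpn.reeselink.com" --arg addr "fd00::1" \
  '{Action: "UPSERT",
    ResourceRecordSet: {Name: $name, Type: "AAAA", TTL: 300,
                        ResourceRecords: [{Value: $addr}]}}'
```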
@@ -26,6 +26,19 @@
         }
       ]
     }
+  },
+  {
+    "Action": "UPSERT",
+    "ResourceRecordSet": {
+      "Name": "unifi-external.reeseapps.com",
+      "Type": "AAAA",
+      "TTL": 300,
+      "ResourceRecords": [
+        {
+          "Value": "2600:1700:1e6c:a81f:2a0:98ff:fe5e:edc3"
+        }
+      ]
+    }
   }
 ]
 }
@@ -39,6 +39,19 @@
         }
       ]
     }
+  },
+  {
+    "Action": "UPSERT",
+    "ResourceRecordSet": {
+      "Name": "unifi-external.reeselink.com",
+      "Type": "AAAA",
+      "TTL": 300,
+      "ResourceRecords": [
+        {
+          "Value": "2600:1700:1e6c:a81f:2a0:98ff:fe5e:edc3"
+        }
+      ]
+    }
   }
 ]
 }
@@ -130,7 +130,7 @@ dnf install -y vim-default-editor --allowerasing
 dnf install -y glances
 
 # Install zsh with autocomplete and suggestions
-dnf install zsh zsh-autosuggestions zsh-syntax-highlighting
+dnf install -y zsh zsh-autosuggestions zsh-syntax-highlighting
 
 cat <<EOF > ~/.zshrc
 # Basic settings
@@ -1,24 +0,0 @@
-apiVersion: v2
-name: dashboard
-description: A Kubernetes Dashboard User Deployment
-
-# A chart can be either an 'application' or a 'library' chart.
-#
-# Application charts are a collection of templates that can be packaged into versioned archives
-# to be deployed.
-#
-# Library charts provide useful utilities or functions for the chart developer. They're included as
-# a dependency of application charts to inject those utilities and functions into the rendering
-# pipeline. Library charts do not define any templates and therefore cannot be deployed.
-type: application
-
-# This is the chart version. This version number should be incremented each time you make changes
-# to the chart and its templates, including the app version.
-# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.1.0
-
-# This is the version number of the application being deployed. This version number should be
-# incremented each time you make changes to the application. Versions are not expected to
-# follow Semantic Versioning. They should reflect the version the application is using.
-# It is recommended to use it with quotes.
-appVersion: "1.16.0"
@@ -1,20 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
-  name: admin-user
-  namespace: kubernetes-dashboard
-
----
-
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: admin-user
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: cluster-admin
-subjects:
-- kind: ServiceAccount
-  name: admin-user
-  namespace: kubernetes-dashboard
@@ -1,23 +0,0 @@
-# Patterns to ignore when building packages.
-# This supports shell glob matching, relative path matching, and
-# negation (prefixed with !). Only one pattern per line.
-.DS_Store
-# Common VCS dirs
-.git/
-.gitignore
-.bzr/
-.bzrignore
-.hg/
-.hgignore
-.svn/
-# Common backup files
-*.swp
-*.bak
-*.tmp
-*.orig
-*~
-# Various IDEs
-.project
-.idea/
-*.tmproj
-.vscode/
@@ -1,11 +0,0 @@
-Nextcloud has been installed!
-
-{{ if .Values.show_passwords -}}
-`show_passwords` is true. Here are the generated (or retrieved) passwords:
-
-NEXTCLOUD_ADMIN_PASSWORD: {{ include "NEXTCLOUD_ADMIN_PASSWORD" . | quote }}
-POSTGRES_PASSWORD: {{ include "POSTGRES_PASSWORD" . | quote }}
-REDIS_HOST_PASSWORD: {{ include "REDIS_PASSWORD" . | quote }}
-{{ else }}
-Run with `--set show_passwords=true` to output the generated passwords.
-{{- end }}
@@ -1,47 +0,0 @@
-{{- define "helm_keep_annotation" -}}
-"helm.sh/resource-policy": keep
-{{- end -}}
-
-{{/* Generated Postgres Config */}}
-{{ define "POSTGRES_NAME" }}{{ printf "%s-postgres" .Release.Name | lower }}{{ end }}
-{{ define "POSTGRES_DB" }}nextcloud{{ end }}
-{{ define "DATABASE_HOST" }}{{ .Release.Name }}-postgres{{ end }}
-{{ define "POSTGRES_USER" }}postgres{{ end }}
-
-{{/* Generated Nextcloud Config */}}
-{{ define "NEXTCLOUD_NAME" }}{{ printf "%s-nextcloud" .Release.Name | lower }}{{ end }}
-{{ define "ADMIN_USER" }}admin{{ end }}
-
-{{/* Generated Redis Config */}}
-{{ define "REDIS_NAME" }}{{ printf "%s-redis" .Release.Name | lower }}{{ end }}
-{{ define "REDIS_HOST" }}{{ .Release.Name }}-redis{{ end }}
-
-{{/* Postgres password lookup - uses existing password if possible */}}
-{{ define "POSTGRES_PASSWORD" -}}
-{{- $POSTGRES_SECRET := (lookup "v1" "Secret" .Release.Namespace ( include "POSTGRES_NAME" . )).data -}}
-{{- if $POSTGRES_SECRET -}}
-{{- printf $POSTGRES_SECRET.POSTGRES_PASSWORD | b64enc -}}
-{{- else -}}
-{{- printf (required ".Values.postgres.password is required" .Values.postgres.password) | b64enc -}}
-{{- end -}}
-{{- end }}
-
-{{/* Nextcloud admin password lookup - uses existing password if possible */}}
-{{ define "NEXTCLOUD_ADMIN_PASSWORD" -}}
-{{- $NEXTCLOUD_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "NEXTCLOUD_NAME" . )).data -}}
-{{- if $NEXTCLOUD_SECRETS -}}
-{{- printf $NEXTCLOUD_SECRETS.NEXTCLOUD_ADMIN_PASSWORD | b64enc -}}
-{{- else -}}
-{{- printf (required ".Values.nextcloud.admin.password is required" .Values.nextcloud.admin.password) | b64enc -}}
-{{- end -}}
-{{- end }}
-
-{{/* Redis password lookup - uses existing password if possible */}}
-{{ define "REDIS_PASSWORD" -}}
-{{- $REDIS_SECRETS := (lookup "v1" "Secret" .Release.Namespace ( include "REDIS_NAME" . )).data -}}
-{{- if $REDIS_SECRETS -}}
-{{- printf $REDIS_SECRETS.REDIS_PASSWORD | b64enc -}}
-{{- else -}}
-{{- printf (required ".Values.redis.password is required" .Values.redis.password) | b64enc -}}
-{{- end -}}
-{{- end }}
@@ -1,110 +0,0 @@
-apiVersion: apps/v1
-kind: Deployment
-metadata:
-  name: {{ .Release.Name }}
-spec:
-  selector:
-    matchLabels:
-      app.kubernetes.io/name: nextcloud
-  strategy:
-    type: Recreate
-  template:
-    metadata:
-      labels:
-        app.kubernetes.io/name: nextcloud
-    spec:
-      tolerations:
-      - key: "node.kubernetes.io/unreachable"
-        operator: "Exists"
-        effect: "NoExecute"
-        tolerationSeconds: 1
-      - key: "node.kubernetes.io/not-ready"
-        operator: "Exists"
-        effect: "NoExecute"
-        tolerationSeconds: 1
-      containers:
-      - name: nextcloud
-        image: {{ .Values.nextcloud.image }}
-        ports:
-        - containerPort: 80
-          name: http
-        envFrom:
-        - configMapRef:
-            name: {{ .Release.Name }}-nextcloud
-        - secretRef:
-            name: {{ .Release.Name }}-nextcloud
-        volumeMounts:
-        - mountPath: /var/www/html
-          name: html
-        - mountPath: /var/www/html/data
-          name: data
-        resources:
-          requests:
-            memory: "1Gi"
-            cpu: "1m"
-          limits:
-            memory: "4Gi"
-            cpu: "4"
-      - name: postgres
-        image: postgres:15
-        envFrom:
-        - configMapRef:
-            name: {{ .Release.Name }}-postgres
-        - secretRef:
-            name: {{ .Release.Name }}-postgres
-        volumeMounts:
-        - name: postgres
-          mountPath: /var/lib/postgresql/data
-        - name: postgres-init
-          mountPath: /docker-entrypoint-initdb.d/init-user-db.sh
-          subPath: init-user-db.sh
-        ports:
-        - containerPort: 5432
-          name: postgres
-        resources:
-          requests:
-            memory: "1Gi"
-            cpu: "1m"
-          limits:
-            memory: "4Gi"
-            cpu: "4"
-      - name: redis
-        image: redis:7
-        ports:
-        - containerPort: 6379
-          name: redis
-        volumeMounts:
-        - mountPath: /data
-          name: redis
-        command:
-        - redis-server
-        - --save
-        - "60"
-        - "1"
-        - --loglevel
-        - warning
-        - --requirepass
-        - {{ include "REDIS_PASSWORD" . | b64dec | quote }}
-        resources:
-          requests:
-            memory: "1Gi"
-            cpu: "1m"
-          limits:
-            memory: "4Gi"
-            cpu: "4"
-      volumes:
-      - name: html
-        persistentVolumeClaim:
-          claimName: {{ .Release.Name }}-html-iops
-      - name: data
-        persistentVolumeClaim:
-          claimName: {{ .Release.Name }}-data
-      - name: postgres
-        persistentVolumeClaim:
-          claimName: {{ .Release.Name }}-postgres-iops
-      - name: redis
-        emptyDir:
-          sizeLimit: 500Mi
-      - name: postgres-init
-        secret:
-          secretName: {{ .Release.Name }}-postgres-init
@@ -1,25 +0,0 @@
-apiVersion: networking.k8s.io/v1
-kind: Ingress
-metadata:
-  name: {{ .Release.Name }}
-  annotations:
-    cert-manager.io/cluster-issuer: letsencrypt
-    kubernetes.io/ingress.class: nginx
-    nginx.ingress.kubernetes.io/proxy-body-size: "0"
-    nginx.org/client-max-body-size: "0"
-spec:
-  rules:
-  - host: {{ .Values.nextcloud.domain }}
-    http:
-      paths:
-      - path: /
-        pathType: Prefix
-        backend:
-          service:
-            name: nextcloud
-            port:
-              name: http
-  tls:
-  - hosts:
-    - {{ .Values.nextcloud.domain }}
-    secretName: nextcloud-tls-cert
@@ -1,19 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ .Release.Name }}-nextcloud
-  annotations:
-    {{- include "helm_keep_annotation" . | nindent 4 }}
-data:
-  NEXTCLOUD_TRUSTED_DOMAINS: {{ .Values.nextcloud.domain }}
-  OVERWRITEPROTOCOL: https
-  OVERWRITECLIURL: https://{{ .Values.nextcloud.domain }}
-  NEXTCLOUD_ADMIN_USER: admin
-  POSTGRES_USER: nextcloud
-  POSTGRES_HOST: {{ .Release.Name }}
-  POSTGRES_DB: nextcloud
-  REDIS_HOST: {{ .Release.Name }}
-  PHP_UPLOAD_LIMIT: 1000000M
-  PHP_MEMORY_LIMIT: 2048M
-  TRUSTED_PROXIES: 10.42.0.1/24
-  APACHE_DISABLE_REWRITE_IP: "1"
@@ -1,62 +0,0 @@
-apiVersion: batch/v1
-kind: CronJob
-metadata:
-  name: {{ .Release.Name }}-cron
-spec:
-  schedule: "*/5 * * * *"
-  failedJobsHistoryLimit: 1
-  successfulJobsHistoryLimit: 0
-  jobTemplate:
-    spec:
-      template:
-        spec:
-          affinity:
-            podAffinity:
-              requiredDuringSchedulingIgnoredDuringExecution:
-              - labelSelector:
-                  matchExpressions:
-                  - key: app.kubernetes.io/name
-                    operator: In
-                    values:
-                    - nextcloud
-                # This is the selector for the node
-                # So when you find a valid node - schedule the pod to the
-                # node that matches hostname
-                # needed for iscsi mounts
-                topologyKey: kubernetes.io/hostname
-
-          securityContext:
-            runAsUser: 33
-            runAsGroup: 33
-          containers:
-          - name: nextcloud
-            image: {{ .Values.nextcloud.image }}
-            command:
-            - php
-            - -f
-            - cron.php
-            volumeMounts:
-            - mountPath: /var/www/html
-              name: html
-            - mountPath: /var/www/html/data
-              name: data
-            envFrom:
-            - configMapRef:
-                name: {{ .Release.Name }}-nextcloud
-            - secretRef:
-                name: {{ .Release.Name }}-nextcloud
-            resources:
-              requests:
-                memory: "1Gi"
-                cpu: "1m"
-              limits:
-                memory: "4Gi"
-                cpu: "4"
-          volumes:
-          - name: html
-            persistentVolumeClaim:
-              claimName: {{ .Release.Name }}-html-iops
-          - name: data
-            persistentVolumeClaim:
-              claimName: {{ .Release.Name }}-data
-          restartPolicy: OnFailure
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: {{ .Release.Name }}-data
-  annotations:
-    "helm.sh/resource-policy": keep
-spec:
-  storageClassName: {{ .Values.nextcloud.data.storageClassName }}
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: {{ .Values.nextcloud.data.storage }}
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: {{ .Release.Name }}-html-iops
-  annotations:
-    "helm.sh/resource-policy": keep
-spec:
-  storageClassName: {{ .Values.nextcloud.html.storageClassName }}
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: {{ .Values.nextcloud.html.storage }}
@@ -1,11 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ .Release.Name }}-nextcloud
-  annotations:
-    {{- include "helm_keep_annotation" . | nindent 4 }}
-type: generic
-data:
-  NEXTCLOUD_ADMIN_PASSWORD: {{ include "NEXTCLOUD_ADMIN_PASSWORD" . | quote }}
-  POSTGRES_PASSWORD: {{ include "POSTGRES_PASSWORD" . | quote }}
-  REDIS_HOST_PASSWORD: {{ include "REDIS_PASSWORD" . | quote }}
@@ -1,9 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
-  name: {{ .Release.Name }}-postgres
-  annotations:
-    {{- include "helm_keep_annotation" . | nindent 4 }}
-data:
-  POSTGRES_USER: postgres
-  POSTGRES_DB: nextcloud
@@ -1,16 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ .Release.Name }}-postgres-init
-  annotations:
-    {{- include "helm_keep_annotation" . | nindent 4 }}
-stringData:
-  init-user-db.sh: |
-    #!/bin/bash
-    set -e
-
-    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
-        CREATE USER nextcloud PASSWORD '{{ include "POSTGRES_PASSWORD" . | b64dec }}';
-        GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextcloud;
-        GRANT USAGE, CREATE ON SCHEMA public TO nextcloud;
-    EOSQL
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: {{ .Release.Name }}-postgres-iops
-  annotations:
-    "helm.sh/resource-policy": keep
-spec:
-  storageClassName: {{ .Values.postgres.storageClassName }}
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: {{ .Values.postgres.storage }}
@@ -1,9 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ .Release.Name }}-postgres
-  annotations:
-    {{- include "helm_keep_annotation" . | nindent 4 }}
-type: generic
-data:
-  POSTGRES_PASSWORD: {{ include "POSTGRES_PASSWORD" . | quote }}
@@ -1,13 +0,0 @@
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
-  name: {{ .Release.Name }}-redis-iops
-  annotations:
-    "helm.sh/resource-policy": keep
-spec:
-  storageClassName: {{ .Values.redis.storageClassName }}
-  accessModes:
-  - ReadWriteOnce
-  resources:
-    requests:
-      storage: {{ .Values.redis.storage }}
@@ -1,9 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
-  name: {{ .Release.Name }}-redis
-  annotations:
-    {{- include "helm_keep_annotation" . | nindent 4 }}
-type: generic
-data:
-  REDIS_PASSWORD: {{ include "REDIS_PASSWORD" . | quote }}
@@ -1,21 +0,0 @@
-apiVersion: v1
-kind: Service
-metadata:
-  name: {{ .Release.Name }}
-spec:
-  type: ClusterIP
-  selector:
-    app.kubernetes.io/name: nextcloud
-  ports:
-  - name: http
-    protocol: TCP
-    port: 80
-    targetPort: http
-  - name: postgres
-    protocol: TCP
-    port: 5432
-    targetPort: postgres
-  - name: redis
-    protocol: TCP
-    port: 6379
-    targetPort: redis
@@ -1,21 +0,0 @@
-nextcloud:
-  image: nextcloud:27
-  domain: nextcloud.reeseapps.com
-  html:
-    storageClassName: zfs-iscsi-enc1
-    storage: 16Gi
-  data:
-    storageClassName: zfs-iscsi-enc0
-    storage: 2Ti
-  admin:
-    password: ""
-
-postgres:
-  storageClassName: zfs-iscsi-enc1
-  storage: 32Gi
-  password: ""
-
-redis:
-  storageClassName: zfs-nfs-enc1
-  storage: 32Gi
-  password: ""
@@ -1,23 +0,0 @@
-# Patterns to ignore when building packages.
-# This supports shell glob matching, relative path matching, and
-# negation (prefixed with !). Only one pattern per line.
-.DS_Store
-# Common VCS dirs
-.git/
-.gitignore
-.bzr/
-.bzrignore
-.hg/
-.hgignore
-.svn/
-# Common backup files
-*.swp
-*.bak
-*.tmp
-*.orig
-*~
-# Various IDEs
-.project
-.idea/
-*.tmproj
-.vscode/
@@ -1,23 +0,0 @@
-apiVersion: v2
-name: Nextcloud
-description: A Simple Nextcloud Chart
-
-# A chart can be either an 'application' or a 'library' chart.
-#
-# Application charts are a collection of templates that can be packaged into versioned archives
-# to be deployed.
-#
-# Library charts provide useful utilities or functions for the chart developer. They're included as
-# a dependency of application charts to inject those utilities and functions into the rendering
-# pipeline. Library charts do not define any templates and therefore cannot be deployed.
-type: application
-
-# This is the chart version. This version number should be incremented each time you make changes
-# to the chart and its templates, including the app version.
-# Versions are expected to follow Semantic Versioning (https://semver.org/)
-version: 0.1.0
-
-# This is the version number of the application being deployed. This version number should be
-# incremented each time you make changes to the application. Versions are not expected to
-# follow Semantic Versioning. They should reflect the version the application is using.
-appVersion: 1.16.0
@@ -2,34 +2,12 @@
 - [Home Assistant](#home-assistant)
   - [Setup and Configuration](#setup-and-configuration)
-    - [Cert](#cert)
   - [Door Lock](#door-lock)
   - [Philips Hue Lights](#philips-hue-lights)
   - [Shelly](#shelly)
-  - [Trackers](#trackers)
-  - [Looping Over Entities with Labels](#looping-over-entities-with-labels)
-  - [Get All Entity Attributes](#get-all-entity-attributes)
 
 ## Setup and Configuration
 
-### Cert
-
-```bash
-openssl req -sha256 -addext "subjectAltName = DNS:homeassistant.reeselink.com" -newkey rsa:4096 -nodes -keyout secrets/ha-privkey.pem -x509 -days 3650 -out secrets/ha-fullchain.pem
-scp secrets/ha-* root@homeassistant.reeselink.com:~/ssl/
-```
-
-configuration.yaml
-
-```yaml
-http:
-  ssl_certificate: "/ssl/ha-fullchain.pem"
-  ssl_key: "/ssl/ha-privkey.pem"
-  use_x_forwarded_for: true
-  trusted_proxies:
-    - 10.1.0.0/16
-```
-
 ### Door Lock
 
 1. Install Z-wave
@@ -59,25 +37,3 @@ the range of your home assistant's bluetooth capabilities. Active scanning uses
 is quicker to pick up and transmit device information. Note that "gateway mode" is not required,
 just enable bluetooth and rpc or select "active" from the configuration menu for the shelly
 device.
-
-### Trackers
-
-See `hass_trackers/`
-
-### Looping Over Entities with Labels
-
-<https://www.home-assistant.io/docs/configuration/templating/#labels>
-
-```yaml
-{% for item in label_entities("Battery Level") -%}
-- {{ item }}
-{% endfor %}
-```
-
-### Get All Entity Attributes
-
-```yaml
-{% for item in label_entities("Battery Level") -%}
-- {{ states[item].attributes }}
-{% endfor %}
-```
helm/dashboard-user/.helmignore → iperf3/.helmignore (Normal file → Executable file)
helm/dashboard-user/values.yaml → iperf3/values.yaml (Normal file → Executable file)
k3s/README.md (261 lines changed)
@@ -16,30 +16,14 @@
   - [Cert Manager](#cert-manager)
   - [Test Minecraft Server](#test-minecraft-server)
   - [Automatic Updates](#automatic-updates)
-  - [Manual Updates](#manual-updates)
-  - [Create a Userspace](#create-a-userspace)
+  - [Database Backups](#database-backups)
   - [Quickstart](#quickstart)
-  - [Userspace](#userspace)
-    - [Namespace](#namespace)
-    - [Roles](#roles)
-    - [Rolebinding](#rolebinding)
-  - [Manual Steps](#manual-steps)
-    - [Create a kubernetes certsigner pod](#create-a-kubernetes-certsigner-pod)
-    - [Create the certsigner secret](#create-the-certsigner-secret)
-    - [Set up the certsigner pod](#set-up-the-certsigner-pod)
-    - [Generate a cert](#generate-a-cert)
-    - [Create a new Userspace](#create-a-new-userspace)
-    - [Sign the cert](#sign-the-cert)
-    - [Add to the config](#add-to-the-config)
-    - [Delete](#delete)
-  - [Signing a user cert - detailed notes](#signing-a-user-cert---detailed-notes)
  - [Help](#help)
   - [Troubleshooting](#troubleshooting)
     - [Deleting a stuck namespace](#deleting-a-stuck-namespace)
     - [Fixing a bad volume](#fixing-a-bad-volume)
       - [Mounting an ix-application volume from truenas](#mounting-an-ix-application-volume-from-truenas)
       - [Mounting a volume](#mounting-a-volume)
-  - [Database Backups](#database-backups)
   - [Uninstall](#uninstall)
 
 ## Guide
@@ -76,6 +60,7 @@ Set SELinux to permissive by editing `/etc/selinux/config`
 ```bash
 curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
+  "--cluster-init" \
   "--flannel-ipv6-masq" \
   "--disable" \
   "traefik" \
@@ -83,6 +68,8 @@ curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v1.30.2+k3s2 sh -s - \
   "servicelb" \
   "--disable" \
   "coredns" \
+  "--disable" \
+  "local-storage" \
   "--tls-san" \
   "kube.reeselink.com" \
   "--cluster-cidr" \
@@ -111,6 +98,19 @@ scp kube:/etc/rancher/k3s/k3s.yaml ~/.kube/admin-kube-config
 3. `systemctl daemon-reload`
 4. `mount -a`
 
+<https://github.com/rancher/local-path-provisioner/tree/master/deploy/chart/local-path-provisioner>
+
+```bash
+# Download the updated template from github
+kubectl kustomize "github.com/rancher/local-path-provisioner/deploy?ref=v0.0.28" > local-path-provisioner/local-path-storage.yaml
+
+# Apply customizations (ssd/hdd storage, read write many support)
+kubectl kustomize local-path-provisioner | kubectl apply -f -
+
+# Create test pod
+kubectl apply -f k3s/tests/local-storage-test.yaml
+```
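Since the `ssd` class is marked as the cluster default in this commit (see `local-path-provisioner/ssd-storage.yaml`), any workload that should land on spinning disk has to name the `hdd` class explicitly. A minimal sketch of such a claim (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bulk-media # illustrative name
spec:
  storageClassName: hdd # opt out of the default "ssd" class
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
```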
 
 ## Coredns
 
 1. Edit `coredns/values.yaml` to ensure the forward nameserver is correct.
@@ -270,8 +270,7 @@ kubectl delete -f k3s/tests/ingress-nginx-test.yaml
 ## Test Minecraft Server
 
 ```bash
-helm upgrade --install minecraft ./helm/minecraft -n minecraft --create-namespace
-helm upgrade --install minecraft1 ./helm/minecraft -n minecraft --create-namespace
+helm upgrade --install minecraft ./minecraft -n minecraft --create-namespace
 ```
 
 ## Automatic Updates
@@ -279,28 +278,33 @@ helm upgrade --install minecraft1 ./helm/minecraft -n minecraft --create-namespa
 <https://docs.k3s.io/upgrades/automated>
 
 ```bash
+kubectl create namespace system-upgrade
 kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/system-upgrade-controller.yaml
 kubectl apply -f https://github.com/rancher/system-upgrade-controller/releases/latest/download/crd.yaml
 kubectl apply -f k3s/upgrade-plan.yaml
+
+# Check plan
+kubectl get plan -n system-upgrade
 ```
 
-## Manual Updates
+## Database Backups
 
-<https://docs.k3s.io/upgrades/manual#manually-upgrade-k3s-using-the-binary>
+<https://docs.k3s.io/cli/etcd-snapshot>
+
+Note, you must back up `/var/lib/rancher/k3s/server/token`
+and use the contents as the token when restoring the backup, as data is encrypted with that token.
+
+Backups are saved to `/var/lib/rancher/k3s/server/db/snapshots/` by default.
 
 ```bash
-sudo su -
-wget https://github.com/k3s-io/k3s/releases/download/v1.28.3%2Bk3s1/k3s
-systemctl stop k3s
-chmod +x k3s
-mv k3s /usr/local/bin/k3s
-systemctl start k3s
+k3s etcd-snapshot save
+k3s etcd-snapshot list
+
+k3s server \
+  --cluster-reset \
+  --cluster-reset-restore-path=/var/lib/rancher/k3s/server/db/snapshots/on-demand-kube-1720459685
 ```
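Because a restore needs the server token alongside the snapshot, it can help to archive the two together when taking a backup. A minimal sketch, assuming the default k3s paths shown above (the `bundle_backup` helper name and output directory are made up for illustration):

```shell
# bundle_backup <snapshot_dir> <token_file> <out_dir>
# Packs the newest etcd snapshot together with the server token into one tarball.
bundle_backup() {
  snap_dir=$1; token_file=$2; out_dir=$3
  mkdir -p "$out_dir"
  # newest snapshot by modification time
  latest="$(ls -1t "$snap_dir" | head -n 1)"
  tar -czf "$out_dir/$latest.tar.gz" \
    -C "$snap_dir" "$latest" \
    -C "$(dirname "$token_file")" "$(basename "$token_file")"
  echo "$out_dir/$latest.tar.gz"
}

# e.g.
# bundle_backup /var/lib/rancher/k3s/server/db/snapshots /var/lib/rancher/k3s/server/token /root/k3s-backups
```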
 
-## Create a Userspace
-
-This creates a user, namespace, and permissions with a simple script.
-
 ### Quickstart
 
 ```bash
@@ -314,194 +318,6 @@ This creates a user, namespace, and permissions with a simple script.
 ./removeuserspace <server_fqdn> <user>
 ```
 
-### Userspace
-
-#### Namespace
-
-```yaml
-apiVersion: v1
-kind: Namespace
-metadata:
-  name: {{ .Release.Name }}
-```
-
-#### Roles
-
-```yaml
-kind: Role
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
-  name: namespace-manager
-  namespace: {{ .Release.Name }}
-rules:
-  - apiGroups:
-      - ""
-      - extensions
-      - apps
-      - batch
-      - autoscaling
-      - networking.k8s.io
-      - traefik.containo.us
-      - rbac.authorization.k8s.io
-      - metrics.k8s.io
-    resources:
-      - deployments
-      - replicasets
-      - pods
-      - pods/exec
-      - pods/log
-      - pods/attach
-      - daemonsets
-      - statefulsets
-      - replicationcontrollers
-      - horizontalpodautoscalers
-      - services
-      - ingresses
-      - persistentvolumeclaims
-      - jobs
-      - cronjobs
-      - secrets
-      - configmaps
-      - serviceaccounts
-      - rolebindings
-      - ingressroutes
-      - middlewares
-      - endpoints
-    verbs:
-      - "*"
-  - apiGroups:
-      - ""
-      - metrics.k8s.io
-      - rbac.authorization.k8s.io
-    resources:
-      - resourcequotas
-      - roles
-    verbs:
-      - list
-```
-
-#### Rolebinding
-
-```yaml
-kind: RoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
-  namespace: {{ .Release.Name }}
-  name: namespace-manager
-subjects:
-  - kind: User
-    name: {{ .Release.Name }}
-    apiGroup: ""
-roleRef:
-  kind: ClusterRole
-  name: namespace-manager
-  apiGroup: ""
-```
-
-### Manual Steps
-
-#### Create a kubernetes certsigner pod
-
-This keeps the client-ca crt and key secret and allows the cert to be signed and stored on the pod
-
-#### Create the certsigner secret
-
-```bash
-kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
-```
-
-#### Set up the certsigner pod
-
-```bash
-scp certsigner.yaml <server>:~/certsigner.yaml
-kubectl apply -f certsigner.yaml
-```
-
-#### Generate a cert
-
-```bash
-export USER=<user>
-docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl genrsa -out /$USER/$USER.key 2048
-docker run -it -v $(pwd)/users/$USER:/$USER python:latest openssl req -new -key /$USER/$USER.key -out /$USER/$USER.csr -subj "/CN=$USER/O=user"
-```
-
-#### Create a new Userspace
-
-```bash
-helm template $USER ./namespace | kubectl --context admin apply -f -
-```
-
-#### Sign the cert
-
-```bash
-export USER=<user>
-kubectl --context admin cp $(pwd)/users/$USER/$USER.csr certsigner:/certs/$USER.csr
-kubectl --context admin exec -it --context admin certsigner -- openssl x509 -in /certs/$USER.csr -req -CA /keys/client-ca.crt -CAkey /keys/client-ca.key -CAcreateserial -out /certs/$USER.crt -days 5000
-kubectl --context admin cp certsigner:/certs/$USER.crt $(pwd)/users/$USER/$USER.crt
-```
-
-#### Add to the config
-
-```bash
-kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
-kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
-```
-
-#### Delete
-
-```bash
-kubectl config delete-context $USER
-helm template $USER ./namespace | kubectl --context admin delete -f -
-```
-
-### Signing a user cert - detailed notes
-
-NOTE: ca.crt and ca.key are in /var/lib/rancher/k3s/server/tls/client-ca.*
-
-```bash
-# First we create the credentials
-# /CN=<username> - the user
-# /O=<group> - the group
-
-# Navigate to the user directory
-export USER=<username>
-cd $USER
-
-# Generate a private key
-openssl genrsa -out $USER.key 2048
-# Check the key
-# openssl pkey -in ca.key -noout -text
-# Generate and send me the CSR
-# The "user" group is my default group
-openssl req -new -key $USER.key -out $USER.csr -subj "/CN=$USER/O=user"
-
-# Check the CSR
-# openssl req -in $USER.csr -noout -text
-# If satisfactory, sign the CSR
-# Copy from /var/lib/rancher/k3s/server/tls/client-ca.crt and client-ca.key
-openssl x509 -req -in $USER.csr -CA ../client-ca.crt -CAkey ../client-ca.key -CAcreateserial -out $USER.crt -days 5000
-# Review the certificate
-# openssl x509 -in $USER.crt -text -noout
-
-# Send back the crt
-# cp $USER.crt $USER.key ../server-ca.crt ~/.kube/
-kubectl config set-credentials $USER --client-certificate=$USER.crt --client-key=$USER.key
-kubectl config set-context $USER --cluster=mainframe --namespace=$USER --user=$USER
-
-# Now we create the namespace, rolebindings, and resource quotas
-# kubectl apply -f k8s/
-
-# Add the cluster
-# CA file can be found at https://3.14.3.100:6443/cacerts
-- cluster:
-    certificate-authority: server-ca.crt
-    server: https://3.14.3.100:6443
-  name: mainframe
-
-# Test if everything worked
-kubectl --context=$USER-context get pods
-```
 
 ## Help
 
 ### Troubleshooting
@@ -541,13 +357,6 @@ mount -t xfs /dev/zvol/enc0/dcsi/apps/pvc-d5090258-cf20-4f2e-a5cf-330ac00d0049 /
 umount /mnt/dcsi_pvc
 ```
 
-## Database Backups
-
-<https://docs.k3s.io/cli/etcd-snapshot>
-
-Note, you must backup `/var/lib/rancher/k3s/server/token`
-and use the contents as the token when restoring the backup as data is encrypted with that token.
-
 ## Uninstall
 
 ```bash
@@ -1,30 +0,0 @@
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: cluster-readonly
-rules:
-  - apiGroups:
-      - ""
-      - rbac.authorization.k8s.io
-      - storage.k8s.io
-      - networking.k8s.io
-      - traefik.containo.us
-    resources:
-      - storageclasses
-    verbs:
-      - list
-
----
-
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: cluster-readonly
-subjects:
-  - kind: Group
-    name: user
-    apiGroup: ""
-roleRef:
-  kind: ClusterRole
-  name: cluster-readonly
-  apiGroup: ""
@@ -1,16 +0,0 @@
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1
-metadata:
-  name: user-readonly
-rules:
-  - apiGroups:
-      - rbac.authorization.k8s.io
-      - storage.k8s.io
-      - networking.k8s.io
-    resources:
-      - clusterroles
-      - storageclasses
-      - ingressclasses
-    verbs:
-      - list
-      - watch
@@ -1,12 +0,0 @@
-#!/bin/bash
-
-# Use
-# ./setup.sh <server_fqdn>
-
-export SERVER=$1
-
-ssh -t $SERVER sudo kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
-scp certsigner.yaml $SERVER:~/certsigner.yaml
-ssh $SERVER kubectl apply -f certsigner.yaml
-scp clusterrole.yaml $SERVER:~/clusterrole.yaml
-ssh $SERVER kubectl apply -f clusterrole.yaml
k3s/tests/local-storage-test.yaml (new file, 71 lines)
@@ -0,0 +1,71 @@
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: ssd-test
+  namespace: default
+spec:
+  storageClassName: ssd
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 8Gi
+
+---
+
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: hdd-test
+  namespace: default
+spec:
+  storageClassName: hdd
+  accessModes:
+    - ReadWriteMany
+  resources:
+    requests:
+      storage: 8Gi
+
+---
+
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: local-storage-test
+  namespace: default
+spec:
+  selector:
+    matchLabels:
+      app: local-storage-test
+  template:
+    metadata:
+      labels:
+        app: local-storage-test
+    spec:
+      containers:
+        - image: debian
+          command:
+            - bash
+            - -c
+            - 'sleep infinity'
+          name: local-storage-test
+          volumeMounts:
+            - mountPath: /ssd
+              name: ssd
+            - mountPath: /hdd
+              name: hdd
+          resources:
+            limits:
+              memory: "4Gi"
+              cpu: "2"
+            requests:
+              memory: "1Mi"
+              cpu: "1m"
+      restartPolicy: Always
+      volumes:
+        - name: hdd
+          persistentVolumeClaim:
+            claimName: hdd-test
+        - name: ssd
+          persistentVolumeClaim:
+            claimName: ssd-test
@@ -16,4 +16,4 @@ spec:
   serviceAccountName: system-upgrade
   upgrade:
     image: rancher/k3s-upgrade
-    channel: https://update.k3s.io/v1-release/channels/latest
+    channel: https://update.k3s.io/v1-release/channels/stable
@@ -1,42 +0,0 @@
-# Server plan
-apiVersion: upgrade.cattle.io/v1
-kind: Plan
-metadata:
-  name: server-plan
-  namespace: system-upgrade
-spec:
-  concurrency: 1
-  cordon: true
-  nodeSelector:
-    matchExpressions:
-      - key: node-role.kubernetes.io/control-plane
-        operator: In
-        values:
-          - "true"
-  serviceAccountName: system-upgrade
-  upgrade:
-    image: rancher/k3s-upgrade
-    channel: https://update.k3s.io/v1-release/channels/latest
----
-# Agent plan
-apiVersion: upgrade.cattle.io/v1
-kind: Plan
-metadata:
-  name: agent-plan
-  namespace: system-upgrade
-spec:
-  concurrency: 1
-  cordon: true
-  nodeSelector:
-    matchExpressions:
-      - key: node-role.kubernetes.io/control-plane
-        operator: DoesNotExist
-  prepare:
-    args:
-      - prepare
-      - server-plan
-    image: rancher/k3s-upgrade
-  serviceAccountName: system-upgrade
-  upgrade:
-    image: rancher/k3s-upgrade
-    channel: https://update.k3s.io/v1-release/channels/latest
local-path-provisioner/ConfigMap-patch.yaml (new file, 13 lines)
@@ -0,0 +1,13 @@
+- op: replace # action
+  path: /data/config.json # resource we want to change
+  value: |-
+    {
+      "storageClassConfigs": {
+        "ssd": {
+          "sharedFileSystemPath": "/opt/local-path-provisioner/ssd"
+        },
+        "hdd": {
+          "sharedFileSystemPath": "/opt/local-path-provisioner/hdd"
+        }
+      }
+    }
local-path-provisioner/StorageClass-hdd-patch.yaml (new file, 3 lines)
@@ -0,0 +1,3 @@
+- op: replace # action
+  path: /metadata/name # resource we want to change
+  value: hdd # value we want to use for patching
local-path-provisioner/kustomization.yaml (new file, 19 lines)
@@ -0,0 +1,19 @@
+apiVersion: kustomize.config.k8s.io/v1beta1
+kind: Kustomization
+resources:
+  - local-path-storage.yaml
+  - ssd-storage.yaml
+patches:
+  - target:
+      group: storage.k8s.io
+      version: v1
+      kind: StorageClass
+      name: local-path
+    path: StorageClass-hdd-patch.yaml
+  - target:
+      group: ""
+      version: v1
+      kind: ConfigMap
+    path: ConfigMap-patch.yaml
+  - target: {}
+    path: namespace-patch.yaml
local-path-provisioner/local-path-storage.yaml (new file, 189 lines)
@@ -0,0 +1,189 @@
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: local-path-storage
+---
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: local-path
+provisioner: rancher.io/local-path
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
+---
+apiVersion: v1
+kind: ServiceAccount
+metadata:
+  name: local-path-provisioner-service-account
+  namespace: local-path-storage
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: Role
+metadata:
+  name: local-path-provisioner-role
+  namespace: local-path-storage
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - pods
+    verbs:
+      - get
+      - list
+      - watch
+      - create
+      - patch
+      - update
+      - delete
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRole
+metadata:
+  name: local-path-provisioner-role
+rules:
+  - apiGroups:
+      - ""
+    resources:
+      - nodes
+      - persistentvolumeclaims
+      - configmaps
+      - pods
+      - pods/log
+    verbs:
+      - get
+      - list
+      - watch
+  - apiGroups:
+      - ""
+    resources:
+      - persistentvolumes
+    verbs:
+      - get
+      - list
+      - watch
+      - create
+      - patch
+      - update
+      - delete
+  - apiGroups:
+      - ""
+    resources:
+      - events
+    verbs:
+      - create
+      - patch
+  - apiGroups:
+      - storage.k8s.io
+    resources:
+      - storageclasses
+    verbs:
+      - get
+      - list
+      - watch
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
+metadata:
+  name: local-path-provisioner-bind
+  namespace: local-path-storage
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: Role
+  name: local-path-provisioner-role
+subjects:
+  - kind: ServiceAccount
+    name: local-path-provisioner-service-account
+    namespace: local-path-storage
+---
+apiVersion: rbac.authorization.k8s.io/v1
+kind: ClusterRoleBinding
+metadata:
+  name: local-path-provisioner-bind
+roleRef:
+  apiGroup: rbac.authorization.k8s.io
+  kind: ClusterRole
+  name: local-path-provisioner-role
+subjects:
+  - kind: ServiceAccount
+    name: local-path-provisioner-service-account
+    namespace: local-path-storage
+---
+apiVersion: v1
+data:
+  config.json: |-
+    {
+      "nodePathMap":[
+        {
+          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
+          "paths":["/opt/local-path-provisioner"]
+        }
+      ]
+    }
+  helperPod.yaml: |-
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: helper-pod
+    spec:
+      priorityClassName: system-node-critical
+      tolerations:
+        - key: node.kubernetes.io/disk-pressure
+          operator: Exists
+          effect: NoSchedule
+      containers:
+        - name: helper-pod
+          image: busybox
+          imagePullPolicy: IfNotPresent
+  setup: |-
+    #!/bin/sh
+    set -eu
+    mkdir -m 0777 -p "$VOL_DIR"
+  teardown: |-
+    #!/bin/sh
+    set -eu
+    rm -rf "$VOL_DIR"
+kind: ConfigMap
+metadata:
+  name: local-path-config
+  namespace: local-path-storage
+---
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: local-path-provisioner
+  namespace: local-path-storage
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: local-path-provisioner
+  template:
+    metadata:
+      labels:
+        app: local-path-provisioner
+    spec:
+      containers:
+        - command:
+            - local-path-provisioner
+            - --debug
+            - start
+            - --config
+            - /etc/config/config.json
+          env:
+            - name: POD_NAMESPACE
+              valueFrom:
+                fieldRef:
+                  fieldPath: metadata.namespace
+            - name: CONFIG_MOUNT_PATH
+              value: /etc/config/
+          image: rancher/local-path-provisioner:v0.0.28
+          imagePullPolicy: IfNotPresent
+          name: local-path-provisioner
+          volumeMounts:
+            - mountPath: /etc/config/
+              name: config-volume
+      serviceAccountName: local-path-provisioner-service-account
+      volumes:
+        - configMap:
+            name: local-path-config
+          name: config-volume
local-path-provisioner/namespace-patch.yaml (new file, 3 lines)
@@ -0,0 +1,3 @@
+- op: replace # action
+  path: /metadata/namespace # resource we want to change
+  value: kube-system
local-path-provisioner/ssd-storage.yaml (new file, 9 lines)
@@ -0,0 +1,9 @@
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: ssd
+  annotations:
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: rancher.io/local-path
+reclaimPolicy: Delete
+volumeBindingMode: WaitForFirstConsumer
@@ -5,6 +5,7 @@ metadata:
   annotations:
     "helm.sh/resource-policy": keep
 spec:
+  storageClassName: ssd
   accessModes:
     - ReadWriteOnce
   resources:
147
ubuntu/README.md
Normal file
147
ubuntu/README.md
Normal file
@@ -0,0 +1,147 @@
# Ubuntu Server

- [Ubuntu Server](#ubuntu-server)
  - [Setup SSH](#setup-ssh)
  - [Fail2Ban](#fail2ban)
  - [Automatic Updates](#automatic-updates)
  - [Disable Swap](#disable-swap)
  - [Extras](#extras)

Note that these instructions differentiate between an `operator` and a `server`. The operator can be
any machine that configures the server; a pipeline, a laptop, a dedicated server, etc. are all options.
The server can be its own operator, though that's not recommended, since servers should be ephemeral
and the operator will store information about each server.

## Setup SSH

On the operator:

```bash
export SSH_HOST=kube
ssh-keygen -t rsa -b 4096 -C ducoterra@"$SSH_HOST".reeselink.com -f ~/.ssh/id_"$SSH_HOST"_rsa

# Note: if you get "too many authentication failures", it's likely because you have too many
# private keys in your ~/.ssh directory. Use `-o PubkeyAuthentication=no` to work around it.
ssh-copy-id -o PubkeyAuthentication=no -i ~/.ssh/id_"$SSH_HOST"_rsa.pub ducoterra@"$SSH_HOST".reeselink.com

cat <<EOF >> ~/.ssh/config

Host $SSH_HOST
    Hostname $SSH_HOST.reeselink.com
    User root
    ProxyCommand none
    ForwardAgent no
    ForwardX11 no
    Port 22
    TCPKeepAlive yes
    IdentityFile ~/.ssh/id_${SSH_HOST}_rsa
EOF

ssh -o PubkeyAuthentication=no ducoterra@"$SSH_HOST".reeselink.com
```
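One subtlety in the `~/.ssh/config` heredoc above: inside an unquoted heredoc, `$SSH_HOST` expands, but any quote characters around it are written to the file literally, so prefer `${SSH_HOST}` over `"$SSH_HOST"` there. A quick local demonstration:

```bash
# Inside an unquoted heredoc, variables expand but quotes are literal bytes,
# so "$SSH_HOST" leaves stray quotes in the generated config file.
SSH_HOST=kube
cat <<EOF
IdentityFile ~/.ssh/id_"$SSH_HOST"_rsa
IdentityFile ~/.ssh/id_${SSH_HOST}_rsa
EOF
# First line keeps the quotes: id_"kube"_rsa
# Second line is clean:        id_kube_rsa
```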

On the server:

```bash
# Copy authorized_keys to root
sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys

# Change your password
passwd

sudo su -
echo "PasswordAuthentication no" > /etc/ssh/sshd_config.d/01-prohibit-password.conf
echo '%sudo ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/01-nopasswd-sudo
# Restart sshd so the new config takes effect
systemctl restart ssh
```

On the operator:

```bash
# Confirm password authentication is now rejected (this should fail)
ssh -o PubkeyAuthentication=no ducoterra@"$SSH_HOST".reeselink.com

# Confirm that you can log into the server using the SSH config entry
ssh $SSH_HOST
```

## Fail2Ban

On the server:

```bash
apt update
apt install -y fail2ban

# Set up initial rules
cat <<EOF > /etc/fail2ban/jail.local
# Jail configuration additions for local installation

# Adjust the default configuration's default values
[DEFAULT]
# Optionally enter a trusted IP range that should never be banned
ignoreip = 2600:1700:1e6c:a81f::0/64
bantime = 6600
backend = auto

# The main configuration file defines all services but
# deactivates them by default. We have to activate those needed.
[sshd]
enabled = true
EOF

systemctl enable fail2ban --now
tail -f /var/log/fail2ban.log
```

## Automatic Updates

On the server:

```bash
apt install -y unattended-upgrades

systemctl enable --now unattended-upgrades.service
```

## Disable Swap

```bash
swapoff -a
```
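`swapoff -a` only disables swap until the next reboot; to keep it off permanently, the swap entry in `/etc/fstab` also needs to be commented out. A minimal sketch, shown against a throwaway copy for safety; on a real server the target would be `/etc/fstab` (back it up first):

```bash
# Demo fstab with one swap entry (hypothetical entries, not from this repo)
cat > /tmp/fstab.demo <<'EOF'
UUID=abcd-1234 / ext4 defaults 0 1
/swap.img none swap sw 0 0
EOF

# Comment out every uncommented line whose filesystem type is swap
sed -i -E 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.demo

cat /tmp/fstab.demo   # the swap line is now "#/swap.img none swap sw 0 0"
```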

## Extras

On the server:

```bash
# Install glances for system monitoring
apt install -y glances

# Install zsh with autocomplete and suggestions
apt install -y zsh zsh-autosuggestions zsh-syntax-highlighting

cat <<EOF > ~/.zshrc
# Basic settings
autoload bashcompinit && bashcompinit
autoload -U compinit; compinit
zstyle ':completion:*' menu select

# Prompt settings
autoload -Uz promptinit
promptinit
prompt redhat
PROMPT_EOL_MARK=

# Syntax Highlighting
source /usr/share/zsh-syntax-highlighting/zsh-syntax-highlighting.zsh
source /usr/share/zsh-autosuggestions/zsh-autosuggestions.zsh

### Custom Commands and Aliases ###
EOF

chsh -s $(which zsh) && chsh -s $(which zsh) ducoterra

# Cockpit (the web UI is socket-activated)
apt install -y cockpit
systemctl enable --now cockpit.socket
```

15 unifi/README.md Normal file
@@ -0,0 +1,15 @@
# Unifi Server

<https://help.ui.com/hc/en-us/articles/220066768-Updating-and-Installing-Self-Hosted-UniFi-Network-Servers-Linux>

## Install

```bash
apt-get update && apt-get install ca-certificates apt-transport-https
echo 'deb [ arch=amd64,arm64 ] https://www.ui.com/downloads/unifi/debian stable ubiquiti' | tee /etc/apt/sources.list.d/100-ubnt-unifi.list
wget -O /etc/apt/trusted.gpg.d/unifi-repo.gpg https://dl.ui.com/unifi/unifi-repo.gpg
wget -qO - https://www.mongodb.org/static/pgp/server-3.6.asc | apt-key add -
echo "deb [trusted=yes] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/3.6 multiverse" | tee /etc/apt/sources.list.d/mongodb-org-3.6.list
apt-get update && apt-get install -y unifi
```
@@ -3,7 +3,7 @@
 ## Updates
 
 ```bash
-ansible-playbook -i ansible/inventory.yaml updates/upgrade-kubernetes-nodes.yaml
-ansible-playbook -i ansible/inventory.yaml updates/upgrade-colors.yaml
+ansible-playbook -i ansible/inventory.yaml updates/upgrade-dnf.yaml
 ansible-playbook -i ansible/inventory.yaml updates/upgrade-apt.yaml
+ansible-playbook -i ansible/inventory.yaml updates/upgrade-pacman.yaml
 ```
@@ -1,5 +1,7 @@
 - name: Upgrade
-  hosts: apt
+  hosts:
+    - ubuntu
+    - raspbian
   tasks:
     - name: Update all packages to their latest version
       become: true

@@ -4,11 +4,10 @@
 
 ### Quickstart
 
-1. Start Docker
-2. Run createprojectspace.sh
 
 ```bash
-./createprojectspace.sh <server_fqdn> <username>
+./userspace/scripts/setup.sh
+./userspace/scripts/upsertuser.sh <server_fqdn> <username>
+./userspace/scripts/removeuser.sh <server_fqdn> <username>
 ```
 
 ### Update a user

40 userspace/certsigner.yaml Normal file
@@ -0,0 +1,40 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: certsigner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: certsigner
  template:
    metadata:
      labels:
        app: certsigner
    spec:
      containers:
        - name: certsigner
          image: python:latest
          command: ["cat"]
          tty: true
          resources:
            requests:
              memory: 1Mi
              cpu: 1m
            limits:
              memory: 100Mi
              cpu: 100m
          volumeMounts:
            - mountPath: /keys
              name: keys
            - mountPath: /certs
              name: certs
      volumes:
        - name: keys
          secret:
            secretName: certsigner
        - name: certs
          emptyDir:
            sizeLimit: 500Mi
      restartPolicy: Always
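This deployment just parks an idle container with the k3s client CA mounted at `/keys`; the signing itself would happen when you exec into the pod. A hedged sketch of the kind of session it enables, demonstrated here with a throwaway CA so it runs anywhere `openssl` is installed (inside the pod, `ca.crt`/`ca.key` would be `/keys/client-ca.crt` and `/keys/client-ca.key`, and output would go to `/certs`):

```bash
dir=$(mktemp -d)
cd "$dir"

# Throwaway stand-in for the mounted client CA
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-client-ca" -keyout ca.key -out ca.crt

# A user's key and CSR; in client certs, CN is the Kubernetes username and O the group
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=alice/O=dev" -keyout alice.key -out alice.csr

# Sign the CSR with the CA
openssl x509 -req -in alice.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 1 -out alice.crt

openssl verify -CAfile ca.crt alice.crt   # prints "alice.crt: OK"
```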

10 userspace/scripts/setup.sh Executable file
@@ -0,0 +1,10 @@
#!/bin/bash

# Usage:
# ./setup.sh <server_fqdn>

export SERVER=$1

ssh -t $SERVER kubectl -n kube-system create secret generic certsigner --from-file /var/lib/rancher/k3s/server/tls/client-ca.crt --from-file /var/lib/rancher/k3s/server/tls/client-ca.key
scp ./userspace/certsigner.yaml $SERVER:~/certsigner.yaml
ssh $SERVER kubectl apply -f certsigner.yaml
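If `setup.sh` is called without an argument, the `ssh` commands run against an empty host. A hypothetical hardening sketch (the `setup_precheck` name is not in the repo), written as a function so it can be exercised inline:

```bash
# Validate the server argument before any ssh/scp calls.
setup_precheck() {
    SERVER=${1:-}
    if [ -z "$SERVER" ]; then
        echo "Usage: ./setup.sh <server_fqdn>" >&2
        return 1
    fi
    echo "Configuring $SERVER"
}

setup_precheck            # prints the usage message and returns 1
setup_precheck kube       # prints "Configuring kube"
```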
@@ -119,7 +119,7 @@ exit 0
 fi
 
 echo "Templating namespace with helm and copying to server"
-helm template $KUBE_USER --set user=$KUBE_USER ./helm/namespace | ssh $SERVER "cat - > $SERVER_USER_DIR/namespace.yaml"
+helm template $KUBE_USER --set user=$KUBE_USER ./userspace | ssh $SERVER "cat - > $SERVER_USER_DIR/namespace.yaml"
 
 if [ $? -ne 0 ]; then
   echo "Failed to template namespace. Is helm installed?"
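One caveat with the `if [ $? -ne 0 ]` check after this pipeline: `$?` reflects only the last command (`ssh`), so a `helm template` failure would go unnoticed unless `pipefail` is set earlier in the script. A minimal bash demonstration:

```bash
# $? after a pipeline is the exit status of the LAST stage only.
false | cat
echo "default: $?"      # prints "default: 0"

# With pipefail, any failing stage makes the whole pipeline fail.
set -o pipefail
false | cat
echo "pipefail: $?"     # prints "pipefail: 1"
```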